WorldWideScience

Sample records for defined time points

  1. History and Point in Time in Enterprise Applications

    Directory of Open Access Journals (Sweden)

    Constantin Gelu APOSTOL

    2006-01-01

    The first part points out the main differences between temporal and non-temporal databases. The second part identifies the three main categories of time involved in database applications: user-defined time, valid time and transaction time, and discusses relevant solutions for their implementation, mainly from the point of view of database organization and the data-access level of enterprise applications. The final part is dedicated to the influences of historical data on the business logic and presentation levels of enterprise applications, and on application services such as security, workflow and reporting.
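
The three time categories named in this abstract can be made concrete with a small sketch. The following Python model is illustrative only (the class and field names are invented, not from the paper): valid time records when a fact holds in the real world, transaction time records when the database knew it, and a user-defined time would be any ordinary date column with no special semantics.

```python
from dataclasses import dataclass
from datetime import date, datetime
from typing import List, Optional

# Illustrative bitemporal record (names invented, not from the paper).
@dataclass
class SalaryRecord:
    employee_id: int
    salary: int
    valid_from: date           # valid time: fact holds from this date...
    valid_to: date             # ...up to (but excluding) this date
    tx_from: datetime          # transaction time: when the row was recorded
    tx_to: Optional[datetime]  # None = still part of current knowledge

def as_of(records: List[SalaryRecord], valid_on: date,
          known_at: datetime) -> List[SalaryRecord]:
    """Records valid on `valid_on`, as the database knew them at `known_at`."""
    return [
        r for r in records
        if r.valid_from <= valid_on < r.valid_to
        and r.tx_from <= known_at
        and (r.tx_to is None or known_at < r.tx_to)
    ]
```

A history query then fixes the transaction-time axis ("what did we believe on 1 Feb 2020?") while varying the valid-time axis, which is the kind of access pattern the abstract's data-access level has to support.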

  2. When should we recommend use of dual time-point and delayed time-point imaging techniques in FDG PET?

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Gang [Philadelphia VA Medical Center, Department of Radiology, Philadelphia, PA (United States); Hospital of the University of Pennsylvania, Department of Radiology, Philadelphia, PA (United States); Torigian, Drew A.; Alavi, Abass [Hospital of the University of Pennsylvania, Department of Radiology, Philadelphia, PA (United States); Zhuang, Hongming [Children's Hospital of Philadelphia, Department of Radiology, Philadelphia, PA (United States)

    2013-05-15

    FDG PET and PET/CT are now widely used in oncological imaging for tumor characterization, staging, restaging, and response evaluation. However, numerous benign etiologies may cause increased FDG uptake indistinguishable from that of malignancy. Multiple studies have shown that dual time-point imaging (DTPI) of FDG PET may be helpful in differentiating malignancy from benign processes. However, exceptions exist, and some studies have demonstrated significant overlap of FDG uptake patterns between benign and malignant lesions on delayed time-point images. In this review, we summarize our experience and opinions on the value of DTPI and delayed time-point imaging in oncology, with a review of the relevant literature. We believe that the major value of DTPI and delayed time-point imaging is the increased sensitivity due to continued clearance of background activity and continued FDG accumulation in malignant lesions, if the same diagnostic criteria (as in the initial standard single time-point imaging) are used. The specificity of DTPI and delayed time-point imaging depends on multiple factors, including the prevalence of malignancies, the patient population, and the cut-off values (either SUV or retention index) used to define a malignancy. Thus, DTPI and delayed time-point imaging would be more useful if performed for evaluation of lesions in regions with significant background activity clearance over time (such as the liver, the spleen, the mediastinum), and if used in the evaluation of the extent of tumor involvement rather than in the characterization of the nature of any specific lesion. Acute infectious and non-infectious inflammatory lesions remain as the major culprit for diminished diagnostic performance of these approaches (especially in tuberculosis-endemic regions). Tumor heterogeneity may also contribute to inconsistent performance of DTPI. 
The authors believe that selective use of DTPI and delayed time-point imaging will improve diagnostic accuracy and
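
The retention index referred to in this review is conventionally computed as the percent change in SUV between the early and delayed acquisitions. A minimal sketch (the example values are invented, and diagnostic cut-offs vary by site and are not taken from this review):

```python
def retention_index(suv_early: float, suv_delayed: float) -> float:
    """Percent change in SUV between early and delayed FDG PET acquisitions.
    Continued accumulation (positive RI) is the pattern the review associates
    with malignancy, subject to the caveats it discusses."""
    return 100.0 * (suv_delayed - suv_early) / suv_early

# Example (invented numbers): a lesion rising from SUVmax 3.0 to 3.6 has
# RI = +20%, while background organs such as the liver typically clear
# over time, giving a negative RI.
```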

  3. Defining the end-point of mastication: A conceptual model.

    Science.gov (United States)

    Gray-Stuart, Eli M; Jones, Jim R; Bronlund, John E

    2017-10-01

    The great risks of swallowing are choking and aspiration of food into the lungs. Both are rare in normally functioning humans, which is remarkable given the diversity of foods and the estimated 10 million swallows performed in a lifetime. Nevertheless, it remains a major challenge to define the food properties that are necessary to ensure a safe swallow. Here, the mouth is viewed as a well-controlled processor where mechanical sensory assessment occurs throughout the occlusion-circulation cycle of mastication. Swallowing is a subsequent action. It is proposed here that, during mastication, temporal maps of interfacial property data are generated, which the central nervous system compares against a series of criteria in order to be sure that the bolus is safe to swallow. To determine these criteria, an engineering hazard analysis tool, alongside an understanding of fluid and particle mechanics, is used to deduce the mechanisms by which food may deposit or become stranded during swallowing. These mechanisms define the food properties that must be avoided. By inverting the thinking, from hazards to ensuring safety, six criteria arise which are necessary for a safe-to-swallow bolus. A new conceptual model is proposed to define when food is safe to swallow during mastication. This significantly advances earlier mouth models. The conceptual model proposed in this work provides a framework of decision-making to define when food is safe to swallow. This will be of interest to designers of dietary foods and foods for dysphagia sufferers, and will aid the further development of mastication robots for preparation of artificial boluses for digestion research. It enables food designers to influence the swallow-point properties of their products. For example, a product may be designed to satisfy five of the criteria for a safe-to-swallow bolus, which means the sixth criterion and its attendant food properties define the swallow-point. Alongside other organoleptic factors, these

  4. Degree of an isolated real point or a singular complex point on a plane curve defined over Q

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2010-01-01

    Let $X$ be a curve in the affine plane defined by a reduced polynomial of degree $d$ with rational coefficients. Assume that $P$ is an isolated real point or a singular complex point on the curve $X$. The coordinates of $P$ are algebraic numbers over the rationals of degree at most $d^2$. The res...
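
A concrete example of such a point (not from the abstract, added for illustration): take the degree $d = 3$ curve

$$C:\; y^2 - x^2(x-1) = 0.$$

For real $(x, y)$ on $C$ we need $y^2 = x^2(x-1) \ge 0$, which forces $x \ge 1$ or $(x, y) = (0, 0)$, so the origin is an isolated real point (an acnode); it is also singular, since $\partial_x f = -3x^2 + 2x$ and $\partial_y f = 2y$ both vanish there. Its coordinates are rational, i.e. of degree $1$ over $\mathbb{Q}$, consistent with the stated bound $d^2 = 9$.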

  5. Point splitting in a curved space-time background

    International Nuclear Information System (INIS)

    Liggatt, P.A.J.; Macfarlane, A.J.

    1979-01-01

    A prescription is given for point splitting in a curved space-time background which is a natural generalization of that familiar in quantum electrodynamics and Yang-Mills theory. It is applied (to establish its validity) to the verification of the gravitational anomaly in the divergence of a fermion axial current. Notable features of the prescription are that it defines a point-split current that can be differentiated straightforwardly, and that it involves a natural way of averaging (four-dimensionally) over the directions of point splitting. The method can extend directly from the spin-1/2 fermion case treated to other cases, e.g., to spin-3/2 Rarita-Schwinger fermions. (author)

  6. Defining obesity cut-off points for migrant South Asians.

    Directory of Open Access Journals (Sweden)

    Laura J Gray

    Body mass index (BMI) and waist circumference (WC) are used to define cardiovascular and type 2 diabetes risk. We aimed to derive appropriate BMI and WC obesity cut-off points in a migrant South Asian population. 4688 White Europeans and 1333 South Asians resident in the UK, aged 40-75 years inclusive, were screened for type 2 diabetes. Principal components analysis was used to derive a glycaemia, a lipid, and a blood pressure factor. Regression models for each factor, adjusted for age and stratified by sex, were used to identify BMI and WC cut-off points in South Asians that correspond to those defined for White Europeans. For South Asian males, derived BMI obesity cut-off points equivalent to 30.0 kg/m² in White Europeans were 22.6 kg/m² (95% confidence interval (CI) 20.7 kg/m² to 24.5 kg/m²) for the glycaemia factor, 26.0 kg/m² (95% CI 24.7 kg/m² to 27.3 kg/m²) for the lipid factor, and 28.4 kg/m² (95% CI 26.5 kg/m² to 30.4 kg/m²) for the blood pressure factor. For WC, derived cut-off points for South Asian males equivalent to 102 cm in White Europeans were 83.8 cm (95% CI 79.3 cm to 88.2 cm) for the glycaemia factor, 91.4 cm (95% CI 86.9 cm to 95.8 cm) for the lipid factor, and 99.3 cm (95% CI 93.3 cm to 105.2 cm) for the blood pressure factor. Lower ethnicity-specific cut-off points were seen for females for both BMI and WC. Substantially lower obesity cut-off points are needed in South Asians to detect an equivalent level of dysglycemia and dyslipidemia as observed in White Europeans. South Asian ethnicity could be considered a similar level of risk as obesity (in White Europeans) for the development of type 2 diabetes.
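
The derivation of equivalent cut-offs described above can be sketched as follows, assuming a simple linear regression of each risk factor on BMI within each ethnic group (the study's actual models were additionally adjusted for age and stratified by sex; the function name and data below are illustrative, not from the paper):

```python
import numpy as np

def equivalent_cutoff(bmi_eur, factor_eur, bmi_sa, factor_sa,
                      eur_cutoff=30.0):
    """BMI in South Asians predicting the same risk-factor level as
    `eur_cutoff` (e.g. 30 kg/m^2) predicts in White Europeans."""
    # Fit risk-factor ~ BMI within each group (slope, intercept).
    b_e, a_e = np.polyfit(bmi_eur, factor_eur, 1)
    b_s, a_s = np.polyfit(bmi_sa, factor_sa, 1)
    target = a_e + b_e * eur_cutoff   # factor level at the European cut-off
    return (target - a_s) / b_s       # BMI giving that same level in South Asians
```

With a steeper factor-vs-BMI slope in South Asians, the derived cut-off lands below 30 kg/m², which is exactly the pattern the abstract reports.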

  7. Knee point search using cascading top-k sorting with minimized time complexity.

    Science.gov (United States)

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
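
A hedged sketch of the two ingredients discussed in the abstract: a partial top-k sort and a knee-point criterion. The knee definition below (the point farthest from the chord joining the sorted curve's endpoints) is a common geometric choice, not necessarily the one used by the authors, and the heap-based top-k stands in for their quicksort variant:

```python
import heapq

def top_k_desc(values, k):
    """Largest k values in descending order (partial selection, O(n log k))."""
    return heapq.nlargest(k, values)

def knee_index(sorted_desc):
    """Index of the point of maximum perpendicular distance from the chord
    joining the first and last points of a descending sorted curve."""
    n = len(sorted_desc)
    x0, y0, x1, y1 = 0, sorted_desc[0], n - 1, sorted_desc[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5
    dists = [abs(dy * i - dx * y + x1 * y0 - y1 * x0) / norm
             for i, y in enumerate(sorted_desc)]
    return max(range(n), key=dists.__getitem__)
```

The cascading idea in the paper amounts to calling a top-k selection with successively chosen k until the knee is confidently located, rather than fully sorting all points first.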

  8. Value of dual-time-point 18FDG PET-CT imaging on involved-field radiotherapy for hilar and mediastinal metastatic lymph nodes in non-small cell lung cancer

    International Nuclear Information System (INIS)

    Hu Man; Sun Xindong; Liu Ningbo; Gong Heyi; Fu Zheng; Ma Li; Li Xinke; Xu Xiaoqing; Yu Jinming

    2008-01-01

    Objective: To discuss the value of dual-time-point 18FDG PET-CT imaging in involved-field radiotherapy for hilar and mediastinal metastatic lymph nodes in patients with non-small cell lung cancer (NSCLC). Methods: Fifty-four patients with NSCLC were included in this analysis, comprising 34 men and 20 women with a mean age of 59 (range 34-76) years. Two sequential PET-CT scans, obtained 3-5 days before surgery, consisted of standard single-time-point whole-body imaging and delayed imaging of the thorax. The pathologic data were used as the gold standard to determine the difference between standard single-time-point and dual-time-point PET-CT imaging in the definition of the gross target volume (GTV) of involved-field radiotherapy for metastatic lymph nodes. Results: For hilar metastatic lymph nodes, the GTV defined by single-time-point imaging was consistent with the pathologic GTV in 21 patients (39%), compared with 31 patients (57%) for dual-time-point imaging. Using the pathologic data as the gold standard, the GTV alteration defined by single-time-point imaging differed significantly from that defined by dual-time-point imaging (u=519.00, P=0.023). For mediastinal metastatic lymph nodes, the GTV defined by single-time-point imaging was consistent with the pathologic GTV in 30 patients (56%), compared with 36 patients (67%) for dual-time-point imaging. Using the pathologic data as the gold standard, the GTV alteration defined by single-time-point imaging did not differ significantly from that defined by dual-time-point imaging (u=397.50, P=0.616). Conclusions: For patients with NSCLC receiving involved-field radiotherapy, the GTV defined for hilar and mediastinal metastatic lymph nodes by dual-time-point imaging is more consistent with the pathologic data. Dual-time-point imaging is therefore more valuable for target delineation of hilar and mediastinal metastatic lymph nodes. (authors)

  9. Definition of distance for nonlinear time series analysis of marked point process data

    Energy Technology Data Exchange (ETDEWEB)

    Iwayama, Koji, E-mail: koji@sat.t.u-tokyo.ac.jp [Research Institute for Food and Agriculture, Ryukoku Univeristy, 1-5 Yokotani, Seta Oe-cho, Otsu-Shi, Shiga 520-2194 (Japan); Hirata, Yoshito; Aihara, Kazuyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan)

    2017-01-30

    Marked point process data are time series of discrete events accompanied by some values, such as economic trades, earthquakes, and lightning strikes. A distance for marked point process data allows us to apply nonlinear time series analysis to such data. We propose a distance for marked point process data which can be calculated much faster than the existing distance when the number of marks is small. Furthermore, under some assumptions, the Kullback–Leibler divergences between posterior distributions for neighbors defined by this distance are small. We performed some numerical simulations showing that analysis based on the proposed distance is effective. - Highlights: • A new distance for marked point process data is proposed. • The distance can be computed fast enough for a small number of marks. • The method to optimize parameter values of the distance is also proposed. • Numerical simulations indicate that the analysis based on the distance is effective.
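
As an illustration of what a distance between marked point processes can look like, the sketch below extends a Victor-Purpura-style edit distance with a mark-mismatch cost. This is an assumed construction for illustration only, not the specific distance proposed in the paper:

```python
def marked_vp_distance(ev_a, ev_b, q=1.0, mark_cost=1.0):
    """Edit distance between two marked event sequences [(time, mark), ...]:
    inserting or deleting an event costs 1; matching two events costs
    q*|dt| for the time shift, plus mark_cost if their marks differ.
    Computed by standard O(len_a * len_b) dynamic programming."""
    na, nb = len(ev_a), len(ev_b)
    d = [[0.0] * (nb + 1) for _ in range(na + 1)]
    for i in range(1, na + 1):
        d[i][0] = i                     # delete all remaining events of a
    for j in range(1, nb + 1):
        d[0][j] = j                     # insert all remaining events of b
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            ta, ma = ev_a[i - 1]
            tb, mb = ev_b[j - 1]
            match = (d[i - 1][j - 1] + q * abs(ta - tb)
                     + (mark_cost if ma != mb else 0.0))
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, match)
    return d[na][nb]
```

With a distance like this in hand, standard nonlinear time series tools (nearest-neighbor prediction, recurrence analysis) become applicable to event data, which is the point the abstract makes.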

  10. Definably compact groups definable in real closed fields. I

    OpenAIRE

    Barriga, Eliana

    2017-01-01

    We study definably compact definably connected groups definable in a sufficiently saturated real closed field $R$. We introduce the notion of group-generic point for $\bigvee$-definable groups and show the existence of group-generic points for definably compact groups definable in a sufficiently saturated o-minimal expansion of a real closed field. We use this notion along with some properties of generic sets to prove that for every definably compact definably connected group $G$ definable in...

  11. Guidelines for time-to-event end-point definitions in trials for pancreatic cancer. Results of the DATECAN initiative (Definition for the Assessment of Time-to-event End-points in CANcer trials).

    Science.gov (United States)

    Bonnetain, Franck; Bonsing, Bert; Conroy, Thierry; Dousseau, Adelaide; Glimelius, Bengt; Haustermans, Karin; Lacaine, François; Van Laethem, Jean Luc; Aparicio, Thomas; Aust, Daniela; Bassi, Claudio; Berger, Virginie; Chamorey, Emmanuel; Chibaudel, Benoist; Dahan, Laeticia; De Gramont, Aimery; Delpero, Jean Robert; Dervenis, Christos; Ducreux, Michel; Gal, Jocelyn; Gerber, Erich; Ghaneh, Paula; Hammel, Pascal; Hendlisz, Alain; Jooste, Valérie; Labianca, Roberto; Latouche, Aurelien; Lutz, Manfred; Macarulla, Teresa; Malka, David; Mauer, Muriel; Mitry, Emmanuel; Neoptolemos, John; Pessaux, Patrick; Sauvanet, Alain; Tabernero, Josep; Taieb, Julien; van Tienhoven, Geertjan; Gourgou-Bourgade, Sophie; Bellera, Carine; Mathoulin-Pélissier, Simone; Collette, Laurence

    2014-11-01

    Using potential surrogate end-points for overall survival (OS), such as disease-free survival (DFS) or progression-free survival (PFS), is increasingly common in randomised controlled trials (RCTs). However, end-points are too often imprecisely defined, which largely contributes to a lack of homogeneity across trials and hampers comparison between them. The aim of the DATECAN (Definition for the Assessment of Time-to-event End-points in CANcer trials)-Pancreas project is to provide guidelines for the standardised definition of time-to-event end-points in RCTs for pancreatic cancer. Time-to-event end-points currently in use were identified from a literature review of pancreatic RCTs (2006-2009). Academic research groups were contacted and asked to select clinicians and methodologists to participate in the pilot and scoring groups (>30 experts). A consensus was built after two rounds of the modified Delphi formal consensus approach with the RAND scoring methodology (range: 1-9). For pancreatic cancer, 14 time-to-event end-points and 25 distinct event types, applied to two settings (detectable disease and/or no detectable disease), were considered relevant and included in the questionnaire sent to 52 selected experts. Thirty experts answered both scoring rounds. A total of 204 events distributed over the 14 end-points were scored. After the first round, consensus was reached for 25 items; after the second round, for 156 items; and after the face-to-face meeting, for 203 items. The formal consensus approach led to guidelines for standardised definitions of time-to-event end-points, allowing cross-comparison of RCTs in pancreatic cancer. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Dual-time-point Imaging and Delayed-time-point Fluorodeoxyglucose-PET/Computed Tomography Imaging in Various Clinical Settings

    DEFF Research Database (Denmark)

    Houshmand, Sina; Salavati, Ali; Antonsen Segtnan, Eivind

    2016-01-01

    The techniques of dual-time-point imaging (DTPI) and delayed-time-point imaging, which are mostly used for distinguishing between inflammatory and malignant diseases, have increased the specificity of fluorodeoxyglucose (FDG)-PET for diagnosis and prognosis of certain diseases. A gradually incr...

  13. The physical spacetime as a chronostat defining time. (Prolegomena to a future chronodynamics)

    International Nuclear Information System (INIS)

    Krolikowski, W.

    1993-01-01

    The familiar analogy, appearing in quantum theory, between the time evolution of an isolated system and the thermal equilibrium of a system with a thermostat, is taken at face value. This leads us to the phenomenological conjecture that, in reality, the so-called isolated system may remain in a ''temporal equilibrium'' with the physical spacetime, which then plays the role of a ''chronostat'' defining time equal at all space points (in a Minkowski frame of reference). Such a conjecture suggests virtual deviations from this equilibrium and so seems to imply an extension of the first law of thermodynamics as well as of the state equation in quantum theory. (author). 5 refs

  14. Point-splitting in a curved space-time background. 1 - Gravitational contribution to the axial anomaly

    International Nuclear Information System (INIS)

    Liggatt, P.A.J.; Macfarlane, A.J.

    1978-01-01

    A prescription is given for point-splitting in a curved space-time background which is a natural generalization of that familiar in quantum electrodynamics and Yang-Mills theory. It is applied (to establish its validity) to the verification of the gravitational anomaly in the divergence of a fermion axial current. Notable features of the prescription are that it defines a point-split current which can be differentiated straightforwardly, and that it involves a natural way of averaging (four-dimensionally) over the directions of point splitting. The method can extend directly from the spin-1/2 fermion case treated to other cases, e.g. to spin-3/2 Rarita-Schwinger fermions. (author)

  15. The timing of control signals underlying fast point-to-point arm movements.

    Science.gov (United States)

    Ghafouri, M; Feldman, A G

    2001-04-01

    It is known that proprioceptive feedback induces muscle activation when the facilitation of appropriate motoneurons exceeds their threshold. In the suprathreshold range, the muscle-reflex system produces torques depending on the position and velocity of the joint segment(s) that the muscle spans. The static component of the torque-position relationship is referred to as the invariant characteristic (IC). According to the equilibrium-point (EP) hypothesis, control systems produce movements by changing the activation thresholds and thus shifting the IC of the appropriate muscles in joint space. This control process upsets the balance between muscle and external torques at the initial limb configuration and, to regain the balance, the limb is forced to establish a new configuration or, if the movement is prevented, a new level of static torques. Taken together, the joint angles and the muscle torques generated at an equilibrium configuration define a single variable called the EP. Thus by shifting the IC, control systems reset the EP. Muscle activation and movement emerge following the EP resetting because of the natural physical tendency of the system to reach equilibrium. Empirical and simulation studies support the notion that the control IC shifts and the resulting EP shifts underlying fast point-to-point arm movements are gradual rather than step-like. However, controversies exist about the duration of these shifts. Some studies suggest that the IC shifts cease with the movement offset. Other studies propose that the IC shifts end early in comparison to the movement duration (approximately, at peak velocity). The purpose of this study was to evaluate the duration of the IC shifts underlying fast point-to-point arm movements. Subjects made fast (hand peak velocity about 1.3 m/s) planar arm movements toward different targets while grasping a handle. 
Hand forces applied to the handle and shoulder/elbow torques were, respectively, measured from a force sensor placed
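
The equilibrium-point machinery described in the abstract (resetting the activation threshold shifts the invariant characteristic, which moves the equilibrium) can be illustrated with a toy one-joint model. This is a hedged sketch in the spirit of the lambda model; the gain k and all numbers are invented for illustration:

```python
def ic_torque(theta: float, lam: float, k: float = 2.0) -> float:
    """Toy invariant characteristic (IC) of a flexor: zero torque below the
    activation threshold lam, restoring torque proportional to stretch
    above it (theta and lam in rad, torque in N*m)."""
    return -k * max(0.0, theta - lam)

def equilibrium_point(lam: float, load: float, k: float = 2.0) -> float:
    """Joint angle where muscle torque balances a constant external load,
    i.e. the solution of ic_torque(theta, lam, k) + load == 0 for theta > lam."""
    return lam + load / k

# Resetting the threshold lam shifts the whole IC, and with it the
# equilibrium point: with load 1.0 N*m, lam = 0.2 gives theta* = 0.7 rad,
# and lam = 0.8 gives theta* = 1.3 rad.
```

In these terms, the question the study addresses is the time course of lam(t) during a fast movement: whether the shift ends at movement offset or already around peak velocity.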

  16. User-Defined Clocks in the Real-Time Specification for Java

    DEFF Research Database (Denmark)

    Wellings, Andy; Schoeberl, Martin

    2011-01-01

    This paper analyses the new user-defined clock model that is to be supported in Version 1.1 of the Real-Time Specification for Java (RTSJ). The model is a compromise between the current position, where there is no support for user-defined clocks, and a fully integrated model. The paper investigat...

  17. A Custom Approach for a Flexible, Real-Time and Reliable Software Defined Utility.

    Science.gov (United States)

    Zaballos, Agustín; Navarro, Joan; Martín De Pozuelo, Ramon

    2018-02-28

    Information and communication technologies (ICTs) have enabled the evolution of traditional electric power distribution networks towards a new paradigm referred to as the smart grid. However, the different elements that compose the ICT plane of a smart grid are usually conceived as isolated systems that typically result in rigid hardware architectures, which are hard to interoperate, manage and adapt to new situations. In recent years, software-defined systems that take advantage of software and high-speed data network infrastructures have emerged as a promising alternative to classic ad hoc approaches in terms of integration, automation, real-time reconfiguration and resource reusability. The purpose of this paper is to propose the usage of software-defined utilities (SDUs) to address the latent deployment and management limitations of smart grids. More specifically, the implementation of a smart grid's data storage and management system prototype by means of SDUs is introduced, which exhibits the feasibility of this alternative approach. This system features a hybrid cloud architecture able to meet the data storage requirements of electric utilities and adapt itself to their ever-evolving needs. Conducted experimentations endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction.

  18. Homeless Point-In-Time (2007-2016)

    Data.gov (United States)

    City and County of Durham, North Carolina — These raw data sets contain Point-in-Time (PIT) estimates and national PIT estimates of homelessness as well as national estimates of homelessness by state and...

  19. Definably compact groups definable in real closed fields.II

    OpenAIRE

    Barriga, Eliana

    2017-01-01

    We continue the analysis of definably compact groups definable in a real closed field $\mathcal{R}$. In [3], we proved that for every definably compact definably connected semialgebraic group $G$ over $\mathcal{R}$ there are a connected $R$-algebraic group $H$ and a definable injective map $\phi$ from a generic definable neighborhood of the identity of $G$ into the group $H(R)$ of $R$-points of $H$ such that $\phi$ acts as a group homomorphism inside its domain. The above result and o...

  20. Guidelines for time-to-event end-point definitions in trials for pancreatic cancer. Results of the DATECAN initiative (Definition for the Assessment of Time-to-event End-points in CANcer trials)

    NARCIS (Netherlands)

    Bonnetain, Franck; Bonsing, Bert; Conroy, Thierry; Dousseau, Adelaide; Glimelius, Bengt; Haustermans, Karin; Lacaine, François; van Laethem, Jean Luc; Aparicio, Thomas; Aust, Daniela; Bassi, Claudio; Berger, Virginie; Chamorey, Emmanuel; Chibaudel, Benoist; Dahan, Laeticia; de Gramont, Aimery; Delpero, Jean Robert; Dervenis, Christos; Ducreux, Michel; Gal, Jocelyn; Gerber, Erich; Ghaneh, Paula; Hammel, Pascal; Hendlisz, Alain; Jooste, Valérie; Labianca, Roberto; Latouche, Aurelien; Lutz, Manfred; Macarulla, Teresa; Malka, David; Mauer, Muriel; Mitry, Emmanuel; Neoptolemos, John; Pessaux, Patrick; Sauvanet, Alain; Tabernero, Josep; Taieb, Julien; van Tienhoven, Geertjan; Gourgou-Bourgade, Sophie; Bellera, Carine; Mathoulin-Pélissier, Simone; Collette, Laurence

    2014-01-01

    Using potential surrogate end-points for overall survival (OS), such as disease-free survival (DFS) or progression-free survival (PFS), is increasingly common in randomised controlled trials (RCTs). However, end-points are too often imprecisely defined, which largely contributes to a lack of homogeneity across

  1. A Custom Approach for a Flexible, Real-Time and Reliable Software Defined Utility

    Science.gov (United States)

    2018-01-01

    Information and communication technologies (ICTs) have enabled the evolution of traditional electric power distribution networks towards a new paradigm referred to as the smart grid. However, the different elements that compose the ICT plane of a smart grid are usually conceived as isolated systems that typically result in rigid hardware architectures, which are hard to interoperate, manage and adapt to new situations. In recent years, software-defined systems that take advantage of software and high-speed data network infrastructures have emerged as a promising alternative to classic ad hoc approaches in terms of integration, automation, real-time reconfiguration and resource reusability. The purpose of this paper is to propose the usage of software-defined utilities (SDUs) to address the latent deployment and management limitations of smart grids. More specifically, the implementation of a smart grid's data storage and management system prototype by means of SDUs is introduced, which exhibits the feasibility of this alternative approach. This system features a hybrid cloud architecture able to meet the data storage requirements of electric utilities and adapt itself to their ever-evolving needs. Conducted experimentations endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction. PMID:29495599

  2. A Custom Approach for a Flexible, Real-Time and Reliable Software Defined Utility

    Directory of Open Access Journals (Sweden)

    Agustín Zaballos

    2018-02-01

    Information and communication technologies (ICTs) have enabled the evolution of traditional electric power distribution networks towards a new paradigm referred to as the smart grid. However, the different elements that compose the ICT plane of a smart grid are usually conceived as isolated systems that typically result in rigid hardware architectures, which are hard to interoperate, manage and adapt to new situations. In recent years, software-defined systems that take advantage of software and high-speed data network infrastructures have emerged as a promising alternative to classic ad hoc approaches in terms of integration, automation, real-time reconfiguration and resource reusability. The purpose of this paper is to propose the usage of software-defined utilities (SDUs) to address the latent deployment and management limitations of smart grids. More specifically, the implementation of a smart grid's data storage and management system prototype by means of SDUs is introduced, which exhibits the feasibility of this alternative approach. This system features a hybrid cloud architecture able to meet the data storage requirements of electric utilities and adapt itself to their ever-evolving needs. Conducted experimentations endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction.

  3. Attention flexibly trades off across points in time.

    Science.gov (United States)

    Denison, Rachel N; Heeger, David J; Carrasco, Marisa

    2017-08-01

    Sensory signals continuously enter the brain, raising the question of how perceptual systems handle this constant flow of input. Attention to an anticipated point in time can prioritize visual information at that time. However, how we voluntarily attend across time when there are successive task-relevant stimuli has barely been investigated. We developed a novel experimental protocol that allowed us to assess, for the first time, both the benefits and costs of voluntary temporal attention when perceiving a short sequence of two or three visual targets with predictable timing. We found that when humans directed attention to a cued point in time, their ability to perceive orientation was better at that time but also worse earlier and later. These perceptual tradeoffs across time are analogous to those found across space for spatial attention. We concluded that voluntary attention is limited, and selective, across time.

  4. Zero Point of Historical Time

    Directory of Open Access Journals (Sweden)

    R.S. Khakimov

    2014-02-01

    Historical studies are based on the assumption that there is a reference starting point of space-time: the zero point of the coordinate system. Due to bifurcation at the zero point, the course of social processes changes sharply and probabilistic causality replaces deterministic causality. For this reason, changes occur in the structure of social relations and the form of statehood, as well as in the course of ethnic processes. In this way a new discourse of national behavior emerges. With regard to the history of the Tatars and Tatarstan, such bifurcation points occurred in the periods of the formation (1) of the Turkic Khaganate, which existed from the 6th century onward and became a qualitatively new state system that reformatted old elements in a new matrix, introducing a new discourse of behavior; (2) of the Volga-Kama Bulgaria, where the rivers (Kama, Volga, Vyatka) became the most important trade routes determining the singularity of this state; here the nomadic culture was connected with the settled one, and Islam became the official religion in 922; and (3) of the Golden Horde, a powerful state with a remarkable system of communication, migration of huge human resources over thousands of kilometers, and extensive trade, which caused severe "mutations" in ethnic terms and a huge mixing of ethnic groups. Given the dwelling space of the Tatar population and its evolution within Russia, it can be argued that the zero point of Tatar history, which has conveyed its cultural invariants until today, begins in the Golden Horde. Neither in the Turkic Khaganate nor in the Bulgar state, but namely in the Golden Horde. Despite radical changes, the Russian Empire failed to transform the Tatars into Russians. Therefore, contemporary Tatars have preserved the Golden Horde tradition as a cultural invariant.

  5. Dual time point 18FDG-PET/CT versus single time point 18FDG-PET/CT for the differential diagnosis of pulmonary nodules - A meta-analysis

    International Nuclear Information System (INIS)

    Zhang, Li; Wang, Yinzhong; Lei, Junqiang; Tian, Jinhui; Zhai, Yanan

    2013-01-01

    Background: Lung cancer is one of the most common cancer types in the world. An accurate diagnosis of lung cancer is crucial for early treatment and management. Purpose: To perform a comprehensive meta-analysis to evaluate the diagnostic performance of dual time point 18F-fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) and single time point 18FDG-PET/CT in the diagnosis of pulmonary nodules. Material and Methods: PubMed (1966-2011.11), EMBASE (1974-2011.11), Web of Science (1972-2011.11), Cochrane Library (-2011.11), and four Chinese databases; CBM (1978-2011.11), CNKI (1994-2011.11), VIP (1989-2011.11), and Wanfang Database (1994-2011.11) were searched. Summary sensitivity, summary specificity, summary diagnostic odds ratios (DOR), and summary positive likelihood ratios (LR+) and negative likelihood ratios (LR-) were obtained using Meta-Disc software. Summary receiver-operating characteristic (SROC) curves were used to evaluate the diagnostic performance of dual time point 18FDG-PET/CT and single time point 18FDG-PET/CT. Results: The inclusion criteria were fulfilled by eight articles, with a total of 415 patients and 430 pulmonary nodules. Compared with the gold standard (pathology or clinical follow-up), the summary sensitivity of dual time point 18FDG-PET/CT was 79% (95%CI, 74.0 - 84.0%), and its summary specificity was 73% (95%CI, 65.0-79.0%); the summary LR+ was 2.61 (95%CI, 1.96-3.47), and the summary LR- was 0.29 (95%CI, 0.21 - 0.41); the summary DOR was 10.25 (95%CI, 5.79 - 18.14), and the area under the SROC curve (AUC) was 0.8244. The summary sensitivity for single time point 18FDG-PET/CT was 77% (95%CI, 71.9 - 82.3%), and its summary specificity was 59% (95%CI, 50.6 - 66.2%); the summary LR+ was 1.97 (95%CI, 1.32 - 2.93), and the summary LR- was 0.37 (95%CI, 0.29 - 0.49); the summary DOR was 6.39 (95%CI, 3.39 - 12.05), and the AUC was 0.8220. Conclusion: The results indicate that dual time point 18FDG-PET/CT and single time point 18FDG-PET/CT have comparable overall diagnostic performance for pulmonary nodules, with dual time point imaging showing somewhat higher specificity.
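The summary statistics in this record are linked by standard definitions. As a hedged illustration (pooled meta-analytic estimates are computed per study and then combined, so they need not reproduce these point formulas exactly), the likelihood ratios and diagnostic odds ratio implied by a single sensitivity/specificity pair can be computed as:

```python
# Illustrative arithmetic only, not the pooled meta-analysis itself.

def diagnostic_metrics(sensitivity: float, specificity: float):
    """Return (LR+, LR-, DOR) for a single sensitivity/specificity pair."""
    lr_pos = sensitivity / (1.0 - specificity)   # P(T+|disease) / P(T+|no disease)
    lr_neg = (1.0 - sensitivity) / specificity   # P(T-|disease) / P(T-|no disease)
    dor = lr_pos / lr_neg                        # diagnostic odds ratio
    return lr_pos, lr_neg, dor

# Summary sensitivity/specificity reported above for dual time point 18FDG-PET/CT
lr_pos, lr_neg, dor = diagnostic_metrics(0.79, 0.73)
```

Plugging in the reported dual time point summary values gives LR- ≈ 0.29 and DOR ≈ 10.2, close to the pooled figures quoted above; the LR+ from the point formula (≈ 2.9 versus the pooled 2.61) differs because the pooled value averages per-study ratios rather than applying the formula to the summary pair.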

  6. Method to Minimize the Low-Frequency Neutral-Point Voltage Oscillations With Time-Offset Injection for Neutral-Point-Clamped Inverters

    DEFF Research Database (Denmark)

    Choi, Ui-Min; Blaabjerg, Frede; Lee, Kyo-Beum

    2015-01-01

    This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time offset to the three-phase turn-on times. The proper time offset is simply calculated considering the phase currents and dwell time of small- and medium-voltage vectors. However, if the power factor is lower, there is a limitation to eliminating neutral-point oscillations. In this case, the proposed method can be improved by changing the switching sequence properly. Additionally, a method for neutral-point voltage balancing...

  7. Time Eigenstates for Potential Functions without Extremal Points

    Directory of Open Access Journals (Sweden)

    Gabino Torres-Vega

    2013-09-01

    Full Text Available In a previous paper, we introduced a way to generate a time coordinate system for classical and quantum systems when the potential function has extremal points. In this paper, we deal with the case in which the potential function has no extremal points at all, and we illustrate the method with the harmonic and linear potentials.

  8. Defining progression in nonmuscle invasive bladder cancer: it is time for a new, standard definition

    NARCIS (Netherlands)

    Lamm, D.; Persad, R.; Brausi, M.; Buckley, R.; Witjes, J.A.; Palou, J.; Bohle, A.; Kamat, A.M.; Colombel, M.; Soloway, M.

    2014-01-01

    PURPOSE: Despite being one of the most important clinical outcomes in nonmuscle invasive bladder cancer, there is currently no standard definition of disease progression. Major clinical trials and meta-analyses have used varying definitions or have failed to define this end point altogether. A

  9. Method to minimize the low-frequency neutral-point voltage oscillations with time-offset injection for neutral-point-clamped inverters

    DEFF Research Database (Denmark)

    Choi, Uimin; Lee, Kyo-Beum; Blaabjerg, Frede

    2013-01-01

    This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time offset to the three-phase turn-on times. The proper time offset is simply calculated considering the phase currents and dwell...

  10. A travel time forecasting model based on change-point detection method

    Science.gov (United States)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model is proposed for urban road traffic sensor data based on a change-point detection method. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the large sequence of travel time data items into several patterns; a travel time forecasting model is then established based on the autoregressive integrated moving average (ARIMA) model. By computer simulation, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
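The pipeline described in this abstract (differencing, change-point segmentation, weighted forecasting) can be sketched in simplified form. The threshold rule and recency-weighted mean below are illustrative stand-ins for the paper's adaptive change-point search and ARIMA fit, and the travel times are synthetic:

```python
# Hedged sketch: differencing -> change-point detection -> weighted forecast.

def first_difference(xs):
    return [b - a for a, b in zip(xs, xs[1:])]

def change_points(xs, threshold):
    """Indices where the first difference jumps by more than `threshold`."""
    return [i + 1 for i, d in enumerate(first_difference(xs)) if abs(d) > threshold]

def linear_weight_forecast(xs, window=3):
    """Forecast the next value as a recency-weighted mean of the last points."""
    tail = xs[-window:]
    weights = list(range(1, len(tail) + 1))   # newer observations weigh more
    return sum(w * x for w, x in zip(weights, tail)) / sum(weights)

# Synthetic travel times (minutes): a stable regime, then a congestion jump.
travel_times = [10.1, 10.3, 9.9, 10.2, 15.8, 16.1, 15.9, 16.3]
cps = change_points(travel_times, threshold=3.0)   # regime change at index 4
forecast = linear_weight_forecast(travel_times)    # forecast within the new regime
```

Segmenting at the detected change point before forecasting is what keeps the weighted fit from mixing the free-flow and congested regimes.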

  11. Nonlinear triple-point problems on time scales

    Directory of Open Access Journals (Sweden)

    Douglas R. Anderson

    2004-04-01

    Full Text Available We establish the existence of multiple positive solutions to the nonlinear second-order triple-point boundary-value problem on time scales, $$\displaylines{ u^{\Delta\nabla}(t)+h(t)f(t,u(t))=0, \cr u(a)=\alpha u(b)+\delta u^\Delta(a),\quad \beta u(c)+\gamma u^\Delta(c)=0 }$$ for $t\in[a,c]\subset\mathbb{T}$, where $\mathbb{T}$ is a time scale, $\beta, \gamma, \delta\ge 0$ with $\beta+\gamma>0$, $0

  12. Discrete-Time Mixing Receiver Architecture for RF-Sampling Software-Defined Radio

    NARCIS (Netherlands)

    Ru, Z.; Klumperink, Eric A.M.; Nauta, Bram

    2010-01-01

    Abstract—A discrete-time (DT) mixing architecture for RF-sampling receivers is presented. This architecture makes RF sampling more suitable for software-defined radio (SDR) as it achieves wideband quadrature demodulation and wideband harmonic rejection. The paper consists of two parts. In the first

  13. Change detection in polarimetric SAR data over several time points

    DEFF Research Database (Denmark)

    Conradsen, Knut; Nielsen, Allan Aasbjerg; Skriver, Henning

    2014-01-01

    A test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution is introduced. The test statistic is applied successfully to detect change in C-band EMISAR polarimetric SAR data over four time points.

  14. Defining Glaucomatous Optic Neuropathy from a Continuous Measure of Optic Nerve Damage - The Optimal Cut-off Point for Risk-factor Analysis in Population-based Epidemiology

    NARCIS (Netherlands)

    Ramdas, Wishal D.; Rizopoulos, Dimitris; Wolfs, Roger C. W.; Hofman, Albert; de Jong, Paulus T. V. M.; Vingerling, Johannes R.; Jansonius, Nomdo M.

    2011-01-01

    Purpose: Diseases characterized by a continuous trait can be defined by setting a cut-off point for the disease measure in question, accepting some misclassification. The 97.5th percentile is commonly used as a cut-off point. However, it is unclear whether this percentile is the optimal cut-off

  15. Fixed-Point Configurable Hardware Components

    Directory of Open Access Journals (Sweden)

    Rocher Romuald

    2006-01-01

    Full Text Available To reduce the gap between VLSI technology capability and designer productivity, design reuse based on IP (intellectual property) is commonly used. In terms of arithmetic accuracy, the generated architecture can generally only be configured through the input and output word lengths. In this paper, a new method to optimize fixed-point arithmetic IP is proposed. The architecture cost is minimized under accuracy constraints defined by the user. Our approach allows exploring the fixed-point search space and the algorithm-level search space to select the optimized structure and fixed-point specification. To significantly reduce the optimization and design times, analytical models are used for the fixed-point optimization process.

  16. Real-time estimation of FLE for point-based registration

    Science.gov (United States)

    Wiles, Andrew D.; Peters, Terry M.

    2009-02-01

    In image-guided surgery, optimizing the accuracy of localizing the surgical tools within the virtual reality environment or 3D image is vitally important, and significant effort has been spent reducing the measurement errors at the point of interest or target. This target registration error (TRE) is often defined by a root-mean-square statistic, which reduces the vector data to a single term that can be minimized. However, lost in the data reduction is the directionality of the error, which can be modelled using a 3D covariance matrix. Recently, we developed a set of expressions that model the TRE statistics for point-based registrations as a function of the fiducial marker geometry, target location, and fiducial localizer error (FLE). Unfortunately, these expressions are only as good as the definition of the FLE. In order to close the gap, we have subsequently developed a closed-form expression that estimates the FLE as a function of the estimated fiducial registration error (FRE, the error between the measured fiducials and the best-fit locations of those fiducials). The FRE covariance matrix is estimated using a sliding window technique and used as input into the closed-form expression to estimate the FLE. The estimated FLE can then be used to estimate the TRE, which can be given to the surgeon to permit the procedure to be designed such that the errors associated with the point-based registrations are minimized.
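The FRE mentioned in this record is the residual left over after a best-fit rigid point-based registration. A minimal sketch, using the standard SVD-based (Kabsch) alignment rather than the paper's closed-form FLE/TRE expressions, with synthetic fiducials and synthetic localizer noise:

```python
import numpy as np

def rigid_register(src, dst):
    """Best-fit rotation R and translation t mapping src -> dst (Kabsch)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def fre_rms(src, dst):
    """Root-mean-square fiducial registration error after best-fit alignment."""
    R, t = rigid_register(src, dst)
    residuals = dst - (src @ R.T + t)
    return np.sqrt((residuals ** 2).sum(axis=1).mean())

rng = np.random.default_rng(0)
fiducials = rng.uniform(-50, 50, size=(6, 3))        # marker geometry (mm)
noise = rng.normal(scale=0.2, size=fiducials.shape)  # stand-in for the FLE
measured = fiducials + noise
fre = fre_rms(fiducials, measured)                   # on the order of the FLE
```

The paper's point is the converse direction: from an observed FRE (estimated over a sliding window), infer the FLE covariance and then the TRE; this sketch only shows where the FRE itself comes from.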

  17. Uniqueness in time measurement

    International Nuclear Information System (INIS)

    Lorenzen, P.

    1981-01-01

    According to P. Janich a clock is defined as an apparatus in which a point ( hand ) is moving uniformly on a straight line ( path ). For the definition of uniformly first the scaling (as a constant ratio of velocities) is defined without clocks. Thereafter the uniqueness of the time measurement can be proved using the prove of scaling of all clocks. But the uniqueness can be defined without scaling, as it is pointed out here. (orig.) [de

  18. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter.

    Science.gov (United States)

    Liu, Wanli

    2017-03-08

    The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their combined applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on the iterative closest point (ICP) algorithm and an iterated sigma point Kalman filter (ISPKF), which combines the advantages of both. The ICP algorithm can precisely determine the unknown transformation between LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of the LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate that the time delay error can be accurately calibrated.
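The ICP half of this method can be illustrated with a minimal 2-D nearest-neighbour ICP loop; the time delay model and the iterated sigma point Kalman filter are beyond this sketch, and the point sets below are synthetic (a grid and a slightly rotated, shifted copy of it):

```python
import numpy as np

def best_rigid_2d(src, dst):
    """Least-squares rotation/translation mapping src -> dst (known pairs)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst.mean(0) - R @ src.mean(0)

def icp(src, dst, iters=10):
    """Iterate: match each point to its nearest neighbour, then re-fit."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_2d(cur, matches)
        cur = cur @ R.T + t
    return cur

theta = 0.02                                   # small misalignment, so the
R_true = np.array([[np.cos(theta), -np.sin(theta)],   # nearest-neighbour
                   [np.sin(theta),  np.cos(theta)]])  # matches start correct
dst = 2.0 * np.array([[x, y] for x in range(6) for y in range(6)], dtype=float)
src = (dst - np.array([0.1, 0.05])) @ R_true.T        # misaligned copy of dst
aligned = icp(src, dst)
err = np.abs(aligned - dst).max()              # shrinks to ~0 on convergence
```

ICP of this kind only recovers the spatial transformation; in the paper that transformation feeds the ISPKF, which then estimates the temporal offset.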

  19. Are self-report measures able to define individuals as physically active or inactive?

    NARCIS (Netherlands)

    Steene-Johannessen, J.; Anderssen, S.A.; Ploeg, H.P. van der; Hendriksen, I.J.M.; Donnelly, A.E.; Brage, S.; Ekelund, U.

    2016-01-01

    Purpose: Assess the agreement between commonly used self-report methods compared with objectively measured physical activity (PA) in defining the prevalence of individuals compliant with PA recommendations. Methods: Time spent in moderate and vigorous PA (MVPA) was measured at two time points in

  20. Title XVI / Supplemental Security Record Point In Time (SSRPT)

    Data.gov (United States)

    Social Security Administration — This is the point-in-time database to house temporary Supplemental Security Record (SSR) images produced during the course of the operating day before they can be...

  1. Statistically defining optimal conditions of coagulation time of skim milk

    International Nuclear Information System (INIS)

    Celebi, M.; Ozdemir, Z.O.; Eroglu, E.; Guney, I

    2014-01-01

    Milk consists largely of water along with various proteins. The kappa-casein among these milk proteins can be coagulated by Mucor miehei rennet, an aspartic protease that cleaves the 105 (phenylalanine)-106 (methionine) peptide bond. It is commonly used to clot milk proteins for cheese production in the dairy industry. The aim of this study was to measure the milk clotting time of skim milk using Mucor miehei rennet and to determine the optimal conditions for clotting time by mathematical modelling. In this research, milk clotting times of skim milk were measured at different pHs (3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and temperatures (20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75 degree C). A statistical approach was used to define the best pH and temperature for the milk clotting time of skim milk. Milk clotting activity increased at acidic pHs and high temperatures. (author)

  2. Visualizing Robustness of Critical Points for 2D Time-Varying Vector Fields

    KAUST Repository

    Wang, B.

    2013-06-01

    Analyzing critical points and their temporal evolutions plays a crucial role in understanding the behavior of vector fields. A key challenge is to quantify the stability of critical points: more stable points may represent more important phenomena or vice versa. The topological notion of robustness is a tool which allows us to quantify rigorously the stability of each critical point. Intuitively, the robustness of a critical point is the minimum amount of perturbation necessary to cancel it within a local neighborhood, measured under an appropriate metric. In this paper, we introduce a new analysis and visualization framework which enables interactive exploration of robustness of critical points for both stationary and time-varying 2D vector fields. This framework allows the end-users, for the first time, to investigate how the stability of a critical point evolves over time. We show that this depends heavily on the global properties of the vector field and that structural changes can correspond to interesting behavior. We demonstrate the practicality of our theories and techniques on several datasets involving combustion and oceanic eddy simulations and obtain some key insights regarding their stable and unstable features. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.

  4. Defining tipping points for social-ecological systems scholarship—an interdisciplinary literature review

    Science.gov (United States)

    Milkoreit, Manjana; Hodbod, Jennifer; Baggio, Jacopo; Benessaiah, Karina; Calderón-Contreras, Rafael; Donges, Jonathan F.; Mathias, Jean-Denis; Rocha, Juan Carlos; Schoon, Michael; Werners, Saskia E.

    2018-03-01

    The term tipping point has experienced explosive popularity across multiple disciplines over the last decade. Research on social-ecological systems (SES) has contributed to the growth and diversity of the term’s use. The diverse uses of the term obscure potential differences between tipping behavior in natural and social systems, and issues of causality across natural and social system components in SES. This paper aims to create the foundation for a discussion within the SES research community about the appropriate use of the term tipping point, especially the relatively novel term ‘social tipping point.’ We review existing literature on tipping points and similar concepts (e.g. regime shifts, critical transitions) across all spheres of science published between 1960 and 2016 with a special focus on a recent and still small body of work on social tipping points. We combine quantitative and qualitative analyses in a bibliometric approach, rooted in an expert elicitation process. We find that the term tipping point became popular after the year 2000—long after the terms regime shift and critical transition—across all spheres of science. We identify 23 distinct features of tipping point definitions and their prevalence across disciplines, but find no clear taxonomy of discipline-specific definitions. Building on the most frequently used features, we propose definitions for tipping points in general and social tipping points in SES in particular.

  5. Dual time-point FDG PET/CT for differentiating benign from ...

    African Journals Online (AJOL)

    Maximum standardized uptake values (SUVmax) with the greatest uptake in the lesion were calculated for two time points (SUV1 and SUV2), and the percentage change over time per lesion was calculated (%ΔSUV). Routine histological findings served as the gold standard. Results. Histological examination showed that 14 ...

  6. Evaluation of 2-point, 3-point, and 6-point Dixon magnetic resonance imaging with flexible echo timing for muscle fat quantification.

    Science.gov (United States)

    Grimm, Alexandra; Meyer, Heiko; Nickel, Marcel D; Nittka, Mathias; Raithel, Esther; Chaudry, Oliver; Friedberger, Andreas; Uder, Michael; Kemmler, Wolfgang; Quick, Harald H; Engelke, Klaus

    2018-06-01

    The purpose of this study is to evaluate and compare 2-point (2pt), 3-point (3pt), and 6-point (6pt) Dixon magnetic resonance imaging (MRI) sequences with flexible echo times (TE) to measure proton density fat fraction (PDFF) within muscles. Two subject groups were recruited (G1: 23 young and healthy men, 31 ± 6 years; G2: 50 elderly men, sarcopenic, 77 ± 5 years). A 3-T MRI system was used to perform Dixon imaging on the left thigh. PDFF was measured with six Dixon prototype sequences: 2pt, 3pt, and 6pt sequences once with optimal TEs (in- and opposed-phase echo times), lower resolution, and higher bandwidth (optTE sequences) and once with higher image resolution (highRes sequences) and shortest possible TE, respectively. Intra-fascia PDFF content was determined. To evaluate the comparability among the sequences, Bland-Altman analysis was performed. The highRes 6pt Dixon sequence served as the reference, as a high correlation of this sequence with magnetic resonance spectroscopy has been shown before. The PDFF differences between the highRes 6pt Dixon sequence and the optTE 6pt, both 3pt, and the optTE 2pt sequences were low (between 2.2% and 4.4%); however, this did not hold for the highRes 2pt Dixon sequence (33%). For the optTE sequences, the difference decreased with the number of echoes used. In conclusion, for Dixon sequences with more than two echoes, the fat fraction measurement was reliable with arbitrary echo times, while for 2pt Dixon sequences it was reliable with dedicated in- and opposed-phase echo timing. Copyright © 2018 Elsevier B.V. All rights reserved.
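The principle behind 2-point Dixon separation can be shown with the idealized signal model (in-phase = water + fat, opposed-phase = water − fat). This toy calculation ignores T2* decay, B0 inhomogeneity, and the flexible echo timing that the prototype sequences above are designed to handle:

```python
# Idealized 2-point Dixon fat-water separation (no T2*/B0 confounds).

def dixon_2pt_fat_fraction(ip: float, op: float) -> float:
    """Fat fraction from in-phase and opposed-phase signals.

    With IP = W + F and OP = W - F: W = (IP + OP) / 2, F = (IP - OP) / 2,
    and the fat fraction is F / (W + F).
    """
    water = (ip + op) / 2.0
    fat = (ip - op) / 2.0
    return fat / (water + fat)

# A voxel with water signal 90 and fat signal 10 (arbitrary units):
ip, op = 90 + 10, 90 - 10
ff = dixon_2pt_fat_fraction(ip, op)   # 0.1, i.e. a 10% fat fraction
```

This dependence on exact in- and opposed-phase echoes is why the study finds 2pt sequences reliable only with dedicated echo timing, whereas multi-echo fits tolerate arbitrary TEs.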

  7. Computationally determining the salience of decision points for real-time wayfinding support

    Directory of Open Access Journals (Sweden)

    Makoto Takemiya

    2012-06-01

    Full Text Available This study introduces the concept of computational salience to explain the discriminatory efficacy of decision points, which in turn may have applications to providing real-time assistance to users of navigational aids. This research compared algorithms for calculating the computational salience of decision points and validated the results via three methods: high-salience decision points were used to classify wayfinders; salience scores were used to weight a conditional probabilistic scoring function for real-time wayfinder performance classification; and salience scores were correlated with wayfinding-performance metrics. As an exploratory step to linking computational and cognitive salience, a photograph-recognition experiment was conducted. Results reveal a distinction between algorithms useful for determining computational and cognitive saliences. For computational salience, information about the structural integration of decision points is effective, while information about the probability of decision-point traversal shows promise for determining cognitive salience. Limitations from only using structural information and motivations for future work that include non-structural information are elicited.

  8. Age, sex and ethnic differences in the prevalence of underweight and overweight, defined by using the CDC and IOTF cut points in Asian culture

    Science.gov (United States)

    No nationally representative data from middle- and low-income countries have been analyzed to compare the prevalence of underweight and overweight, defined by using the Centers for Disease Control and Prevention (CDC), and the International Obesity TaskForce (IOTF) body mass index cut points. To exa...

  9. An elevated neutrophil-lymphocyte ratio is associated with adverse outcomes following single time-point paracetamol (acetaminophen) overdose: a time-course analysis.

    Science.gov (United States)

    Craig, Darren G; Kitto, Laura; Zafar, Sara; Reid, Thomas W D J; Martin, Kirsty G; Davidson, Janice S; Hayes, Peter C; Simpson, Kenneth J

    2014-09-01

    The innate immune system is profoundly dysregulated in paracetamol (acetaminophen)-induced liver injury. The neutrophil-lymphocyte ratio (NLR) is a simple bedside index with prognostic value in a number of inflammatory conditions. To evaluate the prognostic accuracy of the NLR in patients with significant liver injury following single time-point and staggered paracetamol overdoses, a time-course analysis was performed of 100 single time-point and 50 staggered paracetamol overdoses admitted to a tertiary liver centre. Timed laboratory samples were correlated with time elapsed after overdose or admission, respectively, and the NLR was calculated. A total of 49/100 single time-point patients developed hepatic encephalopathy (HE). Median NLRs were higher at both 72 (P=0.0047) and 96 h after overdose (P=0.0041) in single time-point patients who died or were transplanted. Maximum NLR values by 96 h were associated with increasing HE grade (P=0.0005). An NLR of more than 16.7 during the first 96 h following overdose was independently associated with the development of HE [odds ratio 5.65 (95% confidence interval 1.67-19.13), P=0.005]. Maximum NLR values by 96 h were also strongly associated with the requirement for intracranial pressure monitoring in single time-point paracetamol overdoses. Future studies should assess the value of incorporating the NLR into existing prognostic and triage indices of single time-point paracetamol overdose.
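The NLR itself is a one-line calculation. The sketch below uses the 16.7 cut-off reported in this abstract, with a hypothetical pair of counts chosen for illustration:

```python
# NLR = absolute neutrophil count / absolute lymphocyte count.
NLR_CUTOFF = 16.7  # threshold associated with hepatic encephalopathy above

def neutrophil_lymphocyte_ratio(neutrophils: float, lymphocytes: float) -> float:
    """NLR from absolute counts (e.g. x10^9 cells/L)."""
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes

# Hypothetical counts for a markedly inflamed patient:
nlr = neutrophil_lymphocyte_ratio(13.4, 0.7)
high_risk = nlr > NLR_CUTOFF
```

Note that the 16.7 cut-off was derived in this specific cohort; it is a study finding, not an established clinical decision threshold.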

  10. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, little of this work considers the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values or from getting stuck in local optima, as often accompanies conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
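The goal of minimizing estimate variance by choosing sampling times can be illustrated with a far simpler tool than the paper's quantum-inspired evolutionary algorithm: for a one-parameter decay model, exhaustively pick the pair of times maximizing the Fisher information about the parameter (the decay model, unit-noise assumption, and candidate grid below are assumptions of this sketch, not the paper's setup):

```python
import itertools, math

# For y(t) = exp(-k t) with unit Gaussian noise, the Fisher information about k
# is the sum of squared sensitivities dy/dk = -t * exp(-k t) at the sample times.
# Maximizing it minimizes the (asymptotic) variance of the estimate of k.

def fisher_information(times, k=1.0):
    return sum((t * math.exp(-k * t)) ** 2 for t in times)

candidates = [0.25 * i for i in range(1, 21)]   # grid of feasible times on (0, 5]
best_pair = max(itertools.combinations(candidates, 2), key=fisher_information)
```

The sensitivity |t e^{-kt}| peaks near t = 1/k, so the optimal samples cluster around one decay constant; sampling much earlier or later wastes measurements where the signal barely depends on k. Real pathway models have many parameters, which is what makes the design problem hard enough to need global optimizers.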

  11. Noise and time delay induce critical point in a bistable system

    Science.gov (United States)

    Zhang, Jianqiang; Nie, Linru; Yu, Lilong; Zhang, Xinyu

    2014-07-01

    We study the relaxation time Tc of a time-delayed bistable system driven by two cross-correlated Gaussian white noises, one multiplicative and the other additive. By means of numerical calculations, the results indicate that: (i) the combination of noise and time delay can induce two critical points in the relaxation time at a certain noise cross-correlation strength λ, under the condition that the multiplicative noise intensity D equals the additive noise intensity α. (ii) For each fixed D or α, there are two symmetrical critical points, located in the regions of positive and negative correlation, respectively. Namely, when λ equals the critical value λc, Tc is independent of the delay time and the plot of Tc versus τ is a horizontal line, but when |λ|>|λc| (or |λ|<|λc|), Tc increases (or decreases) as the delay time increases. (iii) For D = α, the change of λc with D traces two curves symmetrical about the axis λc = 0, and the critical value λc is close to zero for smaller D, approaching +1 or -1 for greater D.

  12. The cosmological origin of time asymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Castagnino, Mario [Instituto de Astronomia y Fisica del Espacio, Casilla de Correos 67, Sucursal 28, 1428 Buenos Aires (Argentina); Lara, Luis [Departamento de Fisica, Universidad Nacional de Rosario, Av. Pellegrini 250, 2000 Rosario (Argentina); Lombardi, Olimpia [CONICET - Universidad de Buenos Aires, Puan 470, 1406 Buenos Aires (Argentina)

    2003-01-21

    In this paper, we address the problem of the arrow of time from a cosmological point of view, rejecting the traditional entropic approach that defines the future direction of time as the direction of the entropy increase: from our perspective, the arrow of time has a global origin and it is an intrinsic, geometrical feature of spacetime. Time orientability and the existence of a cosmic time are necessary conditions for defining an arrow of time, which is manifested globally as the time asymmetry of the universe as a whole, and locally as a time-asymmetric energy flux. We also consider arrows of time of different origins (quantum, electromagnetic, thermodynamic, etc) showing that they can be non-conventionally defined only if the geometrical arrow is previously defined.

  13. End points for adjuvant therapy trials: has the time come to accept disease-free survival as a surrogate end point for overall survival?

    Science.gov (United States)

    Gill, Sharlene; Sargent, Daniel

    2006-06-01

    The intent of adjuvant therapy is to eradicate micro-metastatic residual disease following curative resection with the goal of preventing or delaying recurrence. The time-honored standard for demonstrating efficacy of new adjuvant therapies is an improvement in overall survival (OS). This typically requires phase III trials of large sample size with lengthy follow-up. With the intent of reducing the cost and time of completing such trials, there is considerable interest in developing alternative or surrogate end points. A surrogate end point may be employed as a substitute to directly assess the effects of an intervention on an already accepted clinical end point such as mortality. When used judiciously, surrogate end points can accelerate the evaluation of new therapies, resulting in the more timely dissemination of effective therapies to patients. The current review provides a perspective on the suitability and validity of disease-free survival (DFS) as an alternative end point for OS. Criteria for establishing surrogacy and the advantages and limitations associated with the use of DFS as a primary end point in adjuvant clinical trials and as the basis for approval of new adjuvant therapies are discussed.

  14. Measures and time points relevant for post-surgical follow-up in patients with inflammatory arthritis: a pilot study

    Directory of Open Access Journals (Sweden)

    Tägil Magnus

    2009-05-01

    Full Text Available Abstract Background Rheumatic diseases commonly affect joints and other structures in the hand. Surgery is a traditional way to treat hand problems in inflammatory rheumatic diseases, with the purposes of relieving pain, restoring function and preventing progression. There are numerous measures to choose from, and a combination of outcome measures is recommended. This study evaluated whether instruments commonly used in rheumatologic clinical practice are suitable to measure the outcome of hand surgery, and identified time points relevant for follow-up. Methods Thirty-one patients (median age 56 years, median disease duration 15 years) with inflammatory rheumatic disease and need for post-surgical occupational therapy intervention formed this pilot study group. Hand function was assessed regarding grip strength (Grippit), pain (VAS), range of motion (ROM; Signals of Functional Impairment, SOFI) and grip ability (Grip Ability Test, GAT). Activities of daily life (ADL) were assessed by means of the Disabilities of the Arm, Shoulder and Hand Outcome (DASH) and the Canadian Occupational Performance Measure (COPM). The instruments were evaluated by responsiveness and feasibility; follow-up points were 0, 3, 6 and 12 months. Results All instruments showed significant change at one or more follow-up points. Satisfaction with activities (COPM) showed the best responsiveness (SMR>0.8), while ROM measured with SOFI had low responsiveness at most follow-up time points. The responsiveness of the instruments was stable between the 6 and 12 month follow-ups, which implies that 6 months is an appropriate time for evaluating the short-term effect of hand surgery in rheumatic diseases. Conclusion We suggest a core set of instruments measuring pain, grip strength, grip ability, perceived symptoms and self-defined daily activities. This study has shown that VAS pain, the Grippit instrument, GAT, the DASH symptom scale and COPM are suitable outcome instruments for hand surgery, while SOFI may be a more insensitive

  15. Spatially heterogeneous dynamics investigated via a time-dependent four-point density correlation function

    DEFF Research Database (Denmark)

    Lacevic, N.; Starr, F. W.; Schrøder, Thomas

    2003-01-01

    Relaxation in supercooled liquids above their glass transition and below the onset temperature of "slow" dynamics involves the correlated motion of neighboring particles. This correlated motion results in the appearance of spatially heterogeneous dynamics or "dynamical heterogeneity." Traditional two-point time-dependent density correlation functions, while providing information about the transient "caging" of particles on cooling, are unable to provide sufficiently detailed information about correlated motion and dynamical heterogeneity. Here, we study a four-point, time-dependent density correlation function g4(r,t) and corresponding "structure factor" S4(q,t) which measure the spatial correlations between the local liquid density at two points in space, each at two different times, and so are sensitive to dynamical heterogeneity. We study g4(r,t) and S4(q,t) via molecular dynamics...

  16. Differences in night-time and daytime ambulatory blood pressure when diurnal periods are defined by self-report, fixed-times, and actigraphy: Improving the Detection of Hypertension study.

    Science.gov (United States)

    Booth, John N; Muntner, Paul; Abdalla, Marwah; Diaz, Keith M; Viera, Anthony J; Reynolds, Kristi; Schwartz, Joseph E; Shimbo, Daichi

    2016-02-01

    To determine whether defining diurnal periods by self-report, fixed-time, or actigraphy produces different estimates of night-time and daytime ambulatory blood pressure (ABP). Over a median of 28 days, 330 participants completed two 24-h ABP and actigraphy monitoring periods with sleep diaries. Fixed night-time and daytime periods were defined as 0000-0600 h and 1000-2000 h, respectively. Using the first ABP period, within-individual differences for mean night-time and daytime ABP, and kappa statistics for night-time and daytime hypertension (systolic/diastolic ABP ≥120/70 mmHg and ≥135/85 mmHg, respectively), were estimated comparing self-report, fixed-time, and actigraphy for defining diurnal periods. Reproducibility of ABP was also estimated. Within-individual mean differences in night-time systolic ABP were small, suggesting little bias, when comparing the three approaches used to define diurnal periods. The distribution of differences, represented by 95% confidence intervals (CI), in night-time systolic and diastolic ABP and daytime systolic and diastolic ABP was narrowest for self-report versus actigraphy. For example, the mean difference (95% CI) in night-time systolic ABP for self-report versus fixed-time was -0.53 (-6.61, +5.56) mmHg, for self-report versus actigraphy was 0.91 (-3.61, +5.43) mmHg, and for fixed-time versus actigraphy was 1.43 (-5.59, +8.46) mmHg. Agreement for night-time and daytime hypertension was highest for self-report versus actigraphy: kappa statistic (95% CI) = 0.91 (0.86, 0.96) and 1.00 (0.98, 1.00), respectively. The reproducibility of mean ABP and hypertension categories was similar using each approach. Given the high agreement with actigraphy, these data support using self-report to define diurnal periods in ABP monitoring. Further, the use of fixed-time periods may be a reasonable alternative approach.
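The two quantities reported above, within-individual mean differences and the kappa statistic for hypertension agreement, can be sketched as follows. The blood pressure values are hypothetical; only the ≥120 mmHg night-time systolic threshold is taken from the abstract:

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two binary classifications (e.g. night-time
    hypertension yes/no under two diurnal-period definitions)."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                      # observed agreement
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical night-time systolic ABP (mmHg) in 6 participants, with the
# diurnal period defined by self-report vs. actigraphy.
sbp_self = np.array([118, 125, 110, 132, 121, 115])
sbp_acti = np.array([117, 126, 111, 130, 122, 114])
mean_diff = (sbp_self - sbp_acti).mean()      # within-individual mean difference

htn_self = (sbp_self >= 120).astype(int)      # night-time hypertension, self-report
htn_acti = (sbp_acti >= 120).astype(int)      # night-time hypertension, actigraphy
kappa = cohens_kappa(htn_self, htn_acti)
```

In this toy sample the two definitions classify every participant identically, so kappa is 1.0 even though the individual readings differ slightly.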

  17. A point-based rendering approach for real-time interaction on mobile devices

    Institute of Scientific and Technical Information of China (English)

    LIANG XiaoHui; ZHAO QinPing; HE ZhiYing; XIE Ke; LIU YuBo

    2009-01-01

    Mobile devices are an important interactive platform. Due to limitations in computation, memory, display area and energy, how to realize efficient, real-time interaction with 3D models on mobile devices is an important research topic. Considering the features of mobile devices, this paper adopts a remote rendering mode and point models, and then proposes a transmission and rendering approach that supports real-time interaction. First, an improved simplification algorithm based on MLS and the display resolution of mobile devices is proposed. Then, a hierarchical selection of point models and a QoS transmission control strategy are given based on the operator's area of interest, the interest degree of objects in the virtual environment and the rendering error; these can save energy consumption. Finally, the rendering and interaction of point models are completed on mobile devices. The experiments show that our method is efficient.

  18. Accuracy of multi-point boundary crossing time analysis

    Directory of Open Access Journals (Sweden)

    J. Vogt

    2011-12-01

    Full Text Available Recent multi-spacecraft studies of solar wind discontinuity crossings using the timing (boundary plane triangulation method gave boundary parameter estimates that are significantly different from those of the well-established single-spacecraft minimum variance analysis (MVA technique. A large survey of directional discontinuities in Cluster data turned out to be particularly inconsistent in the sense that multi-point timing analyses did not identify any rotational discontinuities (RDs whereas the MVA results of the individual spacecraft suggested that RDs form the majority of events. To make multi-spacecraft studies of discontinuity crossings more conclusive, the present report addresses the accuracy of the timing approach to boundary parameter estimation. Our error analysis is based on the reciprocal vector formalism and takes into account uncertainties both in crossing times and in the spacecraft positions. A rigorous error estimation scheme is presented for the general case of correlated crossing time errors and arbitrary spacecraft configurations. Crossing time error covariances are determined through cross correlation analyses of the residuals. The principal influence of the spacecraft array geometry on the accuracy of the timing method is illustrated using error formulas for the simplified case of mutually uncorrelated and identical errors at different spacecraft. The full error analysis procedure is demonstrated for a solar wind discontinuity as observed by the Cluster FGM instrument.
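The timing (boundary plane triangulation) method discussed above solves for the boundary normal n and speed V from relative spacecraft positions and crossing times via the slowness vector m = n/V, using (r_i - r_0)·m = t_i - t_0. A minimal sketch with a synthetic four-spacecraft configuration; it omits the error analysis that is the paper's actual subject:

```python
import numpy as np

# Spacecraft positions (km): a hypothetical, non-degenerate configuration.
r = np.array([[0.0, 0.0, 0.0],
              [100.0, 0.0, 0.0],
              [0.0, 100.0, 0.0],
              [0.0, 0.0, 100.0]])

# Synthetic planar boundary: unit normal and constant speed (km/s).
n_true = np.array([1.0, 2.0, 2.0]) / 3.0
V_true = 50.0
t = r @ n_true / V_true                 # crossing time at each spacecraft

# Timing method: solve (r_i - r_0) . m = t_i - t_0 for the slowness m = n/V.
A = r[1:] - r[0]
b = t[1:] - t[0]
m = np.linalg.solve(A, b)
V_est = 1.0 / np.linalg.norm(m)         # boundary speed
n_est = m * V_est                       # boundary normal (unit vector)
```

With noise-free synthetic times the recovered normal and speed match the inputs exactly; in practice the crossing-time and position uncertainties treated in the paper propagate into m.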

  19. Impact of dual-time-point F-18 FDG PET/CT in the assessment of pleural effusion in patients with non-small-cell lung cancer.

    Science.gov (United States)

    Alkhawaldeh, Khaled; Biersack, Hans-J; Henke, Anna; Ezziddin, Samer

    2011-06-01

    The aim of this study was to assess the utility of dual-time-point F-18 fluorodeoxyglucose positron emission tomography (F-18 FDG PET) in differentiating benign from malignant pleural disease in patients with non-small-cell lung cancer. A total of 61 patients with non-small-cell lung cancer and pleural effusion were included in this retrospective study. All patients had whole-body FDG PET/CT imaging at 60 ± 10 minutes post-FDG injection, whereas 31 patients had second-time delayed imaging of the chest repeated at 90 ± 10 minutes. Maximum standardized uptake values (SUV(max)) and the average percent change in SUV(max) (%SUV) between time point 1 and time point 2 were calculated. Malignancy was defined using the following criteria: (1) visual assessment using a 3-point grading scale; (2) SUV(max) ≥2.4; (3) %SUV ≥ +9; and (4) SUV(max) ≥2.4 and/or %SUV ≥ +9. Analysis of variance test and receiver operating characteristic analysis were used in statistical analysis. P < 0.05 was considered significant. Follow-up revealed 29 patients with malignant pleural disease and 31 patients with benign pleural effusion. The average SUV(max) in malignant effusions was 6.5 ± 4 versus 2.2 ± 0.9 in benign effusions (P < 0.0001). The average %SUV in malignant effusions was +13 ± 10 versus -8 ± 11 in benign effusions (P < 0.0004). Sensitivity, specificity, and accuracy for the four criteria were as follows: (1) 86%, 72%, and 79%; (2) 93%, 72%, and 82%; (3) 67%, 94%, and 81%; (4) 100%, 94%, and 97%. Dual-time-point F-18 FDG PET can improve the diagnostic accuracy in differentiating benign from malignant pleural disease, with high sensitivity and good specificity.
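The combined criterion (4) above can be sketched directly. The thresholds come from the abstract; the function name and lesion values are illustrative:

```python
def classify_pleural_lesion(suv1, suv2, suv_cut=2.4, ri_cut=9.0):
    """Combined dual-time-point criterion from the abstract:
    call a lesion malignant if SUVmax >= 2.4 and/or %SUV change >= +9.

    suv1, suv2: SUVmax at ~60 and ~90 min post-injection.
    Returns (malignant, retention_index_percent).
    """
    retention_index = 100.0 * (suv2 - suv1) / suv1   # %SUV between time points
    malignant = (suv1 >= suv_cut) or (retention_index >= ri_cut)
    return malignant, retention_index

# Hypothetical lesions: one with rising uptake, one low and washing out.
m, ri_m = classify_pleural_lesion(3.5, 4.2)   # rising uptake
b, ri_b = classify_pleural_lesion(1.8, 1.6)   # washout
```

The rising lesion is flagged malignant (retention index +20%), the low, washing-out lesion benign, mirroring the pattern of the group averages reported above.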

  20. Bayesian inference for multivariate point processes observed at sparsely distributed times

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl; Møller, Jesper; Aukema, B.H.

    We consider statistical and computational aspects of simulation-based Bayesian inference for a multivariate point process which is only observed at sparsely distributed times. For specificity we consider a particular data set which has earlier been analyzed by a discrete time model involving unknown normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared to discrete time processes in the setting of the present paper as well as other spatial-temporal situations. Keywords: bark beetle, conditional intensity, forest entomology, Markov chain Monte Carlo...

  1. Multi-point probe for testing electrical properties and a method of producing a multi-point probe

    DEFF Research Database (Denmark)

    2011-01-01

    A multi-point probe for testing electrical properties of a number of specific locations of a test sample comprises a supporting body defining a first surface, a first multitude of conductive probe arms (101-101'''), each of the probe arms defining a proximal end and a distal end. The probe arms...... of contact with the supporting body, and a maximum thickness perpendicular to its perpendicular bisector and its line of contact with the supporting body. Each of the probe arms has a specific area or point of contact (111-111''') at its distal end for contacting a specific location among the number...... of specific locations of the test sample. At least one of the probe arms has an extension defining a pointing distal end providing its specific area or point of contact located offset relative to its perpendicular bisector....

  2. Free-time and fixed end-point multi-target optimal control theory: Application to quantum computing

    International Nuclear Information System (INIS)

    Mishima, K.; Yamashita, K.

    2011-01-01

    Graphical abstract: The two-state Deutsch-Jozsa algorithm is used to demonstrate the utility of free-time and fixed end-point multi-target optimal control theory. - Abstract: An extension of free-time and fixed end-point optimal control theory (FRFP-OCT) to monotonically convergent free-time and fixed end-point multi-target optimal control theory (FRFP-MTOCT) is presented. The features of our theory include optimization of the external time-dependent perturbations with high transition probabilities, optimization of the temporal duration, monotonic convergence, and the ability to optimize multiple laser pulses simultaneously. The advantage of the theory and a comparison with conventional fixed-time and fixed end-point multi-target optimal control theory (FIFP-MTOCT) are presented by comparing data calculated using the present theory with those published previously [K. Mishima, K. Yamashita, Chem. Phys. 361 (2009) 106]. The qubit system of our interest consists of two polar NaCl molecules coupled by dipole-dipole interaction. The calculation examples show that our theory is useful for minor adjustment of the external fields.

  3. MeV gamma-ray observation with a well-defined point spread function based on electron tracking

    Science.gov (United States)

    Takada, A.; Tanimori, T.; Kubo, H.; Mizumoto, T.; Mizumura, Y.; Komura, S.; Kishimoto, T.; Takemura, T.; Yoshikawa, K.; Nakamasu, Y.; Matsuoka, Y.; Oda, M.; Miyamoto, S.; Sonoda, S.; Tomono, D.; Miuchi, K.; Kurosawa, S.; Sawano, T.

    2016-07-01

    The field of MeV gamma-ray astronomy has not opened up until recently owing to imaging difficulties. Compton telescopes and coded-aperture imaging cameras are used as conventional MeV gamma-ray telescopes; however, their observations are obstructed by huge backgrounds, leading to uncertainty in the point spread function (PSF). Imaging with conventional MeV gamma-ray telescopes relies on optimization algorithms such as the ML-EM method, making it difficult to define the correct PSF, which is the uncertainty of a gamma-ray image on the celestial sphere. Recently, we have defined and evaluated the PSF of an electron-tracking Compton camera (ETCC) and a conventional Compton telescope, and thereby obtained an important result: the PSF strongly depends on the precision of the electron recoil direction (scatter plane deviation, SPD) and is not equal to the angular resolution measure (ARM). Now, we are constructing a 30 cm-cubic ETCC for a second balloon experiment, the Sub-MeV gamma ray Imaging Loaded-on-balloon Experiment (SMILE-II). The current ETCC has an effective area of 1 cm2 at 300 keV, a PSF of 10° at FWHM for 662 keV, and a large field of view of 3 sr. We will upgrade this ETCC to have an effective area of several cm2 and a PSF of 5° using a CF4-based gas. Using the upgraded ETCC, our observation plan for SMILE-II is to map the electron-positron annihilation line and the 1.8 MeV line from 26Al. In this paper, we report on the current performance of the ETCC and on our observation plan.

  4. Defining the therapeutic time window for suppressing the inflammatory prostaglandin E2 signaling after status epilepticus

    Science.gov (United States)

    Du, Yifeng; Kemper, Timothy; Qiu, Jiange; Jiang, Jianxiong

    2016-01-01

    Neuroinflammation is a common feature in nearly all neurological and some psychiatric disorders. Resembling its extraneural counterpart, neuroinflammation can be both beneficial and detrimental depending on the responding molecules. The overall effect of inflammation on disease progression is highly dependent on the extent of inflammatory mediator production and the duration of inflammatory induction. The time-dependent aspect of inflammatory responses suggests that the therapeutic time window for quelling neuroinflammation might vary with molecular targets and injury types. Therefore, it is important to define the therapeutic time window for anti-inflammatory therapeutics, as contradicting or negative results might arise when different treatment regimens are utilized even in similar animal models. Herein, we discuss a few critical factors that can help define the therapeutic time window and optimize treatment paradigm for suppressing the cyclooxygenase-2/prostaglandin-mediated inflammation after status epilepticus. These determinants should also be relevant to other anti-inflammatory therapeutic strategies for the CNS diseases. PMID:26689339

  5. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    Energy Technology Data Exchange (ETDEWEB)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: david.goes@poli.ufrj.br, E-mail: aquilino@lmp.ufrj.br, E-mail: alessandro@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear

    2017-11-01

    Point reactor kinetics equations are the easiest way to observe the neutron population time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to revise the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to match the real scenario. (author)
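The classic point kinetics equations solved by finite differences, as in the study above, can be sketched for a single delayed-neutron group: dn/dt = ((ρ-β)/Λ)n + λC and dC/dt = (β/Λ)n - λC. The kinetic parameters below are illustrative, not those of the paper:

```python
# One-delayed-group point kinetics integrated with explicit finite differences.
# Parameters are illustrative: delayed fraction, precursor decay constant (1/s),
# mean generation time (s).
beta, lam, Lambda = 0.0065, 0.08, 1e-4
dt, steps = 1e-5, 20000                  # 0.2 s of simulated time per phase

def step(n, C, rho):
    dn = ((rho - beta) / Lambda) * n + lam * C
    dC = (beta / Lambda) * n - lam * C
    return n + dt * dn, C + dt * dC

n, C = 1.0, beta / (lam * Lambda)        # start at the critical equilibrium
for _ in range(steps):
    n, C = step(n, C, rho=0.0)           # critical reactor: population stays flat
n_flat = n

for _ in range(steps):
    n, C = step(n, C, rho=0.001)         # +100 pcm step: prompt jump, then slow rise
```

At zero reactivity the population holds at its equilibrium; after the small positive step it jumps promptly toward β/(β-ρ) ≈ 1.18 and then rises slowly on the delayed-neutron time scale.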

  6. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    International Nuclear Information System (INIS)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C.

    2017-01-01

    Point reactor kinetics equations are the easiest way to observe the neutron population time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to revise the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to match the real scenario. (author)

  7. Different Ultimate Factors Define Timing of Breeding in Two Related Species.

    Directory of Open Access Journals (Sweden)

    Veli-Matti Pakanen

    Full Text Available Correct reproductive timing is crucial for fitness. Breeding phenology even in similar species can differ due to different selective pressures on the timing of reproduction. These selection pressures define species' responses to warming springs. The temporal match-mismatch hypothesis suggests that the timing of breeding in animals is selected to match food availability (synchrony). Alternatively, time-dependent breeding success (the date hypothesis) can result from other seasonally deteriorating ecological conditions such as intra- or interspecific competition or predation. We studied the effects of two ultimate factors on the timing of breeding, synchrony and other time-dependent factors (time-dependence), in sympatric populations of two related forest-dwelling passerine species, the great tit (Parus major) and the willow tit (Poecile montanus), by modelling recruitment with long-term capture-recapture data. We hypothesized that these two factors have different relevance for fitness in these species. We found that local recruitment in both species showed quadratic relationships with both time-dependence and synchrony. However, the importance of these factors was markedly different between the studied species. Caterpillar food played a predominant role in predicting the timing of breeding of the great tit. In contrast, for the willow tit, time-dependence modelled as timing in relation to conspecifics was more important for local recruitment than synchrony. High caterpillar biomass experienced during the pre- and post-fledging periods increased local recruitment of both species. These contrasting results confirm that these species experience different selective pressures on the timing of breeding, and hence responses to climate change may differ. Detailed information about life-history strategies is required to understand the effects of climate change, even in closely related taxa.
The temporal match-mismatch hypothesis should be extended to consider

  8. Novel technique for prediction of time points for scheduling of multipurpose batch plants

    CSIR Research Space (South Africa)

    Seid, R

    2012-01-01

    Full Text Available Consequently, this avoids costly computational times due to iterations. In the model by Majozi and Zhu (2001), for the sequence constraint that pertains to tasks that consume and produce the same state, the starting time of the consuming task at time point p must...

  9. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.

    Science.gov (United States)

    Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman

    2017-10-18

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, and dangerous, and the quality and quantity of the data are sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology has been developed for automatically deriving change displacement rates in a horizontal direction, based on comparisons between landslide scarps extracted from multiple time periods. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.
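The core comparison step, matching scarp points across epochs and deriving horizontal displacement rates, can be sketched as follows. The brute-force nearest-neighbour search and the sample coordinates are illustrative simplifications of the paper's method (a k-d tree would be used at scale):

```python
import numpy as np

def horizontal_displacement_rates(epoch_a, epoch_b, dt_years):
    """For each point in epoch A, find its nearest neighbour in epoch B and
    report the horizontal (x, y) displacement rate, ignoring elevation."""
    a, b = np.asarray(epoch_a, float), np.asarray(epoch_b, float)
    # Pairwise squared distances, shape (len(a), len(b)); brute force for clarity.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)
    nearest = b[d2.argmin(axis=1)]
    horiz = np.linalg.norm((nearest - a)[:, :2], axis=1)   # drop the z component
    return horiz / dt_years

# Hypothetical scarp points (x, y, z in metres) from two epochs one year apart,
# with the whole scarp shifted 0.5 m east and settled 0.1 m.
epoch1 = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 10.2], [2.0, 0.0, 10.1]])
epoch2 = epoch1 + np.array([0.5, 0.0, -0.1])
rates = horizontal_displacement_rates(epoch1, epoch2, dt_years=1.0)
```

For this synthetic rigid shift every point reports the same 0.5 m/yr horizontal rate; real scarps produce a spatially varying field of rates.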

  10. Modified mean generation time parameter in the neutron point kinetics equations

    Energy Technology Data Exchange (ETDEWEB)

    Diniz, Rodrigo C.; Gonçalves, Alessandro C.; Rosa, Felipe S.S., E-mail: alessandro@nuclear.ufrj.br, E-mail: frosa@if.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    This paper proposes an approximation for the modified point kinetics equations proposed by Nunes et al. (2015) through the adjustment of a kinetic parameter. This approximation consists of analyzing the terms of the modified point kinetics equations in order to identify the least important ones for the solution, resulting in a modification of the mean generation time parameter that incorporates all influences of the additional terms of the modified kinetics. This approximation is applied to the inverse kinetics, and the results are compared with the inverse kinetics from the modified kinetics in order to validate the proposed model. (author)

  11. Modified mean generation time parameter in the neutron point kinetics equations

    International Nuclear Information System (INIS)

    Diniz, Rodrigo C.; Gonçalves, Alessandro C.; Rosa, Felipe S.S.

    2017-01-01

    This paper proposes an approximation for the modified point kinetics equations proposed by Nunes et al. (2015) through the adjustment of a kinetic parameter. This approximation consists of analyzing the terms of the modified point kinetics equations in order to identify the least important ones for the solution, resulting in a modification of the mean generation time parameter that incorporates all influences of the additional terms of the modified kinetics. This approximation is applied to the inverse kinetics, and the results are compared with the inverse kinetics from the modified kinetics in order to validate the proposed model. (author)

  12. Defining local food

    DEFF Research Database (Denmark)

    Eriksen, Safania Normann

    2013-01-01

    Despite evolving local food research, there is no consistent definition of "local food." Various understandings are utilized, which have resulted in a diverse landscape of meaning. The main purpose of this paper is to examine how researchers within the local food systems literature define local food, and how these definitions can be used as a starting point to identify a new taxonomy of local food based on three domains of proximity.

  13. The relative timing between eye and hand rapid sequential pointing is affected by time pressure, but not by advance knowledge

    NARCIS (Netherlands)

    Deconinck, F.; van Polanen, V.; Savelsbergh, G.J.P.; Bennett, S.

    2011-01-01

    The present study examined the effect of timing constraints and advance knowledge on eye-hand coordination strategy in a sequential pointing task. Participants were required to point at two successively appearing targets on a screen while the inter-stimulus interval (ISI) and the trial order were

  14. Portable Dew Point Mass Spectrometry System for Real-Time Gas and Moisture Analysis

    Science.gov (United States)

    Arkin, C.; Gillespie, Stacey; Ratzel, Christopher

    2010-01-01

    A portable instrument incorporates both mass spectrometry and dew point measurement to provide real-time, quantitative gas measurements of helium, nitrogen, oxygen, argon, and carbon dioxide, along with real-time, quantitative moisture analysis. The Portable Dew Point Mass Spectrometry (PDP-MS) system comprises a single quadrupole mass spectrometer and a high vacuum system consisting of a turbopump and a diaphragm-backing pump. A capacitive membrane dew point sensor was placed upstream of the MS, but still within the pressure-flow control pneumatic region. Pressure-flow control was achieved with an upstream precision metering valve, a capacitance diaphragm gauge, and a downstream mass flow controller. User configurable LabVIEW software was developed to provide real-time concentration data for the MS, dew point monitor, and sample delivery system pressure control, pressure and flow monitoring, and recording. The system has been designed to include in situ, NIST-traceable calibration. Certain sample tubing retains sufficient water that even if the sample is dry, the sample tube will desorb water to an amount resulting in moisture concentration errors up to 500 ppm for as long as 10 minutes. It was determined that Bev-A-Line IV was the best sample line to use. As a result of this issue, it is prudent to add a high-level humidity sensor to PDP-MS so such events can be prevented in the future.

  15. Defining human death: an intersection of bioethics and metaphysics.

    Science.gov (United States)

    Manninen, Bertha Alvarez

    2009-01-01

    For many years now, bioethicists, physicians, and others in the medical field have disagreed concerning how to best define human death. Different theories range from the Harvard Criteria of Brain Death, which defines death as the cessation of all brain activity, to the Cognitive Criteria, which is based on the loss of almost all core mental properties, e.g., memory, self-consciousness, moral agency, and the capacity for reason. A middle ground is the Irreversibility Standard, which defines death as occurring when the capacity for consciousness is forever lost. Given all these different theories, how can we begin to approach solving the issue of how to define death? I propose that a necessary starting point is discussing an even more fundamental question that properly belongs in the philosophical field of metaphysics: we must first address the issue of diachronic identity over time, and the persistence conditions of personal identity. In this paper, I illustrate the interdependent relationship between this metaphysical question and questions concerning the definition of death. I also illustrate how it is necessary to antecedently attend to the metaphysical issue of defining death before addressing certain issues in medical ethics, e.g., whether it is morally permissible to euthanize patients in persistent vegetative states or procure organs from anencephalic infants.

  16. Analyzing survival curves at a fixed point in time for paired and clustered right-censored data

    Science.gov (United States)

    Su, Pei-Fang; Chi, Yunchan; Lee, Chun-Yi; Shyr, Yu; Liao, Yi-De

    2018-01-01

    In clinical trials, information about certain time points may be of interest in making decisions about treatment effectiveness. Rather than comparing entire survival curves, researchers can focus on the comparison at fixed time points that may have a clinical utility for patients. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on some transformations of the Kaplan-Meier estimators of the survival function. However, to compare the survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data, their approach would need to be modified. In this paper, we extend the statistics to accommodate the possible within-paired correlation and within-clustered correlation, respectively. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets. PMID:29456280
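The building block of such fixed-time-point comparisons is the Kaplan-Meier estimate and its Greenwood variance at t0, computed per sample before any transformation-based test is applied. A minimal sketch for one sample of right-censored data; the survival times and event indicators below are hypothetical:

```python
import numpy as np

def km_at(t0, times, events):
    """Kaplan-Meier survival estimate and Greenwood variance at time t0
    for one sample of right-censored data (events: 1 = death, 0 = censored)."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    s, var_sum = 1.0, 0.0
    for t in np.unique(times[events == 1]):   # distinct event times, ascending
        if t > t0:
            break
        n_risk = np.sum(times >= t)           # number still at risk just before t
        d = np.sum((times == t) & (events == 1))
        s *= 1.0 - d / n_risk
        var_sum += d / (n_risk * (n_risk - d))
    return s, s * s * var_sum                 # Greenwood's formula

# Hypothetical survival times (months) and event indicators.
times  = [2, 3, 3, 5, 8, 9, 12, 14]
events = [1, 1, 0, 1, 0, 1, 1, 0]
s6, v6 = km_at(6.0, times, events)            # S(6) and its variance
```

A two-sample test at t0 in the spirit of Klein et al. (2007) would then combine the per-sample estimates, typically after a log or log(-log) transformation to improve the normal approximation.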

  17. Modular Software-Defined Radio

    Directory of Open Access Journals (Sweden)

    Rhiemeier Arnd-Ragnar

    2005-01-01

    Full Text Available In view of the technical and commercial boundary conditions for software-defined radio (SDR), it is worthwhile to reconsider the concept anew from an unconventional point of view. The organizational principles of signal processing (rather than the signal processing algorithms themselves) are the main focus of this work on modular software-defined radio. Modularity and flexibility are just two key characteristics of the SDR environment which extend smoothly into the modeling of hardware and software. In particular, the proposed model of signal processing software includes irregular, connected, directed, acyclic graphs with random node weights and random edges. Several approaches for mapping such software to a given hardware are discussed. Taking into account previous findings as well as new results from system simulations presented here, the paper finally concludes with the utility of pipelining as a general design guideline for modular software-defined radio.

  18. Time scale defined by the fractal structure of the price fluctuations in foreign exchange markets

    Science.gov (United States)

    Kumagai, Yoshiaki

    2010-04-01

    In this contribution, a new time scale named C-fluctuation time is defined by price fluctuations observed at a given resolution. The intraday fractal structures and the relations of the three time scales, real time (physical time), tick time and C-fluctuation time, in foreign exchange markets are analyzed. The data set used is trading prices of foreign exchange rates: US dollar (USD)/Japanese yen (JPY), USD/Euro (EUR), and EUR/JPY. The accuracy of the data is one minute, and data within a minute are recorded in order of transaction. The series of instantaneous velocities of C-fluctuation time flow are exponentially distributed for small C when measured in real time, and for tiny C when measured in tick time. When the market is volatile, the series of instantaneous velocities are exponentially distributed for larger C as well.
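One plausible reading of the C-fluctuation time construction is a clock that advances only when the price has moved by at least C since the last tick. The sketch below follows that reading; the paper's exact definition may differ in detail, and the quotes are invented:

```python
def c_fluctuation_times(prices, C):
    """Indices at which the C-fluctuation clock advances: a tick is recorded
    each time the price has moved by at least C since the previous tick.

    This is a minimal reading of the abstract's construction, for illustration.
    """
    ticks = [0]
    ref = prices[0]
    for i, p in enumerate(prices[1:], start=1):
        if abs(p - ref) >= C:
            ticks.append(i)
            ref = p
    return ticks

# Hypothetical USD/JPY quotes: the clock ignores oscillations smaller than C.
quotes = [110.00, 110.02, 110.08, 110.05, 110.12, 110.01, 110.04]
ticks = c_fluctuation_times(quotes, C=0.05)
```

The inter-tick gaps of such a clock, measured in real time or in tick time, are the "instantaneous velocities" whose distributions the paper examines.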

  19. Defining time crystals via representation theory

    Science.gov (United States)

    Khemani, Vedika; von Keyserlingk, C. W.; Sondhi, S. L.

    2017-09-01

    Time crystals are proposed states of matter which spontaneously break time translation symmetry. There is no settled definition of such states. We offer a new definition which follows the traditional recipe for Wigner symmetries and order parameters. Supplementing our definition with a few plausible assumptions we find that a) systems with time-independent Hamiltonians should not exhibit time translation symmetry breaking while b) the recently studied π spin glass/Floquet time crystal can be viewed as breaking a global internal symmetry and as breaking time translation symmetry, as befits its two names.

  20. Accuracy Constraint Determination in Fixed-Point System Design

    Directory of Open Access Journals (Sweden)

    Serizel R

    2008-01-01

    Full Text Available Most digital signal processing applications are specified and designed with floating-point arithmetic but are finally implemented using fixed-point architectures. Thus, the design flow requires a floating-point to fixed-point conversion stage which optimizes the implementation cost under execution time and accuracy constraints. This accuracy constraint is linked to the application performance, and its determination is one of the key issues of the conversion process. In this paper, a method is proposed to determine the accuracy constraint from the application performance. The fixed-point system is modeled with an infinite precision version of the system and a single noise source located at the system output. Then, an iterative approach for optimizing the fixed-point specification under the application performance constraint is defined and detailed. Finally, the efficiency of our approach is demonstrated by experiments on an MP3 encoder.
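
    The accuracy metric at the heart of such a conversion stage can be illustrated with a minimal sketch that quantizes a signal to a fixed-point grid and measures the resulting signal-to-quantization-noise ratio. The Q-format rounding model and the SNR criterion are illustrative assumptions, not the paper's exact noise model:

```python
import math

def to_fixed(x, frac_bits):
    """Round a float to a fixed-point grid with frac_bits fractional
    bits, then convert back to float (models the conversion stage)."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def quantization_snr_db(signal, frac_bits):
    """Signal-to-quantization-noise ratio in dB: the accuracy figure
    the conversion must keep above the application's constraint."""
    noise = [s - to_fixed(s, frac_bits) for s in signal]
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(e * e for e in noise) / len(noise)
    return 10 * math.log10(p_sig / p_noise) if p_noise > 0 else float("inf")
```

    An iterative word-length search then simply widens `frac_bits` until the measured figure exceeds the accuracy constraint derived from the application performance.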

  1. Numerical instability of time-discretized one-point kinetic equations

    International Nuclear Information System (INIS)

    Hashimoto, Kengo; Ikeda, Hideaki; Takeda, Toshikazu

    2000-01-01

    The one-point kinetic equations with numerical errors induced by the explicit, implicit and Crank-Nicolson integration methods are derived. The zero-power transfer functions based on the present equations are demonstrated to investigate the numerical stability of the discretized systems. These demonstrations indicate unconditional stability for the implicit and Crank-Nicolson methods but present the possibility of numerical instability for the explicit method. An upper limit on the time mesh spacing for stability is formulated, and several numerical calculations are made to confirm the validity of this formula.
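
    The stability contrast reported above is easy to reproduce on a scalar stiff decay equation dy/dt = -λy, for which explicit Euler is stable only when Δt ≤ 2/λ while implicit Euler is unconditionally stable. This is a minimal scalar stand-in for the point kinetics system, not the authors' formulation:

```python
def explicit_euler(lam, dt, steps, y0=1.0):
    """Explicit Euler for dy/dt = -lam*y: amplification factor (1 - lam*dt)."""
    y = y0
    for _ in range(steps):
        y += dt * (-lam * y)
    return y

def implicit_euler(lam, dt, steps, y0=1.0):
    """Implicit Euler: solve y_new = y + dt*(-lam*y_new) each step,
    giving amplification 1/(1 + lam*dt), stable for any dt > 0."""
    y = y0
    for _ in range(steps):
        y = y / (1 + lam * dt)
    return y
```

    Explicit Euler remains stable only while |1 - λΔt| ≤ 1, i.e. Δt ≤ 2/λ, which mirrors the kind of upper limit on time mesh spacing formulated in the paper.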

  2. Generalized zero point anomaly

    International Nuclear Information System (INIS)

    Nogueira, Jose Alexandre; Maia Junior, Adolfo

    1994-01-01

    The Zero Point Anomaly (ZPA) is defined as the difference between the Effective Potential (EP) and the Zero Point Energy (ZPE). It is shown, for a massive and interacting scalar field, that under very general conditions the renormalized ZPA vanishes, and then the renormalized EP and ZPE coincide. (author). 3 refs

  3. Subclinical ketosis in post-partum dairy cows fed a predominantly pasture-based diet: defining cut-points for diagnosis using concentrations of beta-hydroxybutyrate in blood and determining prevalence.

    Science.gov (United States)

    Compton, C W R; Young, L; McDougall, S

    2015-09-01

    Firstly, to define, in dairy cows in the first 5 weeks post-calving fed a predominantly pasture-based diet, cut-points of concentrations of beta-hydroxybutyrate (BHBA) in blood, above which there were associations with purulent vaginal discharge (PVD), reduced pregnancy rates (PR) and decreased milk production, in order to better define subclinical ketosis (SCK) in such cattle; and secondly, to determine the prevalence, incidence and risk factors for SCK. An observational field study was conducted in 565 cows from 15 spring-calving and predominantly pasture-fed dairy herds in two regions of New Zealand during the 2010-2011 dairy season. Within each herd, a cohort of randomly selected cows (approximately 40 per herd) was blood sampled to determine concentrations of BHBA on six occasions at weekly intervals starting within 5 days of calving. The key outcome variables were the presence/absence of PVD at 5 weeks post-calving, PR after 6 weeks (6-week PR) and after the completion of the breeding season (final PR), and mean daily milk solids production. Two cut-points for defining SCK were identified: firstly, concentration of BHBA in blood ≥1.2 mmol/L within 5 days post-calving, which was associated with an increased diagnosis of PVD (24 vs. 8%); and secondly, concentration of BHBA in blood ≥1.2 mmol/L at any stage within 5 weeks post-calving, which was associated with decreased 6-week PR (78 vs. 85%). The mean herd-level incidence of SCK within 5 weeks post-calving was 68 (min 12; max 100)% and large variations existed between herds in peak prevalence of SCK and the interval post-calving at which such peaks occurred. Cows >8 years of age and cows losing body condition were at increased risk of SCK within 5 weeks of calving. Cows with concentration of BHBA in blood ≥1.2 mmol/L in early lactation had a higher risk of PVD and lower 6-week PR. Cow and herd-level prevalence of SCK varied widely in early lactation. Subclinical ketosis is common and is significantly

  4. Defining Time Crystals via Representation Theory

    OpenAIRE

    Khemani, Vedika; von Keyserlingk, C. W.; Sondhi, S. L.

    2016-01-01

    Time crystals are proposed states of matter which spontaneously break time translation symmetry. There is no settled definition of such states. We offer a new definition which follows the traditional recipe for Wigner symmetries and order parameters. Supplementing our definition with a few plausible assumptions we find that a) systems with time independent Hamiltonians should not exhibit TTSB while b) the recently studied $\\pi$ spin glass/Floquet time crystal can be viewed as breaking a globa...

  5. Dual time point FDG-PET/CT imaging: potential tool for diagnosis of breast cancer

    International Nuclear Information System (INIS)

    Zytoon, A.A.; Murakami, K.; El-Kholy, M.R.; El-Shorbagy, E.

    2008-01-01

    Aim: This prospective study was designed to assess the utility of the dual time point imaging technique using 2-[18F]-fluoro-2-deoxy-D-glucose (FDG) positron-emission tomography/computed tomography (PET/CT) to detect primary breast cancer and to determine whether it is useful for the detection of small and non-invasive cancers, as well as cancers in dense breast tissue. Methods: One hundred and eleven patients with newly diagnosed breast cancer underwent two sequential PET/CT examinations (dual time point imaging) for preoperative staging. The maximum standardized uptake value (SUVmax) of FDG was measured at both time points. The percentage change in SUVmax (ΔSUVmax%) between time points 1 (SUVmax1) and 2 (SUVmax2) was calculated. The patients were divided into groups: invasive (n = 82), non-invasive (n = 29); large (>10 mm; n = 80), small (≤10 mm; n = 31); tumours in dense breasts (n = 61), and tumours in non-dense breasts (n = 50). The tumour:background (T:B) ratios at both time points were measured and the ΔSUVmax% and ΔT:B% values were calculated. All PET study results were correlated with the histopathology results. Results: Of the 111 cancer lesions, 88 (79.3%) showed an increase and 23 (20.7%) showed either no change [10 (9%)] or a decrease [13 (11.7%)] in the SUVmax over time. Of the 111 contralateral normal breasts, nine (8.1%) showed an increase and 102 (91.9%) showed either no change [17 (15.3%)] or a decrease [85 (76.6%)] in the SUVmax over time. The mean ± SD of SUVmax1, SUVmax2, and ΔSUVmax% were 4.9 ± 3.6, 6.0 ± 4.5, and 22.6 ± 13.1% for invasive cancers; 4.1 ± 3.8, 4.4 ± 4.8, and -2.4 ± 18.5% for non-invasive cancers; 2.3 ± 1.9, 2.7 ± 2.3, and 12.9 ± 21.1% for small cancers; 5.6 ± 3.7, 6.8 ± 4.8, and 17.3 ± 17.1% for large cancers; 4.9 ± 3.7, 5.8 ± 4.8, and 15.1 ± 17.6% for cancers in dense breasts; and 4.5 ± 3.6, 5.4 ± 4.5, and 17.2 ± 19.2% for cancers in non-dense breasts. The receiver-operating characteristic (ROC) analysis
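
    The retention index used in dual time point studies is simply the percentage change of SUVmax between the two acquisitions; a minimal sketch:

```python
def retention_index(suv1, suv2):
    """Percentage change in SUVmax between time points 1 and 2:
    positive values indicate continued FDG accumulation over time."""
    return 100.0 * (suv2 - suv1) / suv1
```

    Note that the mean ΔSUVmax% values reported above are averages of per-lesion indices, which in general differ from the index computed on the group-mean SUVmax values.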

  6. Digital microwave communication engineering point-to-point microwave systems

    CERN Document Server

    Kizer, George

    2013-01-01

    The first book to cover all engineering aspects of microwave communication path design for the digital age Fixed point-to-point microwave systems provide moderate-capacity digital transmission between well-defined locations. Most popular in situations where fiber optics or satellite communication is impractical, it is commonly used for cellular or PCS site interconnectivity where digital connectivity is needed but not economically available from other sources, and in private networks where reliability is most important. Until now, no book has adequately treated all en

  7. Defining chaos.

    Science.gov (United States)

    Hunt, Brian R; Ott, Edward

    2015-09-01

    In this paper, we propose, discuss, and illustrate a computationally feasible definition of chaos which can be applied very generally to situations that are commonly encountered, including attractors, repellers, and non-periodically forced systems. This definition is based on an entropy-like quantity, which we call "expansion entropy," and we define chaos as occurring when this quantity is positive. We relate and compare expansion entropy to the well-known concept of topological entropy to which it is equivalent under appropriate conditions. We also present example illustrations, discuss computational implementations, and point out issues arising from attempts at giving definitions of chaos that are not entropy-based.
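
    For smooth one-dimensional maps, entropy-like chaos measures are closely related to the average logarithmic growth rate of derivatives along trajectories. The sketch below estimates that rate (the Lyapunov exponent) for the logistic map; it is a simple stand-in for intuition, not an implementation of expansion entropy as defined in the paper:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.123, n=200_000, burn=1_000):
    """Average log|f'(x)| along a trajectory of f(x) = r*x*(1-x);
    a positive value signals chaotic stretching."""
    x = x0
    for _ in range(burn):
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        d = abs(r - 2.0 * r * x)       # |f'(x)| = |r - 2*r*x|
        acc += math.log(d) if d > 0 else 0.0
        x = r * x * (1.0 - x)
        if not (0.0 < x < 1.0):        # guard: rounding can push x out of (0,1)
            x = 0.123456
    return acc / n
```

    For r = 4 the exact exponent is ln 2 ≈ 0.693, and the estimate converges to it; positivity of such growth rates is the intuition behind entropy-based chaos definitions like the one proposed in the paper.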

  8. A point implicit time integration technique for slow transient flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Kadioglu, Samet Y., E-mail: kadioglu@yildiz.edu.tr [Department of Mathematical Engineering, Yildiz Technical University, 34210 Davutpasa-Esenler, Istanbul (Turkey); Berry, Ray A., E-mail: ray.berry@inl.gov [Idaho National Laboratory, P.O. Box 1625, MS 3840, Idaho Falls, ID 83415 (United States); Martineau, Richard C. [Idaho National Laboratory, P.O. Box 1625, MS 3840, Idaho Falls, ID 83415 (United States)

    2015-05-15

    Highlights: • This new method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods. • It is unconditionally stable, as a fully implicit method would be. • It exhibits the simplicity of implementation of an explicit method. • It is specifically designed for slow transient flow problems of long duration such as can occur inside nuclear reactor coolant systems. • Our findings indicate the new method can integrate slow transient problems very efficiently; and its implementation is very robust. - Abstract: We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (that can be located at cell centers, cell edges, or cell nodes) implicitly and the rest of the information related to same or other variables are handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except it involves a few additional function(s) evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very
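
    The core idea, treating each variable's own stiff coupling at the new time level while evaluating everything else explicitly, can be sketched on a scalar model equation. This is an illustration of the general point-implicit idea under assumed simplifications, not the authors' scheme:

```python
def point_implicit_step(y, k, source, dt):
    """One update for dy/dt = -k*y + source: the self-coupling -k*y is
    taken at the new time level, the source term explicitly, so
        y_new = y + dt*(-k*y_new + source)
    solves in closed form with no iteration."""
    return (y + dt * source) / (1.0 + dt * k)

def integrate(y0, k, source_fn, dt, steps):
    """March the solution forward with large steps, as in slow transients."""
    y, t = y0, 0.0
    for _ in range(steps):
        y = point_implicit_step(y, k, source_fn(t), dt)
        t += dt
    return y
```

    For k > 0 the update is unconditionally stable, so time steps far larger than 1/k remain usable, which is exactly the slow-transient regime targeted above.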

  9. A point implicit time integration technique for slow transient flow problems

    International Nuclear Information System (INIS)

    Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.

    2015-01-01

    Highlights: • This new method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods. • It is unconditionally stable, as a fully implicit method would be. • It exhibits the simplicity of implementation of an explicit method. • It is specifically designed for slow transient flow problems of long duration such as can occur inside nuclear reactor coolant systems. • Our findings indicate the new method can integrate slow transient problems very efficiently; and its implementation is very robust. - Abstract: We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (that can be located at cell centers, cell edges, or cell nodes) implicitly and the rest of the information related to same or other variables are handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except it involves a few additional function(s) evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very

  10. New Bounds of Ostrowski–Grüss Type Inequality for (k + 1) Points on Time Scales

    Directory of Open Access Journals (Sweden)

    Eze R. Nwaeze

    2017-11-01

    Full Text Available The aim of this paper is to present three new bounds of the Ostrowski–Grüss type inequality for points $x_0,x_1,x_2,\cdots,x_k$ on time scales. Our results generalize a result of Ngô and Liu, and extend results of Ujević to time scales with $(k+1)$ points. We apply our results to the continuous, discrete, and quantum calculus to obtain many new interesting inequalities. An example is also considered. The estimates obtained in this paper will be very useful in numerical integration, especially for the continuous case.

  11. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Cheung, Yam [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas, 75390 and Department of Radiation Oncology, University of Maryland, College Park, Maryland 20742 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced

  12. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    Science.gov (United States)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions.
The authors have
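
    The SR model's core step, approximating a target cloud as a linear combination of training clouds given ICP-built correspondences, can be sketched with plain least squares. The sparsity prior and the MSR's Laplacian error model are omitted, and flattening each cloud into an aligned coordinate vector is an assumption of this sketch:

```python
def least_squares_combination(training, target):
    """Solve min_w ||target - sum_j w_j * training_j||^2 via the normal
    equations and Gaussian elimination. Training vectors are assumed
    linearly independent; the SR model would additionally penalize
    the number of nonzero weights."""
    m = len(training)
    # Gram matrix G and right-hand side b of the normal equations
    G = [[sum(a * b for a, b in zip(training[i], training[j])) for j in range(m)]
         for i in range(m)]
    b = [sum(a * t for a, t in zip(training[i], target)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = G[r][col] / G[col][col]
            for c in range(col, m):
                G[r][c] -= f * G[col][c]
            b[r] -= f * b[col]
    # Back substitution
    w = [0.0] * m
    for r in range(m - 1, -1, -1):
        w[r] = (b[r] - sum(G[r][c] * w[c] for c in range(r + 1, m))) / G[r][r]
    return w
```
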

  13. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    International Nuclear Information System (INIS)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced

  14. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    Science.gov (United States)

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult; second, most pipelines are implemented in special hardware, resulting in limited flexibility of the implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run time. The pipeline shows image quality comparable to a clinical system and, backed by point spread function measurements, a comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
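
    As a flavor of the earliest and latest pipeline stages such a software-defined pipeline covers, a stripped-down delay-and-sum beamformer followed by log compression might look like the sketch below (integer delays, no dynamic focusing or interpolation; this is an illustrative sketch, not SUPRA's implementation):

```python
import math

def delay_and_sum(channels, delays_samples, apod=None):
    """Minimal delay-and-sum beamformer: shift each channel by its
    (integer) focusing delay, apodize, and sum across channels."""
    n = len(channels[0])
    apod = apod or [1.0] * len(channels)
    out = [0.0] * n
    for ch, d, w in zip(channels, delays_samples, apod):
        for i in range(n):
            j = i - d
            if 0 <= j < n:
                out[i] += w * ch[j]
    return out

def log_compress(envelope, dynamic_range_db=60.0):
    """Map envelope amplitudes to [0, 1] on a dB scale for B-mode display."""
    peak = max(envelope) or 1.0
    out = []
    for e in envelope:
        db = 20 * math.log10(max(abs(e) / peak, 1e-12))
        out.append(max(0.0, 1.0 + db / dynamic_range_db))
    return out
```

    Real pipelines add dynamic receive focusing, sub-sample delay interpolation, and envelope detection between these two stages.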

  15. Goal-Role Integration as Driver for Customer Engagement Behaviours across Touch-points

    DEFF Research Database (Denmark)

    Haurum, Helle; Beckmann, Suzanne C.

    Customers and firms interact at many different touch-points that enable customer engagement behaviour. By adopting a customer-centric approach we investigated through 20 in-depth interviews what drives service customers’ CEB manifestations in touch-points, which the firm either manages, monitors, or manoeuvres between. The key findings are that (a) CEBs are driven by different forms of goal-role integration across touch-points, (b) customers’ goal-directedness determines the touch-points where CEBs are manifested, (c) customers’ role-playing behaviours determine the nature of CEBs, and (d) customers’ role-playing behaviours can change across touch-points, contingent upon goal-directedness. Hence, this study provides rich insights into customer-firm encounters at touch-points, which subsequently define and shape the relation over time.

  16. EBT time-dependent point model code: description and user's guide

    International Nuclear Information System (INIS)

    Roberts, J.F.; Uckan, N.A.

    1977-07-01

    A D-T time-dependent point model has been developed to assess the energy balance in an EBT reactor plasma. Flexibility is retained in the model to permit more recent data to be incorporated as they become available from the theoretical and experimental studies. This report includes the physics models involved, the program logic, and a description of the variables and routines used. All the files necessary for execution are listed, and the code, including a post-execution plotting routine, is discussed.

  17. Mathematical points as didactical ideas

    DEFF Research Database (Denmark)

    Mogensen, Arne

    It was recently recommended that mathematics teaching in Denmark be better organized in sequences with clear mathematical pedagogical goals and a focus on mathematical points. In this paper I define a mathematical point and report on the coding of transcripts in a video-based Danish research study on grade 8 te

  18. Measuring Business Cycle Time.

    OpenAIRE

    Stock, James H

    1987-01-01

    The business cycle analysis of Arthur F. Burns and Wesley C. Mitchell and the National Bureau of Economic Research presumed that aggregate economic variables evolve on a time scale defined by business cycle turning points rather than by months or quarters. Do macroeconomic variables appear to evolve on an economic rather than a calendar time scale? Evidence presented here suggests that they do. However, the estimated economic time scales are only weakly related to business cycle time scales, ...

  19. Point-prevalence survey of healthcare facility-onset healthcare-associated Clostridium difficile infection in Greek hospitals outside the intensive care unit: The C. DEFINE study.

    Science.gov (United States)

    Skoutelis, Athanasios; Pefanis, Angelos; Tsiodras, Sotirios; Sipsas, Nikolaos V; Lelekis, Moyssis; Lazanas, Marios C; Gargalianos, Panagiotis; Dalekos, George N; Roilides, Emmanuel; Samonis, George; Maltezos, Efstratios; Hatzigeorgiou, Dimitrios; Lada, Malvina; Metallidis, Symeon; Stoupis, Athena; Chrysos, Georgios; Karnesis, Lazaros; Symbardi, Styliani; Loupa, Chariclia V; Giamarellou, Helen; Kioumis, Ioannis; Sambatakou, Helen; Tsianos, Epameinondas; Kotsopoulou, Maria; Georgopali, Areti; Liakou, Klairi; Perlorentzou, Stavroula; Levidiotou, Stamatina; Giotsa-Toutouza, Marina; Tsorlini-Christoforidou, Helen; Karaiskos, Ilias; Kouppari, Georgia; Trikka-Graphakos, Eleftheria; Ntrivala, Maria-Anna; Themeli-Digalaki, Kate; Pangalis, Anastasia; Kachrimanidou, Melina; Martsoukou, Maria; Karapsias, Stergios; Panopoulou, Maria; Maraki, Sofia; Orfanou, Anagnostina; Petinaki, Efthymia; Orfanidou, Maria; Baka, Vasiliki; Stylianakis, Antonios; Spiliopoulou, Iris; Smilakou, Stavroula; Zerva, Loukia; Vogiatzakis, Evangelos; Belesiotou, Eleni; Gogos, Charalambos A

    2017-01-01

    The correlation of Clostridium difficile infection (CDI) with in-hospital morbidity is important in hospital settings where broad-spectrum antimicrobial agents are routinely used, such as in Greece. The C. DEFINE study aimed to assess point-prevalence of CDI in Greece during two study periods in 2013. There were two study periods consisting of a single day in March and another in October 2013. Stool samples from all patients hospitalized outside the ICU aged ≥18 years old with diarrhea on each day in 21 and 25 hospitals, respectively, were tested for CDI. Samples were tested for the presence of glutamate dehydrogenase antigen (GDH) and toxins A/B of C. difficile; samples positive for GDH and negative for toxins were further tested by culture and PCR for the presence of toxin genes. An analysis was performed to identify potential risk factors for CDI among patients with diarrhea. 5,536 and 6,523 patients were screened during the first and second study periods, respectively. The respective point-prevalence of CDI in all patients was 5.6 and 3.9 per 10,000 patient bed-days whereas the proportion of CDI among patients with diarrhea was 17% and 14.3%. Logistic regression analysis revealed that solid tumor malignancy [odds ratio (OR) 2.69, 95% confidence interval (CI): 1.18-6.15, p = 0.019] and antimicrobial administration (OR 3.61, 95% CI: 1.03-12.76, p = 0.045) were independent risk factors for CDI development. Charlson's Comorbidity Index (CCI) >6 was also found as a risk factor of marginal statistical significance (OR 2.24, 95% CI: 0.98-5.10). Median time to CDI from hospital admission was shorter with the presence of solid tumor malignancy (3 vs 5 days; p = 0.002) and of CCI >6 (4 vs 6 days, p = 0.009). The point-prevalence of CDI in Greek hospitals was consistent among cases of diarrhea over a 6-month period. Major risk factors were antimicrobial use, solid tumor malignancy and a CCI score >6.

  20. Machine scheduling to minimize weighted completion times the use of the α-point

    CERN Document Server

    Gusmeroli, Nicoló

    2018-01-01

    This work reviews the most important results regarding the use of the α-point in Scheduling Theory. It provides a number of different LP-relaxations for scheduling problems and seeks to explain their polyhedral consequences. It also explains the concept of the α-point and how the conversion algorithm works, pointing out the relations to the sum of the weighted completion times. Lastly, the book explores the latest techniques used for many scheduling problems with different constraints, such as release dates, precedences, and parallel machines. This reference book is intended for advanced undergraduate and postgraduate students who are interested in scheduling theory. It is also inspiring for researchers wanting to learn about sophisticated techniques and open problems of the field.

  1. Fixed Points on Abstract Structures without the Equality Test

    DEFF Research Database (Denmark)

    Korovina, Margarita

    2002-01-01

    The aim of this talk is to present a study of definability properties of fixed points of effective operators on abstract structures without the equality test. The question of definability of fixed points of -operators on abstract structures with equality was first studied by Gandy, Barwise, Mosch...

  2. Time discretization of the point kinetic equations using matrix exponential method and First-Order Hold

    International Nuclear Information System (INIS)

    Park, Yujin; Kazantzis, Nikolaos; Parlos, Alexander G.; Chong, Kil To

    2013-01-01

    Highlights: • Numerical solution for stiff differential equations using the matrix exponential method. • The approximation is based on a First-Order Hold assumption. • Various input examples applied to the point kinetics equations. • The method proves useful and effective. - Abstract: A system of nonlinear differential equations is derived to model the dynamics of neutron density and the delayed neutron precursors within a point kinetics equation modeling framework for a nuclear reactor. The point kinetic equations are mathematically characterized as stiff, occasionally nonlinear, ordinary differential equations, posing significant challenges when numerical solutions are sought and traditionally resulting in the need for smaller time step intervals within various computational schemes. In light of the above realization, the present paper proposes a new discretization method inspired by system-theoretic notions and technically based on a combination of the matrix exponential method (MEM) and the First-Order Hold (FOH) assumption. Under the proposed time discretization structure, the sampled-data representation of the nonlinear point kinetic system of equations is derived. The performance of the proposed time discretization procedure is evaluated using several case studies with sinusoidal reactivity profiles and multiple input examples (reactivity and neutron source function). It is shown that by applying the proposed method under a First-Order Hold for the neutron density and the precursor concentrations at each time step interval, the stiffness problem associated with the point kinetic equations can be adequately addressed and resolved. Finally, as evidenced by the aforementioned detailed simulation studies, the proposed method retains its validity and accuracy for a wide range of reactor operating conditions, including large sampling periods dictated by physical and/or technical limitations associated with the current state of sensor and
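
    The MEM+FOH idea is easiest to see in the scalar case: for dx/dt = a*x + b*u(t) with u varying linearly across the sampling interval (First-Order Hold), the sampled-data update is exact, with no step-size stability limit. The sketch below is a scalar illustration of that combination, not the reactor point kinetics model itself:

```python
import math

def foh_step(x, u_k, u_k1, a, b, T):
    """Exact one-step update for dx/dt = a*x + b*u(t) when u(t) varies
    linearly from u_k to u_k1 over the interval (First-Order Hold):
        x_{k+1} = e^{aT} x_k + b * (u_k*I1 + (u_k1 - u_k)*I2)
    For systems, e^{aT} and the integrals I1, I2 become their matrix
    exponential counterparts (the MEM part of the method)."""
    phi = math.exp(a * T)
    i1 = (phi - 1.0) / a                    # integral of e^{a(T-s)} ds
    i2 = (phi - 1.0 - a * T) / (a * a * T)  # integral of e^{a(T-s)}*(s/T) ds
    return phi * x + b * (u_k * i1 + (u_k1 - u_k) * i2)
```

    Because the update uses the exact flow of the linear part, stiffness does not force a small step size, which is the property the paper exploits for the point kinetic equations.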

  3. Optimizing the diagnostic power with gastric emptying scintigraphy at multiple time points

    Directory of Open Access Journals (Sweden)

    Gajewski Byron J

    2011-05-01

    Full Text Available Abstract Background Gastric Emptying Scintigraphy (GES) at intervals over 4 hours after a standardized radio-labeled meal is commonly regarded as the gold standard for diagnosing gastroparesis. The objectives of this study were: (1) to investigate the best time point and the best combination of multiple time points for diagnosing gastroparesis with repeated GES measures, and (2) to contrast and cross-validate Fisher's Linear Discriminant Analysis (LDA), a rank-based Distribution Free (DF) approach, and the Classification And Regression Tree (CART) model. Methods A total of 320 patients with GES measures at 1, 2, 3, and 4 hours (h) after a standard meal using a standardized method were retrospectively collected. Area under the Receiver Operating Characteristic (ROC) curve and the rate of false classification through jackknife cross-validation were used for model comparison. Results Due to strong correlation and an abnormality in data distribution, no substantial improvement in diagnostic power was found with the best linear combination by the LDA approach, even with data transformation. With the DF method, the linear combination of 4-h and 3-h measures increased the Area Under the Curve (AUC) and decreased the number of false classifications (0.87; 15.0%) over the individual time points (0.83, 0.82; 15.6%, 25.3% for 4-h and 3-h, respectively) at a higher sensitivity level (sensitivity = 0.9). The CART model using the 4 hourly GES measurements along with patient age was the most accurate diagnostic tool (AUC = 0.88, false classification = 13.8%). Patients having a 4-h gastric retention value >10% were 5 times more likely to have gastroparesis (179/207 = 86.5%) than those with ≤10% (18/113 = 15.9%). 
Conclusions With a mixed group of patients either referred with suspected gastroparesis or investigated for other reasons, the CART model is more robust than the LDA and DF approaches, capable of accommodating covariate effects and can be generalized for cross-institutional applications, but
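The comparison above rests on the rank-based AUC and on the simple 4-h retention cut-off; both are easy to sketch (the numbers below are synthetic and illustrative, not the study's data):

```python
def auc(pos, neg):
    # rank-based AUC: P(score_pos > score_neg), counting ties as 1/2
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def gastroparesis_flag(retention_4h_pct, cutoff=10.0):
    # the paper's simple screen: >10% gastric retention at 4 h flags gastroparesis
    return retention_4h_pct > cutoff
```

The rank-based AUC makes no distributional assumption, which matters here given the reported non-normality of the GES measures.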

  4. Defining Political Extremism in the Balkans. The Case of Serbia

    Directory of Open Access Journals (Sweden)

    Babić Marko

    2015-12-01

    Full Text Available Political extremism (and particularly right-wing political extremism) remains relatively underexplored because the phenomenon is controversial and hard to define. Its ambiguity and its variability across time and space further complicate its definition. Its structure is amorphous and eclectic, as it often includes elements from different ideologies and connects incompatible ideas. A multidimensional conceptualization and an interdisciplinary approach - sociological, social-psychological and historical - are the author's tools in explaining the phenomenon of political extremism in Serbia, hopefully contributing to its clarification and laying a foundation for further explanatory theoretical studies.

  5. Delta Semantics Defined By Petri Nets

    DEFF Research Database (Denmark)

    Jensen, Kurt; Kyng, Morten; Madsen, Ole Lehrmann

    and the possibility of using predicates to specify state changes. In this paper a formal semantics for Delta is defined and analysed using Petri nets. Petri nets were chosen because the ideas behind Petri nets and Delta coincide on several points. A number of proposals for changes in Delta, which resulted from...

  6. Model plant Key Measurement Points

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    For IAEA safeguards a Key Measurement Point is defined as the location where nuclear material appears in such a form that it may be measured to determine material flow or inventory. This presentation describes in an introductory manner the key measurement points and associated measurements for the model plant used in this training course

  7. GPU-accelerated Modeling and Element-free Reverse-time Migration with Gauss Points Partition

    Science.gov (United States)

    Zhen, Z.; Jia, X.

    2014-12-01

    Element-free method (EFM) has been applied to seismic modeling and migration. Compared with the finite element method (FEM) and the finite difference method (FDM), it is much cheaper and more flexible because only the information of the nodes and the boundary of the study area is required in computation. In the EFM, the number of Gauss points should be consistent with the number of model nodes; otherwise the accuracy of the intermediate coefficient matrices would be harmed. Thus when we increase the nodes of the velocity model in order to obtain higher resolution, the size of the computer's memory becomes a bottleneck: the original EFM can deal with at most 81×81 nodes in the case of 2 GB of memory, as tested by Jia and Hu (2006). In order to solve the problem of storage and computational efficiency, we propose a concept of Gauss points partition (GPP) and utilize GPUs to improve the computational efficiency. Considering the characteristics of the Gauss points, the GPP method does not influence the propagation of the seismic wave in the velocity model. To overcome the time-consuming computation of the stiffness matrix (K) and the mass matrix (M), we also use GPUs in our computation program. We employ the compressed sparse row (CSR) format to compress the intermediate sparse matrices and simplify the operations by solving the linear equations with the CULA Sparse Conjugate Gradient (CG) solver instead of the linear sparse solver PARDISO. It is observed that our strategy can significantly reduce the computational time of K and M compared with the CPU-based algorithm. The model tested is the Marmousi model. The length of the model is 7425 m and the depth is 2990 m. We discretize the model with 595×298 nodes, 300×300 Gauss cells and 3×3 Gauss points in each cell. In contrast to the computational time of the conventional EFM, the GPU-GPP approach substantially improves the efficiency. The speedup ratio for computing K and M is 120 and the
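The CSR storage and CG solve mentioned above can be sketched in a few lines of pure Python (a toy stand-in for the CULA Sparse CG solver, shown on a small symmetric positive definite system):

```python
def csr_matvec(vals, cols, rowptr, x):
    # y = A @ x with A stored in compressed sparse row form
    return [sum(vals[k] * x[cols[k]] for k in range(rowptr[i], rowptr[i + 1]))
            for i in range(len(rowptr) - 1)]

def cg(vals, cols, rowptr, b, tol=1e-12, maxiter=1000):
    # conjugate gradient for a symmetric positive definite CSR matrix
    x = [0.0] * len(b)
    r = b[:]          # residual b - A x (x starts at zero)
    p = r[:]          # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        Ap = csr_matvec(vals, cols, rowptr, p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

CSR stores only the nonzeros plus row offsets, which is what makes the large K and M matrices fit in GPU memory; CG then needs nothing beyond repeated sparse matrix-vector products.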

  8. Role of Erosion in Shaping Point Bars

    Science.gov (United States)

    Moody, J.; Meade, R.

    2012-04-01

    A powerful metaphor in fluvial geomorphology has been that depositional features such as point bars (and other floodplain features) constitute the river's historical memory in the form of uniformly thick sedimentary deposits waiting for the geomorphologist to dissect and interpret the past. For the past three decades, along the channel of Powder River (Montana, USA) we have documented (with annual cross-sectional surveys and pit trenches) the evolution of the shape of three point bars that were created when an extreme flood in 1978 cut new channels across the necks of two former meander bends and radically shifted the location of a third bend. Subsequent erosion has substantially reshaped, at different time scales, these relic sediment deposits of varying age. At the weekly to monthly time scale (i.e., floods from snowmelt or from convective or cyclonic storms), the maximum scour depth was computed (using a numerical model) at locations spaced 1 m apart across the entire point bar for a couple of the largest floods. The maximum predicted scour is about 0.22 m. At the annual time scale, repeated cross-section topographic surveys (25 during 32 years) indicate that net annual erosion at a single location can be as great as 0.5 m, and that net erosion exceeded net deposition during 8, 16, and 32% of the years for the three point bars. On average, the median annual net erosion was 21, 36, and 51% of the net deposition. At the decadal time scale, an index of point bar preservation, often referred to as completeness, was defined for each cross section as the percentage of the initial deposit (older than 10 years) still remaining in 2011; computations indicate that 19, 41, and 36% of the initial deposits of sediment were eroded. 
Initial deposits were not uniform in thickness and often represented thicker pods of sediment connected by thin layers of sediment, or even isolated pods at different elevations across the point bar, in response to multiple

  9. Defining geographic coal markets using price data and shipments data

    International Nuclear Information System (INIS)

    Waarell, Linda

    2005-01-01

    Given the importance of coal in world energy supply, an analysis of the relevant geographic market is essential for consumers, producers, and competition policy. The purpose of this paper is to define the relevant economic market for steam and coking coal, and to test the hypothesis of single world markets for these coal products. Methodologically the paper relies on two different tests for defining markets, using both shipments data and price data. The results from both methods point in the same direction. In the case of coking coal, the results indicate that the market is essentially global in scope and has become more integrated over time. The results for steam coal show that the market is more regional in scope, with no clear tendency toward increased integration over time. One policy implication of the finding that the steam coal market is more regional in scope, and thus that the market boundary is smaller than it would be if the market were international, is that a merger or acquisition in this market would likely be of more concern to antitrust authorities than the same activity in the coking coal market
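The shipments-based test is not named in the abstract; assuming it is in the spirit of the Elzinga-Hogarty LIFO/LOFI ratios commonly used for geographic market definition (an assumption, not a claim about this paper), a sketch looks like:

```python
def elzinga_hogarty(production, exports, imports):
    # LOFI "little out from inside": share of local production consumed locally
    # LIFO "little in from outside": share of local consumption produced locally
    consumption = production - exports + imports
    lofi = 1.0 - exports / production
    lifo = 1.0 - imports / consumption
    return lofi, lifo

def is_self_contained(lofi, lifo, threshold=0.9):
    # both ratios above the threshold suggest the region is a market on its own
    return lofi >= threshold and lifo >= threshold
```

A region with heavy coal exports or imports fails the test, pushing the market boundary outward, which is the intuition behind the global coking coal versus regional steam coal finding.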

  10. DEFINING SPATIAL VIOLENCE. BUCHAREST AS A STUDY CASE

    Directory of Open Access Journals (Sweden)

    Celia GHYKA

    2015-05-01

    Full Text Available The paper looks at the spatial manifestations of violence, aiming to define the category of spatial violence by focusing on the recent urban history of Bucharest; it establishes links with the longer history of natural and inflicted disasters that defined the city, and it explores the spatial, urban, social and symbolic conflicts that occurred during the last 25 years, pointing to their consequences for the social and urban substance of the city.

  11. Farey Statistics in Time n^{2/3} and Counting Primitive Lattice Points in Polygons

    OpenAIRE

    Patrascu, Mihai

    2007-01-01

    We present algorithms for computing ranks and order statistics in the Farey sequence, taking time O(n^{2/3}). This improves on the recent algorithms of Pawlewicz [European Symp. Alg. 2007], which run in time O(n^{3/4}). We also initiate the study of a more general algorithmic problem: counting primitive lattice points in planar shapes.
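For context, the classic neighbor recurrence enumerates the whole Farey sequence F_n in time linear in its length (about 3n²/π² terms); the paper's contribution is computing ranks and order statistics in O(n^{2/3}) without any such full enumeration. A sketch of the baseline enumeration:

```python
def farey(n):
    # enumerate F_n in increasing order via the standard neighbor recurrence:
    # given consecutive a/b < c/d, the next term is (k*c - a)/(k*d - b)
    # with k = floor((n + b) / d)
    a, b, c, d = 0, 1, 1, n
    seq = [(a, b)]
    while c <= n:
        k = (n + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
        seq.append((a, b))
    return seq
```

Even finding the single median of F_n this way costs Θ(n²) time and memory, which is why sublinear rank algorithms are the interesting problem.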

  12. Point interactions of the dipole type defined through a three-parametric power regularization

    International Nuclear Information System (INIS)

    Zolotaryuk, A V

    2010-01-01

    A family of point interactions of the dipole type is studied in one dimension using a regularization by rectangles in the form of a barrier and a well separated by a finite distance. The rectangles and the distance are parametrized by a squeezing parameter ε → 0 with three powers μ, ν and τ describing the squeezing rates for the barrier, the well and the distance, respectively. This parametrization allows us to construct a whole family of point potentials of the dipole type, including some other point interactions such as δ-potentials. Varying the power τ, it is possible to obtain in the zero-range limit the following two cases: (i) the limiting δ'-potential is opaque (the conventional result obtained earlier by some authors) or (ii) this potential admits a resonant tunneling (the opposite result obtained recently by other authors). The structure of resonances (if any) also depends on the regularizing sequence. The sets of the {μ, ν, τ}-space where a non-zero (resonant or non-resonant) transmission occurs are found. For all these cases the transfer matrix in the zero-range limit is shown to involve real parameters χ and g that depend on the regularizing sequence. The cases with χ ≠ 1 and g ≠ 0 mean that the corresponding δ'-potential is accompanied by an effective δ-potential.

  13. Coherent electron focusing with quantum point contacts in a two-dimensional electron gas

    NARCIS (Netherlands)

    Houten, H. van; Beenakker, C.W.J.; Williamson, J.G.; Broekaart, M.E.I.; Loosdrecht, P.H.M. van; Wees, B.J. van; Mooij, J.E.; Foxon, C.T.; Harris, J.J.

    1989-01-01

    Transverse electron focusing in a two-dimensional electron gas is investigated experimentally and theoretically for the first time. A split Schottky gate on top of a GaAs-AlxGa1–xAs heterostructure defines two point contacts of variable width, which are used as injector and collector of ballistic

  14. Change-point analysis of geophysical time-series: application to landslide displacement rate (Séchilienne rock avalanche, France)

    Science.gov (United States)

    Amorese, D.; Grasso, J.-R.; Garambois, S.; Font, M.

    2018-05-01

    The rank-sum multiple change-point method is a robust statistical procedure designed to search for the optimal number and locations of change points in an arbitrary continuous or discrete sequence of values. As such, this procedure can be used to analyse time-series data. Twelve years of robust data sets for the Séchilienne (French Alps) rockslide show a continuous increase in average displacement rate from 50 to 280 mm per month over the 2004-2014 period, followed by a strong decrease back to 50 mm per month in the 2014-2015 period. Where previous studies tentatively suggest possible kinematic phases, they rely solely on empirical threshold values. In this paper, we analyse how the use of a statistical algorithm for change-point detection helps to better understand time phases in landslide kinematics. First, we test the efficiency of the statistical algorithm on geophysical benchmark data, these data sets (stream flows and Northern Hemisphere temperatures) having already been analysed by independent statistical tools. Second, we apply the method to 12-yr daily time-series of the Séchilienne landslide, for rainfall and displacement data, from 2003 December to 2015 December, in order to quantitatively extract changes in landslide kinematics. We find two strong significant discontinuities in the weekly cumulated rainfall values: an average rainfall rate increase is resolved in 2012 April and a decrease in 2014 August. Four robust changes are highlighted in the displacement time-series (2008 May, 2009 November-December-2010 January, 2012 September and 2014 March), the 2010 one being preceded by a significant but weak rainfall rate increase (in 2009 November). Accordingly, we are able to quantitatively define five kinematic stages for the Séchilienne rock avalanche during this period. The synchronization between the rainfall and displacement rate, only resolved at the end of 2009 and beginning of 2010, corresponds to a remarkable change (fourfold
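A minimal single change-point version of such a rank-based detector (a Pettitt-style statistic; the paper's multiple change-point procedure is more elaborate) can be sketched as:

```python
def sign(x):
    return (x > 0) - (x < 0)

def rank_change_point(series):
    # U_t = sum over i < t <= j of sign(x_j - x_i);
    # the candidate change point is the split maximizing |U_t|
    n = len(series)
    best_t, best_u = 1, 0
    for t in range(1, n):
        u = sum(sign(series[j] - series[i])
                for i in range(t) for j in range(t, n))
        if abs(u) > abs(best_u):
            best_t, best_u = t, u
    return best_t, best_u
```

Because only signs of pairwise differences enter the statistic, the split location is insensitive to outliers and to any monotone rescaling of the data, which is what "robust" means here.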

  15. Clearly Defining Pediatric Massive Transfusion: Cutting Through the Fog and Friction with Combat Data

    Science.gov (United States)

    2015-01-01

    classically been defined as the administration of a large volume of whole blood (WB) or packed red blood cells (PRBCs) over a given time period (e.g...the first 24 hours after injury including WB, PRBCs, fresh frozen plasma (FFP), platelets (Plt) or cryoprecipitate (Cryo). The primary end points were...cluded the demographics of age, weight, sex, and injury mechanism. Measures of injury severity including Glasgow Coma Scale (GCS), AIS for each body

  16. A test of alternative estimators for volume at time 1 from remeasured point samples

    Science.gov (United States)

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1993-01-01

    Two estimators for volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293). takes advantage of additional sample...

  17. Global Stability of Polytopic Linear Time-Varying Dynamic Systems under Time-Varying Point Delays and Impulsive Controls

    Directory of Open Access Journals (Sweden)

    M. de la Sen

    2010-01-01

    Full Text Available This paper investigates the stability properties of a class of dynamic linear systems possessing several linear time-invariant parameterizations (or configurations) which conform a linear time-varying polytopic dynamic system with a finite number of time-varying, time-differentiable point delays. The parameterizations may be time-varying and with bounded discontinuities and they can be subject to mixed regular plus impulsive controls within a sequence of time instants of zero measure. The polytopic parameterization for the dynamics associated with each delay is specific, so that (q+1) polytopic parameterizations are considered for a system with q delays being also subject to delay-free dynamics. The considered general dynamic system includes, as particular cases, a wide class of switched linear systems whose individual parameterizations are time-invariant and which are governed by a switching rule. However, the dynamic system under consideration is viewed as much more general since it is time-varying with time-varying delays and the bounded discontinuous changes of active parameterizations are generated by impulsive controls in the dynamics and, at the same time, there is not a prescribed set of candidate potential parameterizations.

  18. Clinical Significance of F 18 FP CIT Dual Time Point PET Imaging in Idiopathic Parkinson's Disease

    International Nuclear Information System (INIS)

    Oh, Jin Kyoung; Yoo, Ik Dong; Seo, Ye Young; Chung, Youg An; Yoo, Ie Ryung; Kim, Sung Hoon; Song, In Uk

    2011-01-01

    The purpose of this study was to investigate the diagnostic value of dual time point F 18 FP CIT PET imaging in idiopathic Parkinson's disease (PD). Twenty four patients with PD (mean age 69.6) and 18 healthy controls (mean age 70.26) underwent two sequential PET/CT scans (dual time point imaging) at 90 and 210 min after F 18 FP CIT injection. Tracer activity was measured in regions of interest over the caudate, the putamen and a reference region in the brain at both time points. The outcome parameter was the striatooccipital ratio (SOR). Normal SOR values were obtained in the control group. The percent change in tracer activity between the 90 and 210 min images was calculated. The SOR values and the percent change in tracer activity were compared between the patients and the healthy control group. The SOR values for the caudate, anterior and posterior putamen at both the 90 and 210 min images were significantly reduced in the patients with PD. The lowest P value was obtained for the anterior and posterior putamen (p<0.001) at both time points. There were significant differences in the percent change in tracer activity for the anterior and posterior putamen between the two groups (p=0.01). F 18 FP CIT PET scans at both 90 and 210 min after injection are able to diagnose PD. Therefore, the 90 min image by itself is sufficient for diagnosing PD.
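Assuming the usual reference-region form of the ratio (the abstract does not give the exact formulas, so both definitions below are assumptions), the SOR and the percent change between the two time points are simple ratios:

```python
def striato_occipital_ratio(striatal_counts, occipital_counts):
    # assumed definition: specific binding relative to the occipital reference
    return (striatal_counts - occipital_counts) / occipital_counts

def percent_change(sor_early, sor_delayed):
    # assumed definition: relative change from the 90 min to the 210 min image
    return 100.0 * (sor_delayed - sor_early) / sor_early
```

With these definitions, reference-region activity cancels scanner-dependent scaling, which is what allows SOR values to be compared across subjects and time points.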

  19. A self-defining hierarchical data system

    Science.gov (United States)

    Bailey, J.

    1992-01-01

    The Self-Defining Data System (SDS) is a system which allows the creation of self-defining hierarchical data structures in a form which allows the data to be moved between different machine architectures. Because the structures are self-defining they can be used for communication between independent modules in a distributed system. Unlike disk-based hierarchical data systems such as Starlink's HDS, SDS works entirely in memory and is very fast. Data structures are created and manipulated as internal dynamic structures in memory managed by SDS itself. A structure may then be exported into a caller supplied memory buffer in a defined external format. This structure can be written as a file or sent as a message to another machine. It remains static in structure until it is reimported into SDS. SDS is written in portable C and has been run on a number of different machine architectures. Structures are portable between machines with SDS looking after conversion of byte order, floating point format, and alignment. A Fortran callable version is also available for some machines.

  20. Residual analysis for spatial point processes

    DEFF Research Database (Denmark)

    Baddeley, A.; Turner, R.; Møller, Jesper

    We define residuals for point process models fitted to spatial point pattern data, and propose diagnostic plots based on these residuals. The techniques apply to any Gibbs point process model, which may exhibit spatial heterogeneity, interpoint interaction and dependence on spatial covariates. Ou...... or covariate effects. Q-Q plots of the residuals are effective in diagnosing interpoint interaction. Some existing ad hoc statistics of point patterns (quadrat counts, scan statistic, kernel smoothed intensity, Berman's diagnostic) are recovered as special cases....

  1. Coding ill-defined and unknown cause of death is 13 times more frequent in Denmark than in Finland

    DEFF Research Database (Denmark)

    Ylijoki-Sørensen, Seija; Sajantila, Antti; Lalu, Kaisa

    2014-01-01

    Exact cause and manner of death determination improves legislative safety for the individual and for society and guides aspects of national public health. In the International Classification of Diseases, codes R00-R99 are used for "symptoms, signs and abnormal clinical and laboratory findings......, not elsewhere classified" designated as "ill-defined" or "with unknown etiology". The World Health Organisation recommends avoiding the use of ill-defined and unknown causes of death in the death certificate as this terminology does not give any information concerning the possible conditions that led...... autopsy. Our study suggests that if all deaths in all age groups with unclear cause of death were systematically investigated with a forensic autopsy, only 2-3/1000 deaths per year would be coded as an ill-defined and unknown cause of death in national mortality statistics. At the same time the risk...

  2. A point-of-care chemistry test for reduction of turnaround and clinical decision time.

    Science.gov (United States)

    Lee, Eui Jung; Shin, Sang Do; Song, Kyoung Jun; Kim, Seong Chun; Cho, Jin Seong; Lee, Seung Chul; Park, Ju Ok; Cha, Won Chul

    2011-06-01

    Our study compared clinical decision time between patients managed with a point-of-care chemistry test (POCT) and patients managed with the traditional central laboratory test (CLT). This was a randomized controlled multicenter trial in the emergency departments (EDs) of 5 academic teaching hospitals. We randomly assigned patients to POCT or CLT, stratified by the Emergency Severity Index. A POCT chemistry analyzer (Piccolo; Abaxis, Inc, Union City, Calif), able to test liver panel, renal panel, pancreatic enzymes, lipid panel, electrolytes, and blood gases, was set up in each ED. The primary and secondary end points were turnaround time and door-to-clinical-decision time. A total of 2323 patients were randomly assigned to the POCT group (n = 1167) or to the CLT group (n = 1156). All of the basic characteristics were similar in the 2 groups. The turnaround time (median, interquartile range [IQR]) of the POCT group was significantly shorter than that of the CLT group (14, 12-19 versus 55, 45-69 minutes), as was the door-to-clinical-decision time (46, 33-61 versus 86, 68-107 minutes), both favoring POCT over CLT. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Calcareous Fens - Source Feature Points

    Data.gov (United States)

    Minnesota Department of Natural Resources — Pursuant to the provisions of Minnesota Statutes, section 103G.223, this database contains points that represent calcareous fens as defined in Minnesota Rules, part...

  4. Hybrid appendectomy with classic trocar on McBurney's point.

    Science.gov (United States)

    Gunes, Mehmet Emin; Ersoz, Feyzullah; Duzkoylu, Yigit; Arikan, Soykan; Cakir, Coskun; Nayci, Ali Emre

    2018-03-01

    Appendectomy is still the most commonly performed intra-abdominal operation worldwide. Interestingly, it has not reached the same popularity as other laparoscopic surgical procedures, and although multiple techniques have been reported, a standard approach for the laparoscopic technique has not yet been described. Aim: to perform hybrid appendectomy for acute appendicitis through McBurney's point, achieving an easier and quicker procedure that limits trauma to the abdominal wall by combining the advantages of the laparoscopic and open techniques. We retrospectively evaluated the results of 24 patients on whom we had performed hybrid appendectomy with an optical trocar on McBurney's point for acute appendicitis over 1 year, in terms of demographics, operative time, complications, hospital stay and cosmetic results. Twenty-one of the patients underwent hybrid appendectomy with a single optical trocar on McBurney's point. The mean operative time was 21.4 ±6.2 min. We did not encounter any postoperative complications. The median hospital stay was 1.2 ±1.0 days. The postoperative scar was minimal. This technique is described in the literature for the first time, and it is easy and feasible for surgeons. It may reduce operative time and costs compared with the conventional laparoscopic technique, but prospective studies with more patients are needed for more definitive results.

  5. Heterogeneous dynamics of ionic liquids: A four-point time correlation function approach

    Science.gov (United States)

    Liu, Jiannan; Willcox, Jon A. L.; Kim, Hyung J.

    2018-05-01

    Many ionic liquids show behavior similar to that of glassy systems, e.g., large and long-lasting deviations from Gaussian dynamics and clustering of "mobile" and "immobile" groups of ions. Herein a time-dependent four-point density correlation function, typically used to characterize glassy systems, is implemented for the ionic liquids choline acetate and 1-butyl-3-methylimidazolium acetate. Dynamic correlation beyond the first ionic solvation shell on the time scale of nanoseconds is found in the ionic liquids, revealing the cooperative nature of ion motions. The traditional solvent acetonitrile, on the other hand, shows correlation on a much shorter length scale, which decays after a few picoseconds.
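A four-point dynamic susceptibility of the kind used here is commonly written as follows (a standard form from the glass literature; the paper's exact overlap function and normalization are assumptions):

```latex
Q(t) = \frac{1}{N}\sum_{i=1}^{N} w\!\left(\lvert \mathbf{r}_i(t) - \mathbf{r}_i(0) \rvert\right),
\qquad
\chi_4(t) = N\left[\langle Q(t)^2\rangle - \langle Q(t)\rangle^2\right],
```

where $w(r) = \Theta(a - r)$ is an overlap window of width $a$. Unlike two-point functions, $\chi_4(t)$ measures fluctuations of the mobility field: it peaks at the time of maximal dynamic heterogeneity, and its magnitude tracks the size of the correlated mobile or immobile domains.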

  6. Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models

    Science.gov (United States)

    Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin

    2017-01-01

    In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid in developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of ANN model in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384
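The idea of separate single time-point models, with censoring handled by excluding patients whose status at the horizon is unknown, can be sketched with a toy logistic unit standing in for the ANN (hypothetical data and a single predictor; the study's networks and covariates are richer):

```python
import math

def single_time_point_dataset(patients, t):
    # patients: (risk_score, follow_up_time, event_flag).
    # Label 1 if death occurred by time t, 0 if followed beyond t;
    # patients censored before t are excluded (their status at t is unknown).
    xs, ys = [], []
    for x, time, event in patients:
        if event and time <= t:
            xs.append(x); ys.append(1)
        elif time > t:
            xs.append(x); ys.append(0)
    return xs, ys

def train_logistic(xs, ys, lr=0.1, epochs=2000):
    # plain gradient descent on a one-feature logistic model (toy ANN stand-in)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b
```

Fitting one such model per horizon (1 through 5 years) reproduces the paper's scheme of five separate single time-point predictors rather than one survival model.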

  7. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    Science.gov (United States)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    In view of the fact that current point cloud registration software has high hardware requirements and a heavy workload with multiple interactive definitions, and that the source code of software with better processing results is not open, a two-step registration method based on normal-vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. This method combines the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and a calculation model for the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to complete coarse registration; the coarse registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious advantages in time and precision for large point clouds.
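A toy version of the ICP refinement stage (2-D, pure Python, nearest-neighbour matching plus closed-form rigid alignment; the paper's pipeline adds the FPFH-based coarse registration before this step) can be sketched as:

```python
import math

def icp_2d(source, target, iters=20):
    # minimal 2-D ICP: match each source point to its nearest target point,
    # then solve the best rigid transform in closed form (Kabsch in 2-D)
    src = [list(p) for p in source]
    for _ in range(iters):
        pairs = [(p, min(target, key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2))
                 for p in src]
        # centroids of the matched sets
        csx = sum(p[0] for p, _ in pairs) / len(pairs)
        csy = sum(p[1] for p, _ in pairs) / len(pairs)
        ctx = sum(q[0] for _, q in pairs) / len(pairs)
        cty = sum(q[1] for _, q in pairs) / len(pairs)
        # optimal rotation angle from the cross-covariance terms
        sxx = sum((p[0]-csx)*(q[0]-ctx) + (p[1]-csy)*(q[1]-cty) for p, q in pairs)
        sxy = sum((p[0]-csx)*(q[1]-cty) - (p[1]-csy)*(q[0]-ctx) for p, q in pairs)
        theta = math.atan2(sxy, sxx)
        c, s = math.cos(theta), math.sin(theta)
        # apply rotation about the source centroid, then translate to the target centroid
        src = [[c*(x-csx) - s*(y-csy) + ctx, s*(x-csx) + c*(y-csy) + cty]
               for x, y in src]
    return src
```

Because nearest-neighbour matching is only locally correct, ICP converges to the right transform only from a good initial pose, which is exactly why the paper front-loads an FPFH-based coarse alignment.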

  8. Guidelines for the definition of time-to-event end points in renal cell cancer clinical trials: results of the DATECAN project†.

    Science.gov (United States)

    Kramar, A; Negrier, S; Sylvester, R; Joniau, S; Mulders, P; Powles, T; Bex, A; Bonnetain, F; Bossi, A; Bracarda, S; Bukowski, R; Catto, J; Choueiri, T K; Crabb, S; Eisen, T; El Demery, M; Fitzpatrick, J; Flamand, V; Goebell, P J; Gravis, G; Houédé, N; Jacqmin, D; Kaplan, R; Malavaud, B; Massard, C; Melichar, B; Mourey, L; Nathan, P; Pasquier, D; Porta, C; Pouessel, D; Quinn, D; Ravaud, A; Rolland, F; Schmidinger, M; Tombal, B; Tosi, D; Vauleon, E; Volpe, A; Wolter, P; Escudier, B; Filleron, T

    2015-12-01

    In clinical trials, the use of intermediate time-to-event end points (TEEs) is increasingly common, yet their choice and definitions are not standardized. This limits the usefulness for comparing treatment effects between studies. The aim of the DATECAN Kidney project is to clarify and recommend definitions of TEE in renal cell cancer (RCC) through a formal consensus method for end point definitions. A formal modified Delphi method was used for establishing consensus. From a 2006-2009 literature review, the Steering Committee (SC) selected 9 TEE and 15 events in the nonmetastatic (NM) and metastatic/advanced (MA) RCC disease settings. Events were scored on the range of 1 (totally disagree to include) to 9 (totally agree to include) in the definition of each end point. Rating Committee (RC) experts were contacted for the scoring rounds. From these results, final recommendations were established for selecting pertinent end points and the associated events. Thirty-four experts scored 121 events for 9 end points. Consensus was reached for 31%, 43% and 85% events during the first, second and third rounds, respectively. The expert recommend the use of three and two endpoints in NM and MA setting, respectively. In the NM setting: disease-free survival (contralateral RCC, appearance of metastases, local or regional recurrence, death from RCC or protocol treatment), metastasis-free survival (appearance of metastases, regional recurrence, death from RCC); and local-regional-free survival (local or regional recurrence, death from RCC). In the MA setting: kidney cancer-specific survival (death from RCC or protocol treatment) and progression-free survival (death from RCC, local, regional, or metastatic progression). The consensus method revealed that intermediate end points have not been well defined, because all of the selected end points had at least one event definition for which no consensus was obtained. 
These clarified definitions of TEE should become standard practice in
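
The Delphi scoring procedure above can be made concrete. Below is a minimal sketch of one possible consensus rule for 1-9 scores; the median-in-7-9 criterion and the 30% disagreement cap are assumptions for illustration, not the DATECAN project's actual rule, which the abstract does not specify.

```python
from statistics import median

def consensus_reached(scores, agree_range=(7, 9), max_disagreement=0.3):
    """Hypothetical consensus rule for 1-9 Delphi scores: the median must fall
    in the 'agree' range, and at most a fraction max_disagreement of experts
    may score in the opposing 1-3 range."""
    med = median(scores)
    disagree = sum(1 for s in scores if s <= 3) / len(scores)
    return agree_range[0] <= med <= agree_range[1] and disagree <= max_disagreement

# An event most experts agree to include, with one dissenter out of seven
included = consensus_reached([8, 9, 7, 8, 2, 9, 8])
```

Under a rule of this shape, an event whose scores cluster in the middle of the scale would fail to reach consensus and be carried to the next scoring round.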

  9. Polar coordinated fuzzy controller based real-time maximum-power point control of photovoltaic system

    Energy Technology Data Exchange (ETDEWEB)

    Syafaruddin; Hiyama, Takashi [Department of Computer Science and Electrical Engineering of Kumamoto University, 2-39-1 Kurokami, Kumamoto 860-8555 (Japan); Karatepe, Engin [Department of Electrical and Electronics Engineering of Ege University, 35100 Bornova-Izmir (Turkey)

    2009-12-15

    It is crucial to improve photovoltaic (PV) system efficiency and the reliability of PV generation control systems. There are two ways to increase the efficiency of a PV power generation system. The first is to develop materials offering high conversion efficiency at low cost. The second is to operate PV systems optimally. However, a PV system can be operated optimally only at a specific output voltage, and its output power fluctuates under intermittent weather conditions. Moreover, it is very difficult to test the performance of a maximum-power point tracking (MPPT) controller under identical weather conditions during the development process, and field testing is costly and time consuming. This paper presents a novel real-time simulation technique for PV generation systems using a dSPACE real-time interface system. The proposed system combines an Artificial Neural Network (ANN) with a fuzzy logic controller scheme using polar information. This type of fuzzy logic rule is implemented for the first time to operate the PV module at its optimum operating point. The ANN is utilized to determine the optimum operating voltage for monocrystalline silicon, thin-film cadmium telluride and triple-junction amorphous silicon solar cells. Verification of the availability and stability of the proposed system through the real-time simulator shows that it responds accurately for different scenarios and different solar cell technologies. (author)
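
For context, the conventional baseline that fuzzy/ANN MPPT schemes like the one above compete with is a simple perturb-and-observe (P&O) hill-climb. The sketch below is illustrative only, not the authors' controller, and the single-peak P-V curve is a stand-in, not a real module model.

```python
def perturb_and_observe(pv_power, v0=20.0, step=0.5, iters=200):
    """Generic P&O MPPT: repeatedly perturb the operating voltage and keep
    moving in whichever direction increases the measured output power."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:              # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v

# Stand-in P-V curve with a single maximum near 17 V
curve = lambda v: max(0.0, -0.5 * (v - 17.0) ** 2 + 60.0)
v_mpp = perturb_and_observe(curve)
```

P&O oscillates around the maximum power point with an amplitude set by the fixed step size, which is precisely the behavior that fuzzy and neural schemes try to improve on under rapidly changing irradiance.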

  10. Impacts of Satellite Orbit and Clock on Real-Time GPS Point and Relative Positioning.

    Science.gov (United States)

    Shi, Junbo; Wang, Gaojing; Han, Xianquan; Guo, Jiming

    2017-06-12

    Satellite orbit and clock corrections are always treated as known quantities in GPS positioning models. Therefore, any error in the satellite orbit and clock products can have significant consequences for GPS positioning, especially for real-time applications. Currently, three types of satellite products are available for real-time positioning: the broadcast ephemeris, the International GNSS Service (IGS) predicted ultra-rapid product, and the real-time product. In this study, these three predicted/real-time satellite orbit and clock products are first evaluated with respect to the post-mission IGS final product, which demonstrates cm- to m-level orbit accuracies and sub-ns- to ns-level clock accuracies. Impacts of real-time satellite orbit and clock products on GPS point and relative positioning are then investigated using the P3 and GAMIT software packages, respectively. Numerical results show that the real-time satellite clock corrections affect point positioning more significantly than the orbit corrections. In contrast, only the real-time orbit corrections impact relative positioning. Compared with the positioning solution using the IGS final product with the nominal orbit accuracy of ~2.5 cm, the real-time broadcast ephemeris with ~2 m orbit accuracy provided <2 cm relative positioning error for baselines no longer than 216 km. As for the baselines ranging from 574 to 2982 km, cm-dm level positioning error was identified for the relative positioning solution using the broadcast ephemeris. The real-time product could result in <5 mm relative positioning accuracy for baselines within 2982 km, slightly better than the predicted ultra-rapid product.

  11. Impacts of Satellite Orbit and Clock on Real-Time GPS Point and Relative Positioning

    Directory of Open Access Journals (Sweden)

    Junbo Shi

    2017-06-01

    Full Text Available Satellite orbit and clock corrections are always treated as known quantities in GPS positioning models. Therefore, any error in the satellite orbit and clock products can have significant consequences for GPS positioning, especially for real-time applications. Currently, three types of satellite products are available for real-time positioning: the broadcast ephemeris, the International GNSS Service (IGS) predicted ultra-rapid product, and the real-time product. In this study, these three predicted/real-time satellite orbit and clock products are first evaluated with respect to the post-mission IGS final product, which demonstrates cm- to m-level orbit accuracies and sub-ns- to ns-level clock accuracies. Impacts of real-time satellite orbit and clock products on GPS point and relative positioning are then investigated using the P3 and GAMIT software packages, respectively. Numerical results show that the real-time satellite clock corrections affect point positioning more significantly than the orbit corrections. In contrast, only the real-time orbit corrections impact relative positioning. Compared with the positioning solution using the IGS final product with the nominal orbit accuracy of ~2.5 cm, the real-time broadcast ephemeris with ~2 m orbit accuracy provided <2 cm relative positioning error for baselines no longer than 216 km. As for the baselines ranging from 574 to 2982 km, cm–dm level positioning error was identified for the relative positioning solution using the broadcast ephemeris. The real-time product could result in <5 mm relative positioning accuracy for baselines within 2982 km, slightly better than the predicted ultra-rapid product.

  12. A real-time GNSS-R system based on software-defined radio and graphics processing units

    Science.gov (United States)

    Hobiger, Thomas; Amagai, Jun; Aida, Masanori; Narita, Hideki

    2012-04-01

    Reflected signals of the Global Navigation Satellite System (GNSS) from the sea or land surface can be utilized to deduce and monitor physical and geophysical parameters of the reflecting area. Unlike most other remote sensing techniques, GNSS-Reflectometry (GNSS-R) operates as a passive radar that takes advantage of the increasing number of navigation satellites that broadcast their L-band signals. To date, however, most GNSS-R receiver architectures have been based on dedicated hardware solutions. Software-defined radio (SDR) technology has advanced in recent years and enabled signal processing in real time, which makes it an ideal candidate for the realization of a flexible GNSS-R system. Additionally, modern commodity graphics cards, which offer massive parallel computing performance, can handle the whole signal processing chain without interfering with the PC's CPU. Thus, this paper describes a GNSS-R system which has been developed on the principles of software-defined radio supported by General Purpose Graphics Processing Units (GPGPUs), and presents results from initial field tests which confirm the anticipated capability of the system.

  13. Joint classification and contour extraction of large 3D point clouds

    Science.gov (United States)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several million points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and for handling strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This makes it possible both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems: point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and a small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10^9 points.
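
The multi-scale per-point neighborhood formulation credited above can be sketched with classic eigenvalue (covariance) shape features. The radii, the two features, and the brute-force neighbor search below are illustrative assumptions, not the authors' exact pipeline, which uses optimized data structures for millions of points.

```python
import numpy as np

def multiscale_features(points, radii=(0.5, 1.0, 2.0)):
    """Per-point, per-scale eigenvalue features (linearity, planarity) of the
    local neighborhood covariance. Brute-force O(n^2) neighbor search is used
    here purely for clarity."""
    n = len(points)
    feats = np.zeros((n, 2 * len(radii)))
    # Pairwise squared distances between all points
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    for j, r in enumerate(radii):
        for i in range(n):
            nbrs = points[d2[i] <= r * r]
            if len(nbrs) < 3:
                continue  # too few neighbors at this scale
            ev = np.linalg.eigvalsh(np.cov(nbrs.T))[::-1]  # l1 >= l2 >= l3
            ev = np.maximum(ev, 1e-12)
            feats[i, 2 * j] = (ev[0] - ev[1]) / ev[0]      # linearity
            feats[i, 2 * j + 1] = (ev[1] - ev[2]) / ev[0]  # planarity
    return feats

# Points sampled along a straight line should score high linearity
line = np.c_[np.linspace(0, 5, 50), np.zeros(50), np.zeros(50)]
f = multiscale_features(line, radii=(1.0,))
```

Computing such features at several radii is what lets a classifier distinguish, say, a wire (linear at all scales) from the edge of a planar facade (linear at small scales, planar at large ones).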

  14. Software Defined Cyberinfrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Foster, Ian; Blaiszik, Ben; Chard, Kyle; Chard, Ryan

    2017-07-17

    Within and across thousands of science labs, researchers and students struggle to manage data produced in experiments, simulations, and analyses. Largely manual research data lifecycle management processes mean that much time is wasted, research results are often irreproducible, and data sharing and reuse remain rare. In response, we propose a new approach to data lifecycle management in which researchers are empowered to define the actions to be performed at individual storage systems when data are created or modified: actions such as analysis, transformation, copying, and publication. We term this approach software-defined cyberinfrastructure because users can implement powerful data management policies by deploying rules to local storage systems, much as software-defined networking allows users to configure networks by deploying rules to switches. We argue that this approach can enable a new class of responsive distributed storage infrastructure that will accelerate research innovation by allowing any researcher to associate data workflows with data sources, whether local or remote, for such purposes as data ingest, characterization, indexing, and sharing. We report on early experiments with this approach in the context of experimental science, in which a simple if-trigger-then-action (IFTA) notation is used to define rules.
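
The if-trigger-then-action idea can be sketched as a tiny rule engine. The rule, event fields, and action below are hypothetical illustrations of the concept, not the IFTA notation itself.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: Callable[[dict], bool]  # predicate over a storage event
    action: Callable[[dict], str]    # what to do when the trigger fires

def dispatch(event, rules):
    """Run every rule whose trigger matches the event; collect action results."""
    return [rule.action(event) for rule in rules if rule.trigger(event)]

# Hypothetical rule: when a new .h5 file is created, queue it for indexing
rules = [Rule(trigger=lambda e: e["type"] == "create" and e["path"].endswith(".h5"),
              action=lambda e: f"index {e['path']}")]
out = dispatch({"type": "create", "path": "/data/run42.h5"}, rules)
```

Deploying such rules at each storage system, rather than in a central workflow engine, is the "software-defined" analogy: policy lives where the data lives.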

  15. Causal boundary for stably causal space-times

    International Nuclear Information System (INIS)

    Racz, I.

    1987-12-01

    The usual boundary constructions for space-times often yield an unsatisfactory boundary set. This problem is reviewed and a new solution is proposed. An explicit identification rule is given on the set of the ideal points of the space-time. This construction leads to a satisfactory boundary point set structure for stably causal space-times. The topological properties of the resulting causal boundary construction are examined. For the stably causal space-times each causal curve has a unique endpoint on the boundary set according to the extended Alexandrov topology. The extension of the space-time through the boundary is discussed. To describe the singularities the defined boundary sets have to be separated into two disjoint sets. (D.Gy.) 8 refs

  16. Self-Similar Spin Images for Point Cloud Matching

    Science.gov (United States)

    Pulido, Daniel

    The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds has allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) has provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research will address the problem of fusing two point clouds from potentially different sources. Specifically, we will consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds. 
The specific focus of this research will be in developing a novel local descriptor

  17. Geospatial exposure to point-of-sale tobacco: real-time craving and smoking-cessation outcomes.

    Science.gov (United States)

    Kirchner, Thomas R; Cantrell, Jennifer; Anesetti-Rothermel, Andrew; Ganz, Ollie; Vallone, Donna M; Abrams, David B

    2013-10-01

    Little is known about the factors that drive the association between point-of-sale marketing and behavior, because methods that directly link individual-level use outcomes to real-world point-of-sale exposure are only now beginning to be developed. Daily outcomes during smoking cessation were examined as a function of both real-time geospatial exposure to point-of-sale tobacco (POST) and subjective craving to smoke. Continuous individual geospatial location data collected over the first month of a smoking-cessation attempt in 2010-2012 (N=475) were overlaid on a POST outlet geodatabase (N=1060). Participants' mobility data were used to quantify the number of times they came into contact with a POST outlet. Participants recorded real-time craving levels and smoking status via ecological momentary assessment (EMA) on cellular telephones. The final data set spanned a total of 12,871 days of EMA and geospatial tracking. Lapsing was significantly more likely on days with any POST contact (OR=1.19, 95% CI=1.18, 1.20), and increasingly likely as the number of daily POST contacts increased (OR=1.07, 95% CI=1.06, 1.08). Overall, daily POST exposure was significantly associated with lapsing when craving was low (OR=1.22, 95% CI=1.20, 1.23); high levels of craving were more directly associated with lapse outcomes. These data shed light on the way mobility patterns drive a dynamic interaction between individuals and the POST environment, demonstrating that quantification of individuals' exposure to POST marketing can be used to identify previously unrecognized patterns of association among individual mobility, the built environment, and behavioral outcomes. © 2013 American Journal of Preventive Medicine.
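
Linking individual mobility fixes to outlet locations, as described above, reduces to a point-in-radius contact count. A minimal sketch follows; the haversine distance, the 100 m contact radius, and the data layout are illustrative assumptions, not the study's actual geodatabase procedure.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    R = 6371000.0  # mean Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def daily_post_contacts(track, outlets, radius_m=100.0):
    """Count location fixes in a day's track that fall within radius_m of
    any point-of-sale outlet."""
    return sum(1 for lat, lon in track
               if any(haversine_m(lat, lon, olat, olon) <= radius_m
                      for olat, olon in outlets))
```

Aggregating such counts per participant-day yields exactly the kind of exposure covariate the study relates to lapse outcomes.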

  18. Experimental demonstration of bandwidth on demand (BoD) provisioning based on time scheduling in software-defined multi-domain optical networks

    Science.gov (United States)

    Zhao, Yongli; Li, Yajie; Wang, Xinbo; Chen, Bowen; Zhang, Jie

    2016-09-01

    A hierarchical software-defined networking (SDN) control architecture is designed for multi-domain optical networks with the OpenDaylight (ODL) controller. The OpenFlow-based Control Virtual Network Interface (CVNI) protocol is deployed between the network orchestrator and the domain controllers. Then, a dynamic bandwidth on demand (BoD) provisioning solution is proposed based on time scheduling in software-defined multi-domain optical networks (SD-MDON). Shared Risk Link Group (SRLG)-disjoint routing schemes are adopted to separate each tenant for reliability. The SD-MDON testbed is built based on the proposed hierarchical control architecture. Then the proposed time scheduling-based BoD (Ts-BoD) solution is experimentally demonstrated on the testbed. The performance of the Ts-BoD solution is evaluated with respect to blocking probability, resource utilization, and lightpath setup latency.

  19. The recovery of a time-dependent point source in a linear transport equation: application to surface water pollution

    International Nuclear Information System (INIS)

    Hamdi, Adel

    2009-01-01

    The aim of this paper is to localize the position of a point source and recover the history of its time-dependent intensity function that is both unknown and constitutes the right-hand side of a 1D linear transport equation. Assuming that the source intensity function vanishes before reaching the final control time, we prove that recording the state with respect to the time at two observation points framing the source region leads to the identification of the source position and the recovery of its intensity function in a unique manner. Note that at least one of the two observation points should be strategic. We establish an identification method that determines quasi-explicitly the source position and transforms the task of recovering its intensity function into solving directly a well-conditioned linear system. Some numerical experiments done on a variant of the water pollution BOD model are presented

  20. Study of time dependence and spectral composition of the signal in circuit of ac electric point motors

    Directory of Open Access Journals (Sweden)

    S. Yu. Buryak

    2014-12-01

    Full Text Available Purpose. The paper aims to establish how the time-domain behavior and spectral components of the current in the circuit of an AC electric point motor depend on its technical condition, and to identify features common to the same type of damage. Analysis of the recorded signals should enable remote diagnosis and identification of faults and defects in electric point motors, and accelerate the search for failures, malfunctions, and damage. The authors propose an automated approach to servicing remote trackside automation equipment located within the train clearance envelope, reducing the threat to the life and health of staff by shortening the time spent in the zone of train movement, and reducing the influence of human factors on service results. Methodology. The paper examines the structure, parameters, operating characteristics, and maintenance features of AC electric point motors; determines the main types of faults that can arise depending on operating conditions; and presents the electric motor as an object of diagnosis. Findings. The time dependences of the current in the electric point motor circuit were obtained for its various states. A connection was established between the technical condition of the motor and the behavior of the current curve in the time and spectral domains, and the observed deviations from the reference signal were explained. Conclusions were drawn from the obtained results. Originality. A method is proposed for diagnosing the state of an AC electric point motor from the time dependence and spectral composition of the current in its circuit. A connection scheme to the motor windings was applied that does not disturb the electrical parameters of the connection circuit under actual operating conditions. Practical value. The obtained results suggest the possibility and feasibility of

  1. Combinations of Epoch Durations and Cut-Points to Estimate Sedentary Time and Physical Activity among Adolescents

    Science.gov (United States)

    Fröberg, Andreas; Berg, Christina; Larsson, Christel; Boldemann, Cecilia; Raustorp, Anders

    2017-01-01

    The purpose of the current study was to investigate how combinations of different epoch durations and cut-points affect the estimations of sedentary time and physical activity in adolescents. Accelerometer data from 101 adolescents were derived and 30 combinations were used to estimate sedentary time, light, moderate, vigorous, and combined…
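
The interaction studied above can be sketched directly: raw counts are re-integrated into epochs of a chosen duration, then labeled against intensity cut-points scaled to that duration. The cut-point values below are illustrative assumptions, not the combinations evaluated in the study.

```python
def reintegrate(counts_1s, epoch_s):
    """Sum 1-s accelerometer counts into epochs of epoch_s seconds."""
    return [sum(counts_1s[i:i + epoch_s]) for i in range(0, len(counts_1s), epoch_s)]

def classify(epoch_counts, epoch_s, cutpoints=(100, 2296, 4012)):
    """Label each epoch by intensity. Cut-points are defined per 60 s and
    scaled to the epoch length; the values here are illustrative only."""
    sed, mod, vig = (c * epoch_s / 60.0 for c in cutpoints)
    labels = []
    for c in epoch_counts:
        if c < sed:
            labels.append("sedentary")
        elif c < mod:
            labels.append("light")
        elif c < vig:
            labels.append("moderate")
        else:
            labels.append("vigorous")
    return labels
```

Because shorter epochs capture brief bursts that longer epochs average away, the same raw data can yield quite different sedentary and activity estimates depending on the epoch/cut-point combination, which is the effect the study quantifies.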

  2. New definitions of pointing stability - ac and dc effects. [constant and time-dependent pointing error effects on image sensor performance

    Science.gov (United States)

    Lucke, Robert L.; Sirlin, Samuel W.; San Martin, A. M.

    1992-01-01

    For most imaging sensors, a constant (dc) pointing error is unimportant (unless large), but time-dependent (ac) errors degrade performance by either distorting or smearing the image. When properly quantified, the separation of the root-mean-square effects of random line-of-sight motions into dc and ac components can be used to obtain the minimum necessary line-of-sight stability specifications. The relation between stability requirements and sensor resolution is discussed, with a view to improving communication between the data analyst and the control systems engineer.
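
The separation of root-mean-square line-of-sight motion into dc and ac components described above is the standard mean/fluctuation split, with rms^2 = dc^2 + ac^2. A minimal numeric sketch:

```python
import math

def dc_ac_split(samples):
    """Split a pointing-error time series into its dc component (mean offset)
    and ac component (rms fluctuation about the mean); returns (dc, ac, rms)
    with rms^2 = dc^2 + ac^2."""
    n = len(samples)
    dc = sum(samples) / n
    ac = math.sqrt(sum((x - dc) ** 2 for x in samples) / n)
    rms = math.sqrt(sum(x * x for x in samples) / n)
    return dc, ac, rms

dc, ac, rms = dc_ac_split([1.0, 1.0, 3.0, 3.0])
```

Only the ac term smears or distorts the image, so a stability budget stated as a single rms number can be overly conservative: the same rms with a large dc share is far more benign.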

  3. A flat Chern-Simons gauge theory for (2+1)-dimensional gravity coupled to point particles

    International Nuclear Information System (INIS)

    Grignani, G.; Nardelli, G.

    1991-01-01

    We present a classical ISO(2,1) Chern-Simons gauge theory for planar gravity coupled to point-like sources. The theory is defined in terms of flat coordinates whose relation with the space-time coordinates is established. Though flat, the theory is equivalent to Einstein's, as we show explicitly in two examples. (orig.)

  4. Inflection point inflation and time dependent potentials in string theory

    International Nuclear Information System (INIS)

    Itzhaki, Nissan; Kovetz, Ely D.

    2007-01-01

    We consider models of inflection point inflation. The main drawback of such models is that they suffer from the overshoot problem: the initial conditions must be fine-tuned to lie near the inflection point for the universe to inflate. We show that stringy realizations of inflection point inflation are common and offer a natural resolution to the overshoot problem

  5. Revealing plant cryptotypes: defining meaningful phenotypes among infinite traits.

    Science.gov (United States)

    Chitwood, Daniel H; Topp, Christopher N

    2015-04-01

    The plant phenotype is infinite. Plants vary morphologically and molecularly over developmental time, in response to the environment, and genetically. Exhaustive phenotyping remains not only out of reach, but is also the limiting factor to interpreting the wealth of genetic information currently available. Although phenotyping methods are always improving, an impasse remains: even if we could measure the entirety of phenotype, how would we interpret it? We propose the concept of cryptotype to describe latent, multivariate phenotypes that maximize the separation of a priori classes. Whether the infinite points comprising a leaf outline or shape descriptors defining root architecture, statistical methods to discern the quantitative essence of an organism will be required as we approach measuring the totality of phenotype. Copyright © 2015 Elsevier Ltd. All rights reserved.
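
A "cryptotype" as defined above, a latent multivariate axis that maximizes the separation of a priori classes, is in its simplest two-class form Fisher's linear discriminant. The sketch below uses synthetic stand-in trait data; nothing here comes from the paper.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher LDA axis for two classes: the direction w maximizing
    between-class separation relative to within-class scatter,
    w proportional to Sw^-1 (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 0.5, size=(100, 2))  # e.g. genotype A trait measurements
X1 = rng.normal([2.0, 0.0], 0.5, size=(100, 2))  # genotype B, shifted in trait 1
w = fisher_direction(X0, X1)
```

Projecting high-dimensional trait measurements onto such an axis is one way to compress "infinite" phenotype into a single interpretable score that best separates known groups.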

  6. The motion of a classical spinning point particle in a Riemann-Cartan space-time

    International Nuclear Information System (INIS)

    Amorim, R.

    1983-01-01

    A consistent set of equations of motion for classical charged point particles with spin and magnetic dipole moment in a Riemann-Cartan space-time is generated from a generalized Lagrangian formalism. The equations avoid the spurious free helicoidal solutions and at the same time conserve the canonical normalization condition of the 4-velocity. The 4-velocity and the mechanical momentum are parallel in this theory, where the condition of orthogonality between the spin and the 4-velocity is treated as a non-holonomic one. (Author) [pt

  7. Benchmark models, planes lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    AbdusSalam, S.S.; Allanach, B.C.; Dreiner, H.K.

    2012-03-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  8. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  9. Benchmark Models, Planes, Lines and Points for Future SUSY Searches at the LHC

    CERN Document Server

    AbdusSalam, S S; Dreiner, H K; Ellis, J; Ellwanger, U; Gunion, J; Heinemeyer, S; Krämer, M; Mangano, M L; Olive, K A; Rogerson, S; Roszkowski, L; Schlaffer, M; Weiglein, G

    2011-01-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  10. Beginning SharePoint 2010 Administration Windows SharePoint Foundation 2010 and Microsoft SharePoint Server 2010

    CERN Document Server

    Husman, Göran

    2010-01-01

    Complete coverage of the latest advances in SharePoint 2010 administration. SharePoint 2010 comprises an abundance of new features, and this book shows you how to take advantage of all of SharePoint 2010's many improvements. Written by a four-time SharePoint MVP, Beginning SharePoint 2010 Administration begins by comparing SharePoint 2010 with the previous version, and then examines the differences between WSS 4.0 and MSS 2010. Packed with step-by-step instructions, tips and tricks, and real-world examples, this book dives into the basics of how to install, manage, and administer

  11. Job Demands, Burnout, and Teamwork in Healthcare Professionals Working in a General Hospital that Was Analysed At Two Points in Time

    Science.gov (United States)

    Mijakoski, Dragan; Karadzhinska-Bislimovska, Jovanka; Stoleski, Sasho; Minov, Jordan; Atanasovska, Aneta; Bihorac, Elida

    2018-01-01

    AIM: The purpose of the paper was to assess job demands, burnout, and teamwork in healthcare professionals (HPs) working in a general hospital that was analysed at two points in time with a time lag of three years. METHODS: Time 1 respondents (N = 325) were HPs who participated during the first wave of data collection (2011). Time 2 respondents (N = 197) were HPs from the same hospital who responded at Time 2 (2014). Job demands, burnout, and teamwork were measured with the Hospital Experience Scale, Maslach Burnout Inventory, and Hospital Survey on Patient Safety Culture, respectively. RESULTS: Significantly higher scores of emotional exhaustion were found at Time 2 (21.03 vs. 15.37, t = 5.1, p Teamwork levels were similar at both points in time (Time 1 = 3.84 vs. Time 2 = 3.84, t = 0.043, p = 0.97). CONCLUSION: This longitudinal study revealed significantly higher mean values of emotional exhaustion and depersonalization in 2014, which could be explained by significantly increased job demands between the analysed points in time. PMID:29731948

  12. Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model

    Science.gov (United States)

    Musekiwa, Alfred; Manda, Samuel O. M.; Mwambi, Henry G.; Chen, Ding-Geng

    2016-01-01

    Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes where we contrast different covariance structures for dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and utilize a practical example involving a meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18 and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results. PMID:27798661
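
The multivariate alternative to separate univariate syntheses can be sketched as fixed-effect generalized least squares over effect-size vectors: each study i contributes a vector y_i of effect sizes at the time points with within-study covariance V_i, and the pooled estimate is (sum V_i^-1)^-1 sum V_i^-1 y_i. The numbers below are synthetic illustrations, not the cancer-trial data, and the fixed-effect model is a simplification of the mixed models the paper uses.

```python
import numpy as np

def mv_fixed_effect(effects, covs):
    """Pooled multivariate fixed-effect meta-analytic estimate:
    beta = (sum V_i^-1)^-1 * sum V_i^-1 y_i, with covariance (sum V_i^-1)^-1."""
    W = [np.linalg.inv(V) for V in covs]          # per-study precision matrices
    precision = sum(W)
    beta = np.linalg.solve(precision, sum(Wi @ y for Wi, y in zip(W, effects)))
    return beta, np.linalg.inv(precision)

# Two synthetic studies reporting effect sizes at two time points, with
# within-study correlation 0.5 between the time points
y1, y2 = np.array([-0.3, -0.4]), np.array([-0.5, -0.2])
V = np.array([[0.04, 0.02], [0.02, 0.04]])
beta, cov = mv_fixed_effect([y1, y2], [V, V])
```

With equal within-study covariances the pooled vector is simply the mean of the study vectors, but with unequal covariances the off-diagonal (serial correlation) terms let a precisely measured time point inform its neighbors, which is the gain over univariate pooling.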

  13. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data is reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. 
When event risk is common, such as in high risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are
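
A minimal two-state cohort sketch can illustrate the trade-off the review identifies between constant and time-dependent event rates (all numbers are hypothetical, not drawn from any of the reviewed models):

```python
import numpy as np

def markov_cohort(n_years, event_risk):
    """Two-state (event-free / post-event) cohort model.
    event_risk: callable year -> annual probability of a first event."""
    event_free = 1.0
    cum_events = []
    for year in range(n_years):
        p = event_risk(year)
        event_free *= (1.0 - p)          # survivors of this cycle
        cum_events.append(1.0 - event_free)
    return np.array(cum_events)

# Constant annual risk vs. a risk that rises over model time
constant = markov_cohort(20, lambda t: 0.02)
time_dep = markov_cohort(20, lambda t: 0.01 + 0.002 * t)
```

With low, roughly constant risk the two curves stay close for the first cycles, which is why the review finds simple constant-rate structures adequate for infrequent events.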

  14. Stable long-time semiclassical description of zero-point energy in high-dimensional molecular systems.

    Science.gov (United States)

    Garashchuk, Sophya; Rassolov, Vitaly A

    2008-07-14

    Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.

  15. Defining Social Class Across Time and Between Groups.

    Science.gov (United States)

    Cohen, Dov; Shin, Faith; Liu, Xi; Ondish, Peter; Kraus, Michael W

    2017-11-01

    We examined changes over four decades and between ethnic groups in how people define their social class. Changes included the increasing importance of income, the decreasing importance of occupational prestige, and the demise of the "Victorian bargain," in which poor people who subscribed to conservative sexual and religious norms could think of themselves as middle class. The period also saw changes (among Whites) and continuity (among Black Americans) in subjective status perceptions. For Whites (and particularly poor Whites), perceptions of enhanced social class were greatly reduced. Poor Whites now view their social class as slightly but significantly lower than their poor Black and Latino counterparts. For Black respondents, a caste-like understanding of social class persisted, as they continued to view their class standing as relatively independent of their achieved education, income, and occupation. Such achievement indicators, however, predicted Black respondents' self-esteem more than they predicted self-esteem for any other group.

  16. Benchmarking and improving point cloud data management in MonetDB

    NARCIS (Netherlands)

    Martinez-Rubi, O.; Van Oosterom, P.J.M.; Goncalves, R.; Tijssen, T.P.M.; Ivanova, M.; Kersten, M.L.; Alvanaki, F.

    2015-01-01

    The popularity, availability and sizes of point cloud data sets are increasing, thus raising interesting data management and processing challenges. Various software solutions are available for the management of point cloud data. A benchmark for point cloud data management systems was defined and it

  17. Neutrosophic Crisp Points & Neutrosophic Crisp Ideals

    Directory of Open Access Journals (Sweden)

    A. A. Salama

    2013-03-01

    The purpose of this paper is to define the so-called "neutrosophic crisp points" and "neutrosophic crisp ideals", and to obtain their fundamental properties. Possible applications to GIS topology rules are touched upon.

  18. [Multiple time scales analysis of spatial differentiation characteristics of non-point source nitrogen loss within watershed].

    Science.gov (United States)

    Liu, Mei-bing; Chen, Xing-wei; Chen, Ying

    2015-07-01

    Identification of the critical source areas of non-point source pollution is an important means to control non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulated total nitrogen (TN) loss intensity of all 38 subbasins, the spatial distribution characteristics of nitrogen loss and the critical source areas were analyzed at three time scales: yearly average, monthly average, and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed that there were significant spatial differences in TN loss in the Shanmei Reservoir watershed at different time scales, and the spatial differentiation degree of nitrogen loss was in the order of monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At different time scales, land use types (such as farmland and forest) were always the dominant factor affecting the spatial distribution of nitrogen loss, whereas precipitation and runoff affected nitrogen loss only in months without fertilization and during several storm flood processes on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.

  19. Use of Time-Frequency Analysis and Neural Networks for Mode Identification in a Wireless Software-Defined Radio Approach

    Directory of Open Access Journals (Sweden)

    Matteo Gandetto

    2004-09-01

    The use of time-frequency distributions is proposed as a nonlinear signal processing technique that is combined with a pattern recognition approach to identify superimposed transmission modes in a reconfigurable wireless terminal based on software-defined radio techniques. In particular, a software-defined radio receiver is described aiming at the identification of two coexistent communication modes: frequency hopping code division multiple access and direct sequence code division multiple access. As a case study, two standards, based on the previous modes and operating in the same band (industrial, scientific, and medical), are considered: IEEE WLAN 802.11b (direct sequence) and Bluetooth (frequency hopping). Neural classifiers are used to obtain identification results. A comparison between two different neural classifiers is made in terms of relative error frequency.

  20. Fixed Points in Discrete Models for Regulatory Genetic Networks

    Directory of Open Access Journals (Sweden)

    Orozco Edusmildo

    2007-01-01

    It is desirable to have efficient mathematical methods to extract information about regulatory iterations between genes from repeated measurements of gene transcript concentrations. One piece of information is of interest when the dynamics reaches a steady state. In this paper we develop tools that enable the detection of steady states that are modeled by fixed points in discrete finite dynamical systems. We discuss two algebraic models, a univariate model and a multivariate model. We show that these two models are equivalent and that one can be converted to the other by means of a discrete Fourier transform. We give a new, more general definition of a linear finite dynamical system and we give a necessary and sufficient condition for such a system to be a fixed point system, that is, all cycles are of length one. We show how this result for generalized linear systems can be used to determine when certain nonlinear systems (monomial dynamical systems over finite fields are fixed point systems. We also show how it is possible to determine in polynomial time when an ordinary linear system (defined over a finite field is a fixed point system. We conclude with a necessary condition for a univariate finite dynamical system to be a fixed point system.
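
The defining property of a fixed point system, every cycle of length one, can be checked by brute force on small state spaces. The sketch below is illustrative only (it is not the paper's polynomial-time test); the monomial and linear maps over F_2 are assumed examples:

```python
from itertools import product

def is_fixed_point_system(f, states):
    """True iff every cycle of the map f on the finite state set has length one."""
    states = list(states)
    for x in states:
        z = x
        for _ in range(len(states)):   # enough iterations to land on a cycle
            z = f(z)
        if f(z) != z:                  # z lies on a cycle; it must be fixed
            return False
    return True

states = list(product(range(2), repeat=2))

def monomial(x):                       # (a, b) -> (a*b, b) over F_2
    a, b = x
    return ((a * b) % 2, b)

def linear(x):                         # x -> A x (mod 2), with A = [[1, 0], [1, 1]]
    a, b = x
    return (a % 2, (a + b) % 2)
```

Here `monomial` is a fixed point system (every trajectory ends in a fixed point), while `linear` contains the 2-cycle (1,0) -> (1,1) -> (1,0) and is not.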

  1. Expansion of a stochastic stationary optical field at a fixed point

    International Nuclear Information System (INIS)

    Martinez-Herrero, R.; Mejias, P.M.

    1984-01-01

    An important problem in single and multifold photoelectron statistics is to determine the statistical properties of a totally polarized optical field at some point →r from the photoelectron counts registered by the detector. The solution to this problem may be found in the determination of the statistical properties of an integral over a stochastic process; a complicated and formidable task. This problem can be solved in some cases of interest by expanding the process V(t) (which represents the field at →r) in a set of complete orthonormal deterministic functions, resulting in the so-called Karhunen-Loeve expansion of V(t). Two disadvantages are that the process must be defined over a finite time interval, and that each term of the series does not represent any special optical field. Taking into account these limitations of the expansion, the purpose of this work is to find another alternative expansion of stationary optical fields defined over the infinite time interval, and whose terms represent stochastic fields
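
On a finite time grid, a Karhunen-Loève expansion reduces to an eigendecomposition of the covariance matrix. The sketch below, assuming an exponential covariance for illustration, shows the mechanics of the expansion discussed above (not the paper's alternative expansion over the infinite interval):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
# Assumed stationary covariance: C(s, t) = exp(-|s - t| / tau)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)

# Discrete Karhunen-Loeve basis: eigenvectors of the covariance matrix,
# ordered by decreasing eigenvalue
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# Draw one sample path of the process and expand it in the KL basis
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(t)))
path = L @ rng.standard_normal(len(t))
coeffs = evecs.T @ path
# A few leading modes already capture most of the variance
recon = evecs[:, :20] @ coeffs[:20]
```

The full expansion reproduces the path exactly; the truncated one is the usual low-rank approximation used in photoelectron statistics calculations.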

  2. Job Demands, Burnout, and Teamwork in Healthcare Professionals Working in a General Hospital that Was Analysed At Two Points in Time.

    Science.gov (United States)

    Mijakoski, Dragan; Karadzhinska-Bislimovska, Jovanka; Stoleski, Sasho; Minov, Jordan; Atanasovska, Aneta; Bihorac, Elida

    2018-04-15

    The purpose of the paper was to assess job demands, burnout, and teamwork in healthcare professionals (HPs) working in a general hospital that was analysed at two points in time with a time lag of three years. Time 1 respondents (N = 325) were HPs who participated during the first wave of data collection (2011). Time 2 respondents (N = 197) were HPs from the same hospital who responded at Time 2 (2014). Job demands, burnout, and teamwork were measured with the Hospital Experience Scale, the Maslach Burnout Inventory, and the Hospital Survey on Patient Safety Culture, respectively. Significantly higher scores of emotional exhaustion (21.03 vs. 15.37, t = 5.1) and higher job demands were found at Time 2. Teamwork levels were similar at both points in time (Time 1 = 3.84 vs. Time 2 = 3.84, t = 0.043, p = 0.97). The longitudinal study revealed significantly higher mean values of emotional exhaustion and depersonalization in 2014, which could be explained by the significant increase in job demands between the analysed points in time.

  3. Process for structural geologic analysis of topography and point data

    Science.gov (United States)

    Eliason, Jay R.; Eliason, Valerie L. C.

    1987-01-01

    A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered, and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes representing the underlying geologic structure. Point data such as fracture phenomena, which can be related to fracture planes in 3-dimensional space, can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.
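
The valley-bottom detection step can be approximated with a simple transverse local-minimum filter on a raster terrain grid; this sketch omits the filtering, vectorization, and coplanar-analysis stages of the described process:

```python
import numpy as np

def valley_bottoms(dem):
    """Flag interior cells that are lower than both neighbours in at least
    one direction (across-valley minima on a raster terrain grid)."""
    z = np.asarray(dem, dtype=float)
    core = z[1:-1, 1:-1]
    ew = (core < z[1:-1, :-2]) & (core < z[1:-1, 2:])   # east-west minima
    ns = (core < z[:-2, 1:-1]) & (core < z[2:, 1:-1])   # north-south minima
    mask = np.zeros(z.shape, dtype=bool)
    mask[1:-1, 1:-1] = ew | ns
    return mask

# Synthetic terrain: a tilted plane with a V-shaped valley along column 10
y, x = np.mgrid[0:20, 0:20]
dem = 0.1 * y + np.abs(x - 10)
mask = valley_bottoms(dem)
```

On this terrain the filter flags exactly the interior cells of the valley floor, which a follow-on step would chain into line segments.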

  4. Quantum theory with an energy operator defined as a quartic form of the momentum

    Energy Technology Data Exchange (ETDEWEB)

    Bezák, Viktor, E-mail: bezak@fmph.uniba.sk

    2016-09-15

    Quantum theory of the non-harmonic oscillator defined by the energy operator proposed by Yurke and Buks (2006) is presented. Although these authors considered a specific problem related to a model of transmission lines in a Kerr medium, our ambition is not to discuss the physical substantiation of their model. Instead, we consider the problem from an abstract, logically deductive, viewpoint. Using the Yurke–Buks energy operator, we focus attention on the imaginary-time propagator. We derive it as a functional of the Mehler kernel and, alternatively, as an exact series involving Hermite polynomials. For a statistical ensemble of identical oscillators defined by the Yurke–Buks energy operator, we calculate the partition function, average energy, free energy and entropy. Using the diagonal element of the canonical density matrix of this ensemble in the coordinate representation, we define a probability density, which appears to be a deformed Gaussian distribution. A peculiarity of this probability density is that it may reveal, when plotted as a function of the position variable, a shape with two peaks located symmetrically with respect to the central point.

  5. Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem

    Science.gov (United States)

    Omagari, Hiroki; Higashino, Shin-Ichiro

    2018-04-01

    In this paper, we propose a new evolutionary multi-objective optimization method for solving drone delivery problems (DDP), which can be formulated as constrained multi-objective optimization problems. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method needs the optimal value of each objective function to be calculated in advance, and it does not consider constraint conditions other than the objective functions. Therefore, it cannot be applied to DDP, which has many constraint conditions. To solve these issues, we propose the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions, and a new reference solution named the "provisional ideal point" to search for the solution preferred by a decision maker. In this way, we eliminate the preliminary calculations and the limited application scope. Results on benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to DDP: the delivery path combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with using only one truck.
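
The penalty-value idea, scoring infeasible solutions by their total constraint violation before comparing objective quality against a reference point, can be sketched as follows (the toy problem, constraint, and ideal-point values are hypothetical; the actual provisional-ideal-point update rule is more involved):

```python
import numpy as np

def penalty(x, constraints):
    """Total violation of constraints written as g(x) <= 0; zero if feasible."""
    return sum(max(0.0, g(x)) for g in constraints)

def rank_key(x, objectives, constraints, ideal):
    """Feasibility first, then distance of the objective vector to the
    (provisional) ideal point."""
    f = np.array([obj(x) for obj in objectives])
    return (penalty(x, constraints), np.linalg.norm(f - ideal))

# Toy bi-objective problem on a scalar decision variable
objectives = [lambda x: x ** 2, lambda x: (x - 2) ** 2]
constraints = [lambda x: x - 3]          # require x <= 3
ideal = np.array([0.0, 0.0])             # provisional ideal point

candidates = [0.5, 1.0, 2.5, 4.0]
best = min(candidates, key=lambda x: rank_key(x, objectives, constraints, ideal))
```

Sorting by the tuple means an infeasible candidate (here x = 4.0) can never outrank a feasible one, mirroring the role of the penalty value in the selection step.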

  6. Vernal Point and Anthropocene

    Science.gov (United States)

    Chavez-Campos, Teodosio; Chavez S, Nadia; Chavez-Sumarriva, Israel

    2014-05-01

    The time scale was based on the internationally recognized formal chronostratigraphical/geochronological subdivisions of time: the Phanerozoic Eonothem/Eon; the Cenozoic Erathem/Era; the Quaternary System/Period; the Pleistocene and Holocene Series/Epoch. The Quaternary was divided into: (1) the Pleistocene, characterized by cycles of glaciations (intervals between 40,000 and 100,000 years); (2) the Holocene, an interglacial period that began about 12,000 years ago. It was believed that the Milankovitch cycles (eccentricity, axial tilt, and the precession of the equinoxes) were responsible for the glacial and interglacial periods. The magnetostratigraphic units have been widely used for global correlations valid for the Quaternary. The gravitational influence of the sun and moon on the equatorial bulges of the mantle of the rotating earth causes the precession of the earth. The retrograde motion of the vernal point through the zodiacal band takes 26,000 years. The vernal point passes through each constellation in an average of 2,000 years, and this period was correlated to Bond events, North Atlantic climate fluctuations occurring every ≈1,470 ± 500 years throughout the Holocene. The vernal point retrogrades one precessional degree in approximately 72 years (Gleissberg cycle) and entered the Aquarius constellation on approximately March 20, 1940. On earth this entry was verified through: a) the stability of the magnetic equator in the south central zone of Peru and in the north zone of Bolivia; b) the greater intensity of the equatorial electrojet (EEJ) in Peru and Bolivia since 1940. With the completion of the Holocene and the beginning of the Anthropocene (a term widely popularized by Paul Crutzen), the date of March 20, 1940 was proposed as the beginning of the Anthropocene. The proposed date was correlated to the work presented at IUGG (Italy 2007) with the title "Cusco base meridian for the study of geophysical data"; Cusco was

  7. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    International Nuclear Information System (INIS)

    Liu, W; Sawant, A; Ruan, D

    2016-01-01

    Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurement for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinical acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared-error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point cloud with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent performance despite the introduced occlusions. 
Conclusion: We have developed a real-time
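
A stripped-down version of the sparse-regression (SR) idea, approximating the target cloud as a sparse linear combination of training clouds, can be sketched with ordinary least squares followed by coefficient thresholding (synthetic data; the paper's model uses a proper sparsity prior and ICP-built point correspondences):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pts, n_train = 300, 12

# Training set: flattened point clouds (point correspondences assumed
# already established, e.g. by ICP as in the described pipeline)
train = rng.standard_normal((3 * n_pts, n_train))

# Target cloud built from a sparse combination of two training clouds + noise
target = (0.7 * train[:, 2] + 0.3 * train[:, 7]
          + 0.01 * rng.standard_normal(3 * n_pts))

# Least-squares fit, then keep only the large coefficients as a crude
# stand-in for the sparsity prior of the SR model
w, *_ = np.linalg.lstsq(train, target, rcond=None)
w[np.abs(w) < 0.05] = 0.0
recon = train @ w
```

The fit recovers the two active training clouds and reconstructs the target to within the noise level, which is the property the real-time propagation step relies on.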

  8. Gene expression pattern at different time points following ALA-PDT

    International Nuclear Information System (INIS)

    Verwanger, T.; Sanovic, R.; Ruhdorfer, S.; Aberger, F.; Frischauf, A.; Krammer, B.

    2003-01-01

    The photosensitizer protoporphyrin IX, endogenously accumulated from the precursor aminolevulinic acid (ALA), is a successful agent in photodynamic tumor therapy. In spite of encouraging clinical results, the basic mechanisms leading to cell death are not fully understood. We therefore set out to analyze the alteration of the gene expression pattern in the squamous cell carcinoma cell line A-431 at different time points after photodynamic treatment with endogenous protoporphyrin IX by cDNA-array technique. Cells were incubated for 16 hours with 100 μg/ml ALA and irradiated with a fluence of 3.5 J/cm², resulting in 50% survival until 8 hours post treatment. RNA was isolated at 1.5, 3, 5 and 8 hours post treatment as well as from 3 controls (untreated, light only and dark), radioactively labelled by reverse transcription with ³³P-dCTP and hybridized onto macroarray PCR filters containing PCR products of 2135 genes, which were selected for relevance in tumors, stress response and signal transduction. Verification of observed expression changes was carried out by real-time PCR. We found a strong induction of expression of immediate early genes like c-fos as well as decreased expression of genes involved in proliferation like myc and the proliferating cell nuclear antigen (PCNA). (author)

  9. Lévy based Cox point processes

    DEFF Research Database (Denmark)

    Hellmund, Gunnar; Prokesová, Michaela; Jensen, Eva Bjørn Vedel

    2008-01-01

    In this paper we introduce Lévy-driven Cox point processes (LCPs) as Cox point processes with driving intensity function Λ defined by a kernel smoothing of a Lévy basis (an independently scattered, infinitely divisible random measure). We also consider log Lévy-driven Cox point processes (LLCPs) with Λ equal to the exponential of such a kernel smoothing. Special cases are shot noise Cox processes, log Gaussian Cox processes, and log shot noise Cox processes. We study the theoretical properties of Lévy-based Cox processes, including moment properties described by nth-order product densities...
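
The simplest special case listed above, a shot noise Cox process, can be simulated by kernel-smoothing an atomic Lévy basis and thinning a dominating Poisson process; the bandwidth and weights below are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Atoms of the Levy basis: a Poisson number of unit-mass parent points
n_parents = rng.poisson(10)
parents = rng.random((n_parents, 2))

def intensity(xy, bw=0.05, weight=40.0):
    """Driving intensity: Gaussian kernel smoothing of the atomic basis."""
    d2 = ((xy[None, :] - parents) ** 2).sum(axis=1)
    return weight * np.exp(-d2 / (2 * bw ** 2)).sum()

# Inhomogeneous Poisson sampling on [0,1]^2 by thinning a dominating rate
lam_max = 40.0 * max(1, n_parents)           # upper bound on the intensity
n_cand = rng.poisson(lam_max)
cand = rng.random((n_cand, 2))
keep = rng.random(n_cand) < np.array([intensity(p) for p in cand]) / lam_max
points = cand[keep]
```

The retained points cluster around the parent atoms, which is the characteristic behaviour of shot noise Cox processes.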

  10. On Defining Mass

    Science.gov (United States)

    Hecht, Eugene

    2011-01-01

    Though central to any pedagogical development of physics, the concept of mass is still not well understood. Properly defining mass has proven to be far more daunting than contemporary textbooks would have us believe. And yet today the origin of mass is one of the most aggressively pursued areas of research in all of physics. Much of the excitement surrounding the Large Hadron Collider at CERN is associated with discovering the mechanism responsible for the masses of the elementary particles. This paper will first briefly examine the leading definitions, pointing out their shortcomings. Then, utilizing relativity theory, it will propose—for consideration by the community of physicists—a conceptual definition of mass predicated on the more fundamental concept of energy, more fundamental in that everything that has mass has energy, yet not everything that has energy has mass.

  11. Application-Defined Decentralized Access Control

    Science.gov (United States)

    Xu, Yuanzhong; Dunn, Alan M.; Hofmann, Owen S.; Lee, Michael Z.; Mehdi, Syed Akbar; Witchel, Emmett

    2014-01-01

    DCAC is a practical OS-level access control system that supports application-defined principals. It allows normal users to perform administrative operations within their privilege, enabling isolation and privilege separation for applications. It does not require centralized policy specification or management, giving applications freedom to manage their principals while the policies are still enforced by the OS. DCAC uses hierarchically-named attributes as a generic framework for user-defined policies such as groups defined by normal users. For both local and networked file systems, its execution time overhead is between 0% and 9% on file system microbenchmarks, and under 1% on applications. This paper shows the design and implementation of DCAC, as well as several real-world use cases, including sandboxing applications, enforcing server applications' security policies, supporting NFS, and authenticating user-defined sub-principals in SSH, all with minimal code changes. PMID:25426493

  12. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    International Nuclear Information System (INIS)

    Gora, D.; Bernardini, E.; Cruz Silva, A.H.

    2011-04-01

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  13. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    Energy Technology Data Exchange (ETDEWEB)

    Gora, D. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute of Nuclear Physics PAN, Cracow (Poland); Bernardini, E.; Cruz Silva, A.H. [Institute of Nuclear Physics PAN, Cracow (Poland)

    2011-04-15

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)
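
The core of the time-clustering idea, scanning windows bounded by event times and scoring each for an excess over the expected background, can be sketched with a Poisson log-likelihood-ratio score (a simplified stand-in for the unbinned likelihood; the 0.01 floor on the expectation is an assumed regularization against zero-width windows):

```python
import numpy as np

def best_flare_window(times, rate_bg):
    """Scan windows bounded by pairs of event times; score each window by a
    Poisson log-likelihood ratio of observed counts vs. the background
    expectation, and return the best (score, (t_start, t_end))."""
    times = np.sort(np.asarray(times, dtype=float))
    best = (0.0, None)
    for i in range(len(times)):
        for j in range(i, len(times)):
            n_obs = j - i + 1
            mu = max(rate_bg * (times[j] - times[i]), 0.01)  # assumed floor
            score = (n_obs * np.log(n_obs / mu) - (n_obs - mu)
                     if n_obs > mu else 0.0)
            if score > best[0]:
                best = (score, (times[i], times[j]))
    return best

# Background events over 100 days plus a short simulated flare near t = 50
rng = np.random.default_rng(3)
bg = rng.uniform(0, 100, 20)
flare = 50 + rng.uniform(0, 0.5, 8)
score, window = best_flare_window(np.concatenate([bg, flare]), rate_bg=0.2)
```

The scan singles out the short window containing the injected flare, illustrating why the method gains over a time-integrated analysis for flares much shorter than the observation period.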

  14. Exact analytical solution of time-independent neutron transport equation, and its applications to systems with a point source

    International Nuclear Information System (INIS)

    Mikata, Y.

    2014-01-01

    Highlights: • An exact solution for the one-speed neutron transport equation is obtained. • This solution as well as its derivation are believed to be new. • Neutron flux for a purely absorbing material with a point neutron source off the origin is obtained. • Spherically as well as cylindrically piecewise constant cross sections are studied. • Neutron flux expressions for a point neutron source off the origin are believed to be new. - Abstract: An exact analytical solution of the time-independent monoenergetic neutron transport equation is obtained in this paper. The solution is applied to systems with a point source. Systematic analysis of the solution of the time-independent neutron transport equation, and its applications represent the primary goal of this paper. To the best of the author’s knowledge, certain key results on the scalar neutron flux as well as their derivations are new. As an application of these results, a scalar neutron flux for a purely absorbing medium with a spherically piecewise constant cross section and an isotropic point neutron source off the origin as well as that for a cylindrically piecewise constant cross section with a point neutron source off the origin are obtained. Both of these results are believed to be new
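
For the homogeneous special case of the geometry described above, a purely absorbing medium with a single constant cross section, the uncollided scalar flux from an isotropic point source has a familiar closed form; the sketch below is that textbook kernel, not the paper's piecewise-constant solution:

```python
import numpy as np

def uncollided_flux(r, r0, sigma_t, S=1.0):
    """Scalar flux at r from an isotropic point source of strength S at r0 in
    a homogeneous, purely absorbing medium with total cross section sigma_t:
    phi = S * exp(-sigma_t * d) / (4 * pi * d**2), d = |r - r0|."""
    d = np.linalg.norm(np.asarray(r, dtype=float) - np.asarray(r0, dtype=float))
    return S * np.exp(-sigma_t * d) / (4.0 * np.pi * d ** 2)

# With no absorption the flux reduces to the inverse-square law
phi_vacuum = uncollided_flux([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.0)
phi_absorber = uncollided_flux([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.5)
```

The paper's contribution is the generalization of this kernel to spherically and cylindrically piecewise constant cross sections with the source off the origin.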

  15. TMACS I/O termination point listing. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Scaief, C.C. III

    1994-09-13

    This document provides a listing of all analog and discrete input/output (I/O) points connected to the Tank Monitor and Control System (TMACS). The list also provides other information such as the point tag name, termination location, description, drawing references and other parameters. The purpose is to define each point's unique tag name and to cross-reference the point with other associated information that may be necessary for activities such as maintenance, calibration, diagnostics, or design changes. It provides in one document a list of all I/O points that would otherwise only be available by referring to all I/O termination drawings.

  16. The notion of point efficiency in multi-objective programming

    International Nuclear Information System (INIS)

    Kampempe, B.J.D.; Manya, N.L.

    2010-01-01

    The approaches to multi-objective stochastic linear programming (PLMS) proposed so far in the literature are not fully satisfactory (9,11), so in this article we approach the PLMS problem using the concept of point efficiency. It is first necessary to define what is meant by point efficiency in the context of PLMS, and this is precisely the purpose of this article. We seek to provide definitions of efficient solutions that are not only mathematically consistent but also meaningful to a decision maker faced with such a decision problem. To this end, we use the concept of dominance in the PLMS setting, in a context where one has ordinal preferences but no utility function. In this paper, we propose to further explore the concepts of dominance and point efficiency. Indeed, the set P of efficient solutions is usually very broad and, as we shall see, may be identical to X. Accordingly, we relax the definition of the dominance relation >p in order to obtain other, less demanding types of point dominance that generate smaller, and potentially more interesting, sets of efficient solutions for a decision maker. We distinguish two other families of point dominance relations, scenario dominance and test dominance, and within the sets of efficient solutions produced by these two relations we focus on subsets of efficient solutions called sponsored and unanimous. We study the properties of these various relations and the possible links between the resulting sets of efficient solutions in order to characterize and compute them explicitly. Finally, we establish some connections between the different notions of point efficiency and the concept of a Pareto-efficient solution in the deterministic case (PLMD).

  17. A Novel Complementary Method for the Point-Scan Nondestructive Tests Based on Lamb Waves

    Directory of Open Access Journals (Sweden)

    Rahim Gorgin

    2014-01-01

    This study presents a novel area-scan damage identification method based on Lamb waves which can be used as a complementary method for point-scan nondestructive techniques. The proposed technique is able to identify the most probable locations of damage prior to the point-scan test, which decreases the time and cost of inspection. The test-piece surface was partitioned into smaller areas and the damage presence probability of each area was evaluated. The A0 mode of the Lamb wave was generated and collected using a mobile handmade transducer set at each area. Subsequently, a damage presence probability index (DPPI) based on the energy of the captured responses was defined for each area. The area with the highest DPPI value highlights the most probable locations of damage in the test-piece. Point-scan nondestructive methods can then be used once these areas are found to identify the damage in detail. The approach was validated by predicting the most probable locations of representative damages, including a through-thickness hole and a crack in aluminum plates. The obtained experimental results demonstrated the high potential of the developed method in defining the most probable locations of damage in structures.
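
One simple energy-based realization of a DPPI, consistent with the description but not necessarily the authors' exact formula, normalizes the residual energy between baseline and current responses across areas:

```python
import numpy as np

def dppi(baseline, current):
    """Damage presence probability index per area: energy of the residual
    between baseline and current responses, normalised to sum to one."""
    residual_energy = np.array([np.sum((c - b) ** 2)
                                for b, c in zip(baseline, current)])
    total = residual_energy.sum()
    return residual_energy / total if total > 0 else residual_energy

# Three areas; area 1 shows a perturbed (damage-scattered) response
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 500)
base = [np.sin(2 * np.pi * 50 * t) for _ in range(3)]
curr = [b.copy() for b in base]
curr[1] += 0.3 * rng.standard_normal(t.size)   # simulated damage-induced change
index = dppi(base, curr)
```

The area whose response deviates most from its baseline receives the highest index, flagging it for the detailed point-scan follow-up.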

  18. Decreasing Computational Time for VBBinaryLensing by Point Source Approximation

    Science.gov (United States)

    Tirrell, Bethany M.; Visgaitis, Tiffany A.; Bozza, Valerio

    2018-01-01

    The gravitational lens of a binary system produces a magnification map that is more intricate than that of a single-object lens. This map cannot be calculated analytically and one must rely on computational methods to resolve it. There are generally two methods of computing the microlensed flux of a source. One is based on ray-shooting maps (Kayser, Refsdal, & Stabell 1986), while the other is based on an application of Green's theorem. This second method finds the area of an image by calculating a Riemann integral along the image contour. VBBinaryLensing is a C++ contour integration code developed by Valerio Bozza, which utilizes this method. The parameters at which the source object could be treated as a point source, in other words, when the source is far enough from the caustic, were of interest in order to substantially decrease the computational time. The maximum and minimum values of the caustic curves produced were examined to determine the boundaries within which this simplification could be made. The code was then run for a number of different maps, with separation values and accuracies ranging from 10⁻¹ to 10⁻³, to test the theoretical model and determine a safe buffer within which minimal error is incurred by the approximation. The determined buffer was 1.5 + 5q, with q being the mass ratio. The theoretical model and the calculated points agreed for all combinations of separation values and accuracies except the map with accuracy and separation equal to 10⁻³ for y1 max. An alternative approach has to be found in order to accommodate a wider range of parameters.
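
The quoted buffer can be wrapped in a one-line check (distances in Einstein radii; the function name and interface are illustrative, not part of VBBinaryLensing):

```python
def point_source_safe(distance_to_caustic, q, base=1.5, slope=5.0):
    """True when the source lies farther from the caustic than the buffer
    determined in the study (1.5 + 5*q, with q the binary mass ratio), so
    the cheaper point-source magnification can be used."""
    return distance_to_caustic > base + slope * q

# For q = 0.01 the buffer is 1.55 Einstein radii
safe = point_source_safe(2.0, 0.01)
```

Inside the buffer the full finite-source contour integration is still required.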

  19. A comparison of the real-time and the imaginary-time formalisms of finite temperature field theory for 2-, 3-, and 4-point Green's functions

    International Nuclear Information System (INIS)

    Aurenche, P.; Becherrawy, T.

    1991-07-01

    The predictions of the real-time and the imaginary-time formalisms of Finite Temperature Field Theory are compared. Retarded and advanced amplitudes are constructed in the real-time formalism which are linear combinations of the usual time-ordered thermo-field dynamics amplitudes. These amplitudes can be easily compared to the various analytically continued amplitudes of the imaginary-time formalism. Explicit calculation of the 2-, 3-, and 4-point Green's functions in φ³ field theory is done in the one- and two-loop approximations, and the compatibility of the two formalisms is shown. (author) 17 refs., 12 figs

  20. Experience of Dual Time Point Brain F-18 FDG PET/CT Imaging in Patients with Infectious Disease

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dae Weung; Kim, Chang Guhn; Park, Soon Ah; Jung, Sang Ah [Wonkwang University School of Medicine, Iksan (Korea, Republic of)

    2010-06-15

    Dual time point FDG PET imaging (DTPI) has been considered helpful for discriminating benign from malignant disease and for staging lymph node status in patients with pulmonary malignancy. However, DTPI for benign disease has rarely been reported, and it may provide a better description of the metabolic status and extent of benign infectious disease than early imaging alone. The authors report on the use of F-18 fluorodeoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) imaging with additional delayed imaging in a 52-year-old man with sparganosis and a 70-year-old man with tuberculous meningitis. To the best of our knowledge, this is the first report on dual time point PET/CT imaging in patients with cerebral sparganosis and tuberculous meningitis.

  1. Observation of environmental radioactivity at definite time and definite point

    International Nuclear Information System (INIS)

    Inokoshi, Yukio; Fukuchi, Ryoichi; Irie, Takayuki; Hosoda, Nagako; Okano, Yasuhiro; Shindo, Koutaro

    1990-01-01

    The measurement of environmental radioactivity in Tokyo Metropolis was carried out. The objects of measurement were rainwater, atmospheric floating dust, spatial dose, and the activated sludge in sewage treatment plants. Rainwater, atmospheric floating dust, and spatial dose were analyzed mainly with respect to radioactive fallout, and activated sludge was analyzed mainly with respect to radioactive medical materials. For the analysis of nuclides, a Ge(Li) semiconductor detector was used, and the spatial dose rate was measured with a DBM-type dose rate meter. In the activated sludge, nuclides used in radioactive medicines were found, but in rainwater, atmospheric floating dust, and spatial dose, no particular abnormality was found. The objective of this investigation is to collect, over a long period, data on environmental radioactivity in Tokyo at fixed times and fixed points, thereby establishing the level of normal values and, in abnormal cases, clarifying the cause and evaluating the exposure dose. The instruments used, the measuring method for each object, and the results are reported. (K.I.)

  2. Defining cyber warfare

    Directory of Open Access Journals (Sweden)

    Dragan D. Mladenović

    2012-04-01

    Full Text Available Cyber conflicts represent a new kind of warfare that is technologically developing very rapidly. Such development results in more frequent and more intensive cyber attacks undertaken by states against adversary targets, with a wide range of diverse operations, from information operations to physical destruction of targets. Nevertheless, cyber warfare is waged through the application of the same means, techniques and methods as those used in cyber criminal, terrorism and intelligence activities. Moreover, it has a very specific nature that enables states to covertly initiate attacks against their adversaries. The starting point in defining doctrines, procedures and standards in the area of cyber warfare is determining its true nature. In this paper, a contribution to this effort was made through the analysis of the existing state doctrines and international practice in the area of cyber warfare towards the determination of its nationally acceptable definition.

  3. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    DEFF Research Database (Denmark)

    Kertzscher Schwencke, Gustavo Adolfo Vladimir; Andersen, Claus E.; Tanderup, Kari

    2014-01-01

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time-efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most ... and the AEDA's capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.

  4. Surface regions of illusory images are detected with a slower processing speed than those of luminance-defined images.

    Science.gov (United States)

    Mihaylova, Milena; Manahilov, Velitchko

    2010-11-24

    Research has shown that the processing time for discriminating illusory contours is longer than for real contours. Little is known, however, about whether the visual processes associated with detecting regions of illusory surfaces are also slower than those responsible for detecting luminance-defined images. Using a speed-accuracy trade-off (SAT) procedure, we measured accuracy as a function of processing time for detecting illusory Kanizsa-type and luminance-defined squares embedded in 2D static luminance noise. The data revealed that the illusory images were detected at a slower processing speed than the real images, while the points in time when accuracy departed from chance were not significantly different for the two stimuli. The classification images for detecting illusory and real squares showed that observers employed similar detection strategies using surface regions of the real and illusory squares. The lack of significant differences between the x-intercepts of the SAT functions for illusory and luminance-modulated stimuli suggests that the detection of surface regions of both images could be based on activation of a single mechanism (the dorsal magnocellular visual pathway). The slower speed for detecting illusory images as compared to luminance-defined images could be attributed to slower processes of filling-in of regions of illusory images within the dorsal pathway.

  5. Night-time heart rate cut-off point definition by resting office tachycardia in untreated hypertensive patients: data of the Spanish ABPM registry.

    Science.gov (United States)

    Vinyoles, Ernest; de la Sierra, Alejandro; Roso, Albert; de la Cruz, Juan J; Gorostidi, Manuel; Segura, Julián; Banegas, José R; Martell-Claros, Nieves; Ruilope, Luís M

    2014-05-01

    Epidemiological studies have shown that an elevated resting heart rate (HR) is a risk factor for both total and cardiovascular mortality. Our aim was to estimate the night-time HR cut-off point that best predicts cardiovascular risk office tachycardia in hypertensive patients. Untreated hypertensive patients without concomitant cardiovascular diseases were included. Office and ambulatory HRs were measured. Cardiovascular risk office tachycardia was defined by an office HR of at least 85 beats per minute (bpm). Different night-time HR cut-offs were estimated by receiver operating characteristic curve analyses to predict cardiovascular risk office tachycardia. The best cut-off was selected on the basis of its combined sensitivity and specificity. A total of 32 569 hypertensive patients were included: 46.5% women, mean age (SD) 52 (14) years, office blood pressure 146 (16)/89 (11) mmHg, diabetes 10.3%, smoking 19.2%, BMI 29 (6.8) kg/m², office HR 77 (11.2) bpm, and night-time HR 64.9 (9.3) bpm. A total of 7070 (21.7%) patients were found to have cardiovascular risk office tachycardia. The night-time HR value that best predicted cardiovascular risk office tachycardia was more than 66 bpm. In comparison with patients with a night HR below this value, those with night-time tachycardia were predominantly women, younger, with higher ambulatory blood pressure, greater BMI, and a higher prevalence of diabetes and smoking. All comparisons were statistically significant (P < 0.001). A mean night-time HR of more than 66 bpm is a good predictor of cardiovascular risk office tachycardia in untreated hypertensive patients and could be considered a variable associated with increased cardiovascular risk.
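Selecting a cut-off by combined sensitivity and specificity is commonly done by maximizing Youden's J statistic (J = sensitivity + specificity − 1). A minimal sketch of that selection follows; the toy heart-rate data and the function name are illustrative, not taken from the registry.

```python
# Choosing a cut-off that balances sensitivity and specificity
# (Youden's J statistic). Toy data for illustration only.

def best_cutoff(values, labels):
    """Return the threshold t maximizing sensitivity + specificity - 1,
    where a case is called positive when value > t."""
    positives = [v for v, y in zip(values, labels) if y]
    negatives = [v for v, y in zip(values, labels) if not y]
    best_t, best_j = None, -1.0
    for t in sorted(set(values)):
        sens = sum(v > t for v in positives) / len(positives)
        spec = sum(v <= t for v in negatives) / len(negatives)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Night-time HR (bpm) vs. an "office tachycardia" flag -- toy numbers.
hr =    [58, 60, 62, 64, 65, 66, 67, 68, 70, 72, 75, 80]
tachy = [ 0,  0,  0,  0,  0,  0,  1,  1,  1,  1,  1,  1]
t, j = best_cutoff(hr, [bool(x) for x in tachy])
print(t)  # 66: ">66 bpm" perfectly separates this toy sample
```

On real registry data the two classes overlap, so the chosen cut-off trades off false positives against false negatives rather than separating perfectly.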

  6. Economic Order Quantity Model for Determining the Sales Prices of Fresh Goods at Various Points in Time

    Directory of Open Access Journals (Sweden)

    Po-Yu Chen

    2017-01-01

    Full Text Available Although the safe consumption of goods such as food products, medicine, and vaccines is related to their freshness, consumers frequently know less than suppliers about the freshness of goods when they purchase them. Because of this lack of information, apart from sales prices, consumers refer only to the manufacturing and expiration dates when deciding whether to purchase these goods and how many to buy. If dealers could determine the sales price at each point in time and customers' intention to buy goods of varying freshness, then dealers could set an optimal inventory cycle and allocate a weekly sales price for each point in time, thereby maximizing the profit per unit time. Therefore, in this study, an economic order quantity model was established to enable discussion of the optimal control of sales prices. The technique for identifying the optimal solution of the model was determined, the characteristics of the optimal solution were demonstrated, and the implications of the solution's sensitivity analysis were explained.

  7. TMACS I/O termination point listing. Revision 1

    International Nuclear Information System (INIS)

    Scaief, C.C. III.

    1994-01-01

    This document provides a listing of all analog and discrete input/output (I/O) points connected to the Tank Monitor and Control System (TMACS). The list also provides other information such as the point tag name, termination location, description, drawing references and other parameters. The purpose is to define each point's unique tag name and to cross reference the point with other associated information that may be necessary for activities such as maintenance, calibration, diagnostics, or design changes. It provides a list in one document of all I/O points that would otherwise only be available by referring to all I/O termination drawings

  8. Defining Ethical Placemaking for Place-Based Interventions.

    Science.gov (United States)

    Eckenwiler, Lisa A

    2016-11-01

    As place-based interventions expand and evolve, deeper reflection on the meaning of ethical placemaking is essential. I offer a summary account of ethical placemaking, which I propose and define as an ethical ideal and practice for health and for health justice, understood as the capability to be healthy. I point to selected wide-ranging examples (an urban pathway, two long-term care settings, innovations in refugee health services, and a McDonald's restaurant) to help illustrate these ideas.

  9. Vector mass in curved space-times

    International Nuclear Information System (INIS)

    Maia, M.D.

    The use of Poincaré symmetry appears to be incompatible with the presence of the gravitational field. The consequent problem of the definition of the mass operator is analysed, and an alternative definition based on constant-curvature tangent spaces is proposed. In the case where the space-time has no Killing vector fields, four independent mass operators can be defined at each point. (Author)

  10. Defining Leadership in a Changing Time.

    Science.gov (United States)

    Elwell, Sean M; Elikofer, Amanda N

    2015-01-01

    The purpose of this article is to discuss the difference between leadership and management. Leadership and management have been discussed for many years. Both are important to achieve success in health care, but what does that really mean? Strong leaders possess qualities that inspire others to follow them. This fosters team engagement, goal achievement, and ultimately drives outcomes. Managers plan, organize, and coordinate. It takes dedication, motivation, and passion to be more than a manager and be a good leader. There is not a single correct leadership style, but there are important characteristics that all leaders must demonstrate to get the desired results with the team. In a time when health care is rapidly changing, leadership is important at all levels of an organization.

  11. Defining the economic essence of definition «providing efficiency»

    OpenAIRE

    Kovalchyk, Olena Arnoldivna

    2011-01-01

    In this paper, the main approaches to defining the economic essence of the category of «efficiency» are analyzed. The existing conceptual approach to the term «providing» is investigated. The author's own point of view on the interpretation of the definition «providing efficiency» is given.

  12. Improved Dynamic Planar Point Location

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Arge, Lars; Georgiadis, Loukas

    2006-01-01

    We develop the first linear-space data structures for dynamic planar point location in general subdivisions that achieve logarithmic query time and poly-logarithmic update time.

  13. Time-Dependent Selection of an Optimal Set of Sources to Define a Stable Celestial Reference Frame

    Science.gov (United States)

    Le Bail, Karine; Gordon, David

    2010-01-01

    Temporal statistical position stability is required for VLBI sources to define a stable Celestial Reference Frame (CRF) and has been studied in many recent papers. This study analyzes the sources from the latest realization of the International Celestial Reference Frame (ICRF2) with the Allan variance, in addition to taking into account the apparent linear motions of the sources. Focusing on the 295 defining sources shows how they are a good compromise among different criteria, such as statistical stability and sky distribution, as well as having a sufficient number of sources, despite the fact that the most stable sources of the entire ICRF2 are mostly in the Northern Hemisphere. Nevertheless, the selection of a stable set is not unique: studying different solutions (GSF005a and AUG24 from GSFC and OPA from the Paris Observatory) over different time periods (1989.5 to 2009.5 and 1999.5 to 2009.5) leads to selections that can differ in up to 20% of the sources. Improvements in observing, recording, and networks are among the causes: the CRF shows better stability over the last decade than over the last twenty years. But this may also be explained by the assumption of stationarity, which is not necessarily right for some sources.
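The stability analysis in this record relies on the Allan variance. A minimal sketch of the standard non-overlapping estimator applied to a position time series follows; the alternating series used in the demonstration is illustrative, not VLBI data.

```python
# Non-overlapping Allan variance at averaging factor m, for a series of
# position estimates sampled at a fixed interval. Illustrative data only.

def allan_variance(y, m=1):
    """AVAR(m) = 1/(2*(K-1)) * sum_k (ybar_{k+1} - ybar_k)^2,
    where ybar_1..ybar_K are averages over consecutive blocks of m samples."""
    blocks = [sum(y[i:i + m]) / m for i in range(0, len(y) - len(y) % m, m)]
    diffs = [b2 - b1 for b1, b2 in zip(blocks, blocks[1:])]
    return sum(d * d for d in diffs) / (2 * len(diffs))

# A series alternating between two positions: consecutive differences
# are +/-1, so AVAR at m=1 is exactly 1/2, while averaging pairs (m=2)
# removes the alternation entirely.
series = [0.0, 1.0] * 8
print(allan_variance(series, m=1))  # 0.5
print(allan_variance(series, m=2))  # 0.0
```

Plotting AVAR against the averaging time separates noise-like scatter from systematic drift, which is what makes it a useful stability criterion for selecting defining sources.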

  14. Branch Point Withdrawal in Elongational Startup Flow by Time-Resolved Small Angle Neutron Scattering

    KAUST Repository

    Ruocco, N.

    2016-05-27

    We present a small angle neutron scattering (SANS) investigation of a blend composed of a dendritic polymer and a linear matrix with comparable viscosity in start-up of an elongational flow at Tg + 50. The two-generation dendritic polymer is diluted to 10% by weight in a matrix of long, well-entangled linear chains. Both components consist of mainly 1,4-cis-polyisoprene but differ in isotopic composition. The resulting scattering contrast is sufficiently high to permit time-resolved measurements of the system structure factor during the start-up phase and to follow the retraction processes involving the inner sections of the branched polymer in the nonlinear deformation response. The outer branches and the linear matrix, on the contrary, are in the linear deformation regime. The linear matrix dominates the rheological signature of the blend and the influence of the branched component can barely be detected. However, the neutron scattering intensity is predominantly that of the (branched) minority component so that its dynamics is clearly evident. In the present paper, we use the neutron scattering data to validate the branch point withdrawal process, which could not be unambiguously discerned from rheological measurements in this blend. The maximal tube stretch that the inner branches experience, before the relaxed outer arm material is incorporated into the tube, is determined. The in situ scattering experiments demonstrate for the first time the leveling-off of the strain as the result of branch point withdrawal and chain retraction directly on the molecular level. We conclude that branch point motion in the mixture of architecturally complex polymers occurs earlier than would be expected in a purely branched system, presumably due to the different topological environment that the linear matrix presents to the hierarchically deep-buried tube sections. © 2016 American Chemical Society.

  15. Branch Point Withdrawal in Elongational Startup Flow by Time-Resolved Small Angle Neutron Scattering

    KAUST Repository

    Ruocco, N.; Auhl, D.; Bailly, C.; Lindner, P.; Pyckhout-Hintzen, W.; Wischnewski, A.; Leal, L. G.; Hadjichristidis, Nikolaos; Richter, D.

    2016-01-01

    We present a small angle neutron scattering (SANS) investigation of a blend composed of a dendritic polymer and a linear matrix with comparable viscosity in start-up of an elongational flow at Tg + 50. The two-generation dendritic polymer is diluted to 10% by weight in a matrix of long, well-entangled linear chains. Both components consist of mainly 1,4-cis-polyisoprene but differ in isotopic composition. The resulting scattering contrast is sufficiently high to permit time-resolved measurements of the system structure factor during the start-up phase and to follow the retraction processes involving the inner sections of the branched polymer in the nonlinear deformation response. The outer branches and the linear matrix, on the contrary, are in the linear deformation regime. The linear matrix dominates the rheological signature of the blend and the influence of the branched component can barely be detected. However, the neutron scattering intensity is predominantly that of the (branched) minority component so that its dynamics is clearly evident. In the present paper, we use the neutron scattering data to validate the branch point withdrawal process, which could not be unambiguously discerned from rheological measurements in this blend. The maximal tube stretch that the inner branches experience, before the relaxed outer arm material is incorporated into the tube, is determined. The in situ scattering experiments demonstrate for the first time the leveling-off of the strain as the result of branch point withdrawal and chain retraction directly on the molecular level. We conclude that branch point motion in the mixture of architecturally complex polymers occurs earlier than would be expected in a purely branched system, presumably due to the different topological environment that the linear matrix presents to the hierarchically deep-buried tube sections. © 2016 American Chemical Society.

  16. Determining decoupling points in a supply chain networks using NSGA II algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ebrahimiarjestan, M.; Wang, G.

    2017-07-01

    Purpose: In the model, we used the concepts of Lee and Amaral (2002) and Tang and Zhou (2009) and offer a multi-criteria decision-making model that identifies decoupling points so as to minimize production costs, minimize product delivery time to the customer, and maximize customer satisfaction. Design/methodology/approach: The result is a triple-objective model; a meta-heuristic method (NSGA II) is used to solve it and to identify the Pareto-optimal points. The max (min) method was used. Findings: Our results of using NSGA II to find Pareto-optimal solutions demonstrate the good performance of NSGA II in extracting Pareto solutions for the proposed model, which determines decoupling points in a supply network. Originality/value: Several approaches to modeling this problem have been proposed, each covering part of the concept. The model defined here treats the concept more generally: we face a multi-criteria decision problem that includes minimization of production costs and product delivery time to customers as well as maximization of customer satisfaction.

  17. Determining decoupling points in a supply chain networks using NSGA II algorithm

    International Nuclear Information System (INIS)

    Ebrahimiarjestan, M.; Wang, G.

    2017-01-01

    Purpose: In the model, we used the concepts of Lee and Amaral (2002) and Tang and Zhou (2009) and offer a multi-criteria decision-making model that identifies decoupling points so as to minimize production costs, minimize product delivery time to the customer, and maximize customer satisfaction. Design/methodology/approach: The result is a triple-objective model; a meta-heuristic method (NSGA II) is used to solve it and to identify the Pareto-optimal points. The max (min) method was used. Findings: Our results of using NSGA II to find Pareto-optimal solutions demonstrate the good performance of NSGA II in extracting Pareto solutions for the proposed model, which determines decoupling points in a supply network. Originality/value: Several approaches to modeling this problem have been proposed, each covering part of the concept. The model defined here treats the concept more generally: we face a multi-criteria decision problem that includes minimization of production costs and product delivery time to customers as well as maximization of customer satisfaction.
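The core step of NSGA II is non-dominated sorting of candidate solutions. A minimal sketch of the rank-1 front for the three objectives named above follows (satisfaction is negated so everything is minimized); the candidate tuples and function names are illustrative, not from the paper.

```python
# First Pareto front for a tri-objective minimization problem
# (production cost, delivery time, -satisfaction), as NSGA II's
# non-dominated sorting would compute it. Illustrative candidates only.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset (NSGA II's rank-1 front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (cost, delivery time, -satisfaction) for candidate decoupling-point layouts
candidates = [
    (100, 5, -0.9),
    (120, 3, -0.8),
    (100, 5, -0.7),   # dominated by the first candidate
    (150, 2, -0.95),
    (160, 6, -0.5),   # dominated by several others
]
print(pareto_front(candidates))  # the three non-dominated layouts
```

NSGA II then ranks the remaining points into successive fronts and uses crowding distance to keep the retained front well spread, which is how it produces the Pareto sets the abstract reports.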

  18. Point-of-care diagnostics for niche applications.

    Science.gov (United States)

    Cummins, Brian M; Ligler, Frances S; Walker, Glenn M

    2016-01-01

    Point-of-care or point-of-use diagnostics are analytical devices that provide clinically relevant information without the need for a core clinical laboratory. In this review we define point-of-care diagnostics as portable versions of assays performed in a traditional clinical chemistry laboratory. This review discusses five areas relevant to human and animal health where increased attention could produce significant impact: veterinary medicine, space travel, sports medicine, emergency medicine, and operating room efficiency. For each of these areas, clinical need, available commercial products, and ongoing research into new devices are highlighted. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Defining Quality in Cardiovascular Imaging: A Scientific Statement From the American Heart Association.

    Science.gov (United States)

    Shaw, Leslee J; Blankstein, Ron; Jacobs, Jill E; Leipsic, Jonathon A; Kwong, Raymond Y; Taqueti, Viviany R; Beanlands, Rob S B; Mieres, Jennifer H; Flamm, Scott D; Gerber, Thomas C; Spertus, John; Di Carli, Marcelo F

    2017-12-01

    The aims of the current statement are to refine the definition of quality in cardiovascular imaging and to propose novel methodological approaches to inform the demonstration of quality in imaging in future clinical trials and registries. We propose defining quality in cardiovascular imaging using an analytical framework put forth by the Institute of Medicine whereby quality was defined as testing being safe, effective, patient-centered, timely, equitable, and efficient. The implications of each of these components of quality health care are as essential for cardiovascular imaging as they are for other areas within health care. Our proposed statement may serve as the foundation for integrating these quality indicators into establishing designations of quality laboratory practices and developing standards for value-based payment reform for imaging services. We also include recommendations for future clinical research to fulfill quality aims within cardiovascular imaging, including clinical hypotheses of improving patient outcomes, the importance of health status as an end point, and deferred testing options. Future research should evolve to define novel methods optimized for the role of cardiovascular imaging for detecting disease and guiding treatment and to demonstrate the role of cardiovascular imaging in facilitating healthcare quality. © 2017 American Heart Association, Inc.

  20. Real time estimation of photovoltaic modules characteristics and its application to maximum power point operation

    Energy Technology Data Exchange (ETDEWEB)

    Garrigos, Ausias; Blanes, Jose M.; Carrasco, Jose A. [Area de Tecnologia Electronica, Universidad Miguel Hernandez de Elche, Avda. de la Universidad s/n, 03202 Elche, Alicante (Spain); Ejea, Juan B. [Departamento de Ingenieria Electronica, Universidad de Valencia, Avda. Dr Moliner 50, 46100 Valencia, Valencia (Spain)

    2007-05-15

    In this paper, an approximate curve-fitting method for photovoltaic modules is presented. The operation is based on solving a simple solar cell electrical model on a microcontroller in real time. Only four voltage and current coordinates are needed to obtain the solar module parameters and set its operation at maximum power under any conditions of illumination and temperature. Despite its simplicity, this method is suitable for low-cost real-time applications, such as the control-loop reference generator in photovoltaic maximum power point circuits. The theory supporting the estimator is presented, together with simulations and experimental results. (author)
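The abstract's method uses four (V, I) coordinates and does not specify its model. The following is a simplified analogue of the same idea: fit an ideal single-diode curve from a few samples, then locate the maximum power point numerically. The model choice, sample values, and function names are all assumptions for illustration.

```python
# Estimating a simple solar-cell model from a few (V, I) samples and
# locating the maximum power point (MPP). Ideal single-diode model
# I(V) = Iph - I0*(exp(V/a) - 1); all values are illustrative.
import math

def fit_model(points):
    """Fit (Iph, I0, a) from a short-circuit sample plus two
    forward-bias samples (the '-1' is neglected when solving)."""
    (v0, i_sc), (v1, i1), (v2, i2) = points
    iph = i_sc                                   # sample at V = 0 gives Iph
    a = (v2 - v1) / math.log((iph - i2) / (iph - i1))
    i_sat = (iph - i1) / math.exp(v1 / a)
    return iph, i_sat, a

def mpp(iph, i_sat, a):
    """Golden-section search for the voltage maximizing P = V * I(V);
    P is unimodal on [0, Voc] for this model."""
    power = lambda v: v * (iph - i_sat * math.expm1(v / a))
    lo, hi = 0.0, a * math.log(iph / i_sat + 1.0)   # [0, Voc]
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    for _ in range(200):
        m1, m2 = hi - phi * (hi - lo), lo + phi * (hi - lo)
        if power(m1) < power(m2):
            lo = m1
        else:
            hi = m2
    v = (lo + hi) / 2.0
    return v, power(v)

# Synthetic samples drawn from a known curve (Iph=5 A, I0=2e-9 A, a=1.5 V)
model = lambda v: 5.0 - 2e-9 * math.expm1(v / 1.5)
samples = [(0.0, model(0.0)), (25.0, model(25.0)), (30.0, model(30.0))]
iph, i_sat, a = fit_model(samples)
v_mpp, p_mpp = mpp(iph, i_sat, a)
```

A microcontroller implementation would repeat this fit as illumination and temperature change, feeding `v_mpp` to the control loop as the reference, which matches the application the abstract describes.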

  1. Multiple Positive Solutions of a Nonlinear Four-Point Singular Boundary Value Problem with a p-Laplacian Operator on Time Scales

    Directory of Open Access Journals (Sweden)

    Shihuang Hong

    2009-01-01

    Full Text Available We present sufficient conditions for the existence of at least twin or triple positive solutions of a nonlinear four-point singular boundary value problem with a p-Laplacian dynamic equation on a time scale. Our results are obtained via some new multiple fixed point theorems.

  2. One-dimensional gravity in infinite point distributions

    Science.gov (United States)

    Gabrielli, A.; Joyce, M.; Sicard, F.

    2009-10-01

    The dynamics of infinite asymptotically uniform distributions of purely self-gravitating particles in one spatial dimension provides a simple and interesting toy model for the analogous three dimensional problem treated in cosmology. In this paper we focus on a limitation of such models as they have been treated so far in the literature: the force, as it has been specified, is well defined in infinite point distributions only if there is a centre of symmetry (i.e., the definition requires explicitly the breaking of statistical translational invariance). The problem arises because naive background subtraction (due to expansion, or by “Jeans swindle” for the static case), applied as in three dimensions, leaves an unregulated contribution to the force due to surface mass fluctuations. Following a discussion by Kiessling of the Jeans swindle in three dimensions, we show that the problem may be resolved by defining the force in infinite point distributions as the limit of an exponentially screened pair interaction. We show explicitly that this prescription gives a well defined (finite) force acting on particles in a class of perturbed infinite lattices, which are the point processes relevant to cosmological N -body simulations. For identical particles the dynamics of the simplest toy model (without expansion) is equivalent to that of an infinite set of points with inverted harmonic oscillator potentials which bounce elastically when they collide. We discuss and compare with previous results in the literature and present new results for the specific case of this simplest (static) model starting from “shuffled lattice” initial conditions. These show qualitative properties of the evolution (notably its “self-similarity”) like those in the analogous simulations in three dimensions, which in turn resemble those in the expanding universe.
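The exponential screening of the 1D pair force can be made concrete with a small sketch. The screened pair-force form follows the prescription described above; the coupling, screening length, and finite particle patch are illustrative assumptions.

```python
# Force on each particle in a finite 1D patch under an exponentially
# screened attractive pair force F_ij = g * sign(x_j - x_i) * exp(-|x_j - x_i| / L).
# The paper's prescription defines the infinite-system force as the
# unscreened limit L -> infinity; g and L values here are illustrative.
import math

def screened_forces(positions, g=1.0, L=10.0):
    forces = []
    for i, xi in enumerate(positions):
        f = 0.0
        for j, xj in enumerate(positions):
            if i != j:
                f += g * math.copysign(1.0, xj - xi) * math.exp(-abs(xj - xi) / L)
        forces.append(f)
    return forces

# A symmetric lattice patch: the central particle feels zero net force,
# and the forces are antisymmetric about the centre.
lattice = [-2.0, -1.0, 0.0, 1.0, 2.0]
f = screened_forces(lattice)
print(abs(f[2]) < 1e-12)  # True: contributions cancel by symmetry
```

With screening, each particle's force sum converges absolutely even for an infinite distribution, which is the point of the regularization: no centre of symmetry is needed for the force to be well defined.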

  3. Interevent Time Distribution of Renewal Point Process, Case Study: Extreme Rainfall in South Sulawesi

    Science.gov (United States)

    Sunusi, Nurtiti

    2018-03-01

    The study of the time distribution of occurrences of extreme rain phenomena plays a very important role in analysis and weather forecasting for an area. The timing of extreme rainfall is difficult to predict because its occurrence is random. This paper aims to determine the interevent time distribution of extreme rain events and the minimum waiting time until the occurrence of the next extreme event through a point process approach. The phenomenon of extreme rain events over a given period of time follows a renewal process in which the time between events is a random variable τ. The distribution of the random variable τ is assumed to be Pareto, Log Normal, or Gamma. To estimate the model parameters, the method of moments is used. Consider Rt, the time elapsed since the last extreme rain event at a location: if there have been no extreme rain events up to t0, there is a probability of an extreme rainfall event occurring in (t0, t0 + δt0). Furthermore, from the three models reviewed, the minimum waiting time until the next extreme rainfall is determined. The results show that the Log Normal model is better than the Pareto and Gamma models for predicting the next extreme rainfall in South Sulawesi, while the Pareto model cannot be used.
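A minimal sketch of a method-of-moments fit for the Log Normal interevent-time model, followed by the residual waiting-time probability of a renewal process. The sample interevent times are illustrative, and the conditional-probability expression is the standard renewal formula, assumed here rather than quoted from the paper.

```python
# Method-of-moments fit of a Log Normal interevent-time distribution,
# then P(T > t0 + w | T > t0): the chance the next extreme event is
# still more than w days away, given t0 days have already passed.
# Sample interevent times (days) are illustrative.
import math

def fit_lognormal_moments(times):
    """Match sample mean/variance to E[T] = exp(mu + s^2/2) and
    Var[T] = (exp(s^2) - 1) * exp(2*mu + s^2)."""
    n = len(times)
    mean = sum(times) / n
    var = sum((t - mean) ** 2 for t in times) / n
    s2 = math.log(1.0 + var / mean ** 2)
    mu = math.log(mean) - s2 / 2.0
    return mu, math.sqrt(s2)

def lognormal_sf(t, mu, sigma):
    """Survival function P(T > t) of the Log Normal distribution."""
    z = (math.log(t) - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

def residual_waiting_prob(t0, w, mu, sigma):
    """P(T > t0 + w | T > t0) for a renewal interevent time T."""
    return lognormal_sf(t0 + w, mu, sigma) / lognormal_sf(t0, mu, sigma)

interevent_days = [12.0, 7.0, 30.0, 15.0, 9.0, 22.0, 11.0, 18.0]
mu, sigma = fit_lognormal_moments(interevent_days)
p = residual_waiting_prob(t0=10.0, w=7.0, mu=mu, sigma=sigma)
```

Scanning `w` for the smallest value at which this probability drops below a chosen level gives a minimum-waiting-time estimate of the kind the abstract describes.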

  4. Stabilization at almost arbitrary points for chaotic systems

    International Nuclear Information System (INIS)

    Huang, C.-S.; Lian, K.-Y.; Su, C.-H.; Wu, J.-W.

    2008-01-01

    We consider how to design a feasible control input for chaotic systems via a suitable input channel to achieve stabilization at arbitrary points. For nonlinear systems without naturally defined input vectors, we propose a local stabilization controller that works for almost arbitrary points. Subsequently, according to the topological transitivity of chaotic systems, the feedback control force is activated only when the trajectory passes through the neighboring region of the regulated point. Hence global stabilization is achieved while the control effort of the hybrid controller is extremely low
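The activate-only-near-the-target idea can be illustrated on a 1D chaotic map. The logistic map, the gain, and the thresholds below are illustrative stand-ins, not the systems or controller treated in the paper.

```python
# Event-triggered stabilization of an unstable fixed point of the chaotic
# logistic map x' = r*x*(1-x): a small parameter perturbation is applied
# only when the orbit wanders into a neighbourhood of the target.
# Illustrative analogue of the abstract's scheme, not its controller.
r = 3.9
x_star = 1.0 - 1.0 / r              # unstable fixed point of the map
f_prime = 2.0 - r                   # f'(x*) = r*(1 - 2*x*)
dfdr = x_star * (1.0 - x_star)      # sensitivity of the map to r at x*

def control(x, eps=0.01, max_dr=0.1):
    """Parameter perturbation cancelling the linearized deviation,
    active only inside the eps-neighbourhood of x*."""
    dx = x - x_star
    if abs(dx) > eps:
        return 0.0                  # outside the neighbourhood: do nothing
    dr = -f_prime * dx / dfdr       # solves f'(x*)*dx + dfdr*dr = 0
    return max(-max_dr, min(max_dr, dr))

def step(x):
    return (r + control(x)) * x * (1.0 - x)
```

By topological transitivity the free chaotic orbit eventually visits the small neighbourhood; from then on the bounded perturbation pins it near `x_star`, so the time-averaged control effort stays very low.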

  5. Cosmological time in (2+1)-gravity

    International Nuclear Information System (INIS)

    Benedetti, Riccardo; Guadagnini, Enore

    2001-01-01

    We consider maximal globally hyperbolic flat (2+1)-spacetimes with compact space S of genus g>1. For any spacetime M of this type, the length of time that the events of M have been in existence defines a global time, called the cosmological time CT of M, which reveals deep intrinsic properties of the spacetime. In particular, the past/future asymptotic states of the cosmological time recover and decouple the linear and the translational parts of the ISO(2,1)-valued holonomy of the flat spacetime. The initial singularity can be interpreted as an isometric action of the fundamental group of S on a suitable real tree. The initial singularity faithfully manifests itself as a lack of smoothness of the embedding of the CT level surfaces into the spacetime M. The cosmological time determines a real analytic curve in the Teichmueller space of Riemann surfaces of genus g, which connects an interior point (associated to the linear part of the holonomy) with a point on Thurston's natural boundary (associated to the initial singularity)

  6. Cosmological time in (2+1)-gravity

    Science.gov (United States)

    Benedetti, Riccardo; Guadagnini, Enore

    2001-10-01

    We consider maximal globally hyperbolic flat (2+1)-spacetimes with compact space S of genus g>1. For any spacetime M of this type, the length of time that the events have been in existence in M defines a global time, called the cosmological time CT of M, which reveals deep intrinsic properties of spacetime. In particular, the past/future asymptotic states of the cosmological time recover and decouple the linear and the translational parts of the ISO(2,1)-valued holonomy of the flat spacetime. The initial singularity can be interpreted as an isometric action of the fundamental group of S on a suitable real tree. The initial singularity faithfully manifests itself as a lack of smoothness of the embedding of the CT level surfaces into the spacetime M. The cosmological time determines a real analytic curve in the Teichmüller space of Riemann surfaces of genus g, which connects an interior point (associated to the linear part of the holonomy) with a point on Thurston's natural boundary (associated to the initial singularity).

  7. Large Prospective Study of Ovarian Cancer Screening in High-risk Women: CA125 Cut-point Defined by Menopausal Status

    Science.gov (United States)

    Skates, Steven J.; Mai, Phuong; Horick, Nora K.; Piedmonte, Marion; Drescher, Charles W.; Isaacs, Claudine; Armstrong, Deborah K.; Buys, Saundra S.; Rodriguez, Gustavo C.; Horowitz, Ira R.; Berchuck, Andrew; Daly, Mary B.; Domchek, Susan; Cohn, David E.; Van Le, Linda; Schorge, John O.; Newland, William; Davidson, Susan A.; Barnes, Mack; Brewster, Wendy; Azodi, Masoud; Nerenstone, Stacy; Kauff, Noah D.; Fabian, Carol J.; Sluss, Patrick M.; Nayfield, Susan G.; Kasten, Carol H.; Finkelstein, Dianne M.; Greene, Mark H.; Lu, Karen

    2011-01-01

    Background Previous screening trials for early detection of ovarian cancer in postmenopausal women have used the standard CA125 cut-point of 35 U/mL, the 98th percentile in this population yielding a 2% false positive rate, while the same cut-point in trials of premenopausal women results in substantially higher false positive rates. We investigated demographic and clinical factors predicting CA125 distributions, including 98th percentiles, in a large population of high-risk women participating in two ovarian cancer screening studies with common eligibility criteria and screening protocols. Methods Baseline CA125 values and clinical and demographic data from 3,692 women participating in screening studies conducted by the NCI-sponsored Cancer Genetics Network and Gynecologic Oncology Group were combined for this pre-planned analysis. Due to the large effect of menopausal status on CA125 levels, statistical analyses were conducted separately in pre- and postmenopausal subjects to determine the impact of other baseline factors on predicted CA125 cut-points based on the 98th percentile. Results The primary clinical factor affecting CA125 cut-points was menopausal status, with premenopausal women having a significantly higher cut-point of 50 U/mL while in postmenopausal subjects the standard cut-point of 35 U/mL was recapitulated. In premenopausal women, current oral contraceptive (OC) users had a cut-point of 40 U/mL. Conclusions To achieve a 2% false positive rate in ovarian cancer screening trials and in high-risk women choosing to be screened, the cut-point for initial CA125 testing should be personalized primarily for menopausal status (~ 50 for premenopausal women, 40 for premenopausal on OC, 35 for postmenopausal women). PMID:21893500
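
    The study's approach of setting group-specific cut-points at the 98th percentile can be sketched numerically. This is a toy illustration with synthetic log-normal values, not the trial data; the distribution parameters are invented, and only the 2% false-positive target comes from the abstract.

```python
import random

random.seed(7)

# Hypothetical synthetic CA125 values (U/mL): log-normal draws with the
# premenopausal distribution shifted higher, mimicking the study's finding.
premeno = [random.lognormvariate(2.5, 0.55) for _ in range(2000)]
postmeno = [random.lognormvariate(2.2, 0.50) for _ in range(2000)]

def percentile_98(values):
    # Order-statistic cut-point: smallest value with >= 98% of the sample
    # at or below it, so ~2% of unaffected screens exceed it.
    s = sorted(values)
    return s[int(0.98 * len(s)) - 1]

cutpoint_pre = percentile_98(premeno)    # group-specific cut-point
cutpoint_post = percentile_98(postmeno)  # lower, as in the trial
fpr_pre = sum(v > cutpoint_pre for v in premeno) / len(premeno)
```

    By construction, each group's screen-positive rate among unaffected women is pinned near 2%, regardless of which group has the higher-running CA125 distribution.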

  8. We live in the quantum 4-dimensional Minkowski space-time

    OpenAIRE

    Hwang, W-Y. Pauchy

    2015-01-01

    We try to define "our world" by stating that "we live in the quantum 4-dimensional Minkowski space-time with the force-fields gauge group $SU_c(3) \\times SU_L(2) \\times U(1) \\times SU_f(3)$ built-in from the outset". We begin by explaining what "space" and "time" mean for us - the 4-dimensional Minkowski space-time - then proceed to the quantum 4-dimensional Minkowski space-time. In our world, there are fields, or, point-like particles. Particle physics is described by the so-called ...

  9. MDCT evaluation of aortic root and aortic valve prior to TAVI. What is the optimal imaging time point in the cardiac cycle?

    Energy Technology Data Exchange (ETDEWEB)

    Jurencak, Tomas; Turek, Jakub; Nijssen, Estelle C. [Maastricht University Medical Center, Department of Radiology, P. Debyelaan 25, P.O. Box 5800, AZ, Maastricht (Netherlands); Kietselaer, Bastiaan L.J.H. [Maastricht University Medical Center, Department of Radiology, P. Debyelaan 25, P.O. Box 5800, AZ, Maastricht (Netherlands); Maastricht University Medical Center, CARIM School for Cardiovascular Diseases, Maastricht (Netherlands); Maastricht University Medical Center, Department of Cardiology, Maastricht (Netherlands); Mihl, Casper; Kok, Madeleine; Wildberger, Joachim E.; Das, Marco [Maastricht University Medical Center, Department of Radiology, P. Debyelaan 25, P.O. Box 5800, AZ, Maastricht (Netherlands); Maastricht University Medical Center, CARIM School for Cardiovascular Diseases, Maastricht (Netherlands); Ommen, Vincent G.V.A. van [Maastricht University Medical Center, Department of Cardiology, Maastricht (Netherlands); Garsse, Leen A.F.M. van [Maastricht University Medical Center, Department of Cardiothoracic Surgery, Maastricht (Netherlands)

    2015-07-15

    To determine the optimal imaging time point for transcatheter aortic valve implantation (TAVI) therapy planning by comprehensive evaluation of the aortic root. Multidetector-row CT (MDCT) examination with retrospective ECG gating was performed retrospectively in 64 consecutive patients referred for pre-TAVI assessment. Eighteen different parameters of the aortic root were evaluated at 11 different time points in the cardiac cycle. The time points at which maximal (or minimal) sizes occurred were determined, and dimension differences to the other time points were evaluated. Theoretical prosthesis sizing based on different measurements was compared. Largest dimensions were found between 10 and 20 % of the cardiac cycle for annular short diameter (10 %); mean diameter (10 %); effective diameter and circumference-derived diameter (20 %); distance from the annulus to right coronary artery ostium (10 %); aortic root at the left coronary artery level (20 %); aortic root at the widest portion of coronary sinuses (20 %); and right leaflet length (20 %). Prosthesis size selection differed depending on the chosen measurements in 25-75 % of cases. Significant changes in anatomical structures of the aortic root during the cardiac cycle are crucial for TAVI planning. Imaging in systole is mandatory to obtain maximal dimensions. (orig.)

  10. Change Points in the Population Trends of Aerial-Insectivorous Birds in North America: Synchronized in Time across Species and Regions.

    Directory of Open Access Journals (Sweden)

    Adam C Smith

    North American populations of aerial insectivorous birds are in steep decline. Aerial insectivores (AI) are a group of bird species that feed almost exclusively on insects in flight, and include swallows, swifts, nightjars, and flycatchers. The causes of the declines are not well understood. Indeed, it is not clear when the declines began, or whether the declines are shared across all species in the group (e.g., caused by changes in flying insect populations) or specific to each species (e.g., caused by changes in species' breeding habitat). A recent study suggested that population trends of aerial insectivores changed for the worse in the 1980s. If there was such a change point in trends of the group, understanding its timing and geographic pattern could help identify potential causes of the decline. We used a hierarchical Bayesian, penalized regression spline, change point model to estimate group-level change points in the trends of 22 species of AI, across 153 geographic strata of North America. We found evidence for group-level change points in 85% of the strata. Change points for flycatchers (FC) were distinct from those for swallows, swifts and nightjars (SSN) across North America, except in the Northeast, where all AI shared the same group-level change points. During the 1980s, there was a negative change point across most of North America in the trends of SSN. For FC, the group-level change points were more geographically variable, and in many regions there were two: a positive change point followed by a negative change point. This group-level synchrony in AI population trends is likely evidence of a response to a common environmental factor(s) with similar effects on many species across broad spatial extents. The timing and geographic patterns of the change points that we identify here should provide a spring-board for research into the causes behind aerial insectivore declines.

  11. Guidelines for time-to-event end point definitions in sarcomas and gastrointestinal stromal tumors (GIST) trials: results of the DATECAN initiative (Definition for the Assessment of Time-to-event Endpoints in CANcer trials)†.

    Science.gov (United States)

    Bellera, C A; Penel, N; Ouali, M; Bonvalot, S; Casali, P G; Nielsen, O S; Delannes, M; Litière, S; Bonnetain, F; Dabakuyo, T S; Benjamin, R S; Blay, J-Y; Bui, B N; Collin, F; Delaney, T F; Duffaud, F; Filleron, T; Fiore, M; Gelderblom, H; George, S; Grimer, R; Grosclaude, P; Gronchi, A; Haas, R; Hohenberger, P; Issels, R; Italiano, A; Jooste, V; Krarup-Hansen, A; Le Péchoux, C; Mussi, C; Oberlin, O; Patel, S; Piperno-Neumann, S; Raut, C; Ray-Coquard, I; Rutkowski, P; Schuetze, S; Sleijfer, S; Stoeckle, E; Van Glabbeke, M; Woll, P; Gourgou-Bourgade, S; Mathoulin-Pélissier, S

    2015-05-01

    The use of potential surrogate end points for overall survival, such as disease-free survival (DFS) or time-to-treatment failure (TTF) is increasingly common in randomized controlled trials (RCTs) in cancer. However, the definition of time-to-event (TTE) end points is rarely precise and lacks uniformity across trials. End point definition can impact trial results by affecting estimation of treatment effect and statistical power. The DATECAN initiative (Definition for the Assessment of Time-to-event End points in CANcer trials) aims to provide recommendations for definitions of TTE end points. We report guidelines for RCT in sarcomas and gastrointestinal stromal tumors (GIST). We first carried out a literature review to identify TTE end points (primary or secondary) reported in publications of RCT. An international multidisciplinary panel of experts proposed recommendations for the definitions of these end points. Recommendations were developed through a validated consensus method formalizing the degree of agreement among experts. Recommended guidelines for the definition of TTE end points commonly used in RCT for sarcomas and GIST are provided for adjuvant and metastatic settings, including DFS, TTF, time to progression and others. Use of standardized definitions should facilitate comparison of trials' results, and improve the quality of trial design and reporting. These guidelines could be of particular interest to research scientists involved in the design, conduct, reporting or assessment of RCT such as investigators, statisticians, reviewers, editors or regulatory authorities. © The Author 2014. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  12. Matching fields and lattice points of simplices

    OpenAIRE

    Loho, Georg; Smith, Ben

    2018-01-01

    We show that the Chow covectors of a linkage matching field define a bijection of lattice points and we demonstrate how one can recover the linkage matching field from this bijection. This resolves two open questions from Sturmfels & Zelevinsky (1993) on linkage matching fields. For this, we give an explicit construction that associates a bipartite incidence graph of an ordered partition of a common set to all lattice points in a dilated simplex. Given a triangulation of a product of two simp...

  13. A Modeling Method of Fluttering Leaves Based on Point Cloud

    Science.gov (United States)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and falling-leaf models have wide application in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time needs in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  14. Dual-time-point O-(2-[18F]fluoroethyl)-L-tyrosine PET for grading of cerebral gliomas

    International Nuclear Information System (INIS)

    Lohmann, Philipp; Herzog, Hans; Rota Kops, Elena; Stoffels, Gabriele; Judov, Natalie; Filss, Christian; Tellmann, Lutz; Galldiks, Norbert; Weiss, Carolin; Sabel, Michael; Coenen, Heinz Hubert; Shah, Nadim Jon; Langen, Karl-Josef

    2015-01-01

    We aimed to evaluate the diagnostic potential of dual-time-point imaging with positron emission tomography (PET) using O-(2-[18F]fluoroethyl)-L-tyrosine (18F-FET) for non-invasive grading of cerebral gliomas compared with a dynamic approach. Thirty-six patients with histologically confirmed cerebral gliomas (21 primary, 15 recurrent; 24 high-grade, 12 low-grade) underwent dynamic PET from 0 to 50 min post-injection (p.i.) of 18F-FET, and additionally from 70 to 90 min p.i. Mean tumour-to-brain ratios (TBRmean) of 18F-FET uptake were determined in early (20-40 min p.i.) and late (70-90 min p.i.) examinations. Time-activity curves (TAC) of the tumours from 0 to 50 min after injection were assigned to different patterns. The diagnostic accuracy of changes of 18F-FET uptake between early and late examinations for tumour grading was compared to that of curve pattern analysis from 0 to 50 min p.i. of 18F-FET. The diagnostic accuracy of changes of the TBRmean of 18F-FET PET uptake between early and late examinations for the identification of HGG was 81 % (sensitivity 83 %; specificity 75 %; cutoff -8 %; p < 0.001), and 83 % for curve pattern analysis (sensitivity 88 %; specificity 75 %; p < 0.001). Dual-time-point imaging of 18F-FET uptake in gliomas achieves diagnostic accuracy for tumour grading that is similar to the more time-consuming dynamic data acquisition protocol. (orig.)
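
    The dual-time-point criterion reduces to a simple percent-change computation between the early and late scans. A hedged sketch follows: the function name and example TBR values are our own; only the -8 % cutoff and the direction of the change (decreasing uptake suggesting high grade) are taken from the abstract.

```python
def tbr_change_percent(tbr_early, tbr_late):
    """Percent change of the mean tumour-to-brain ratio (TBRmean)
    from the early (20-40 min) to the late (70-90 min) examination."""
    return 100.0 * (tbr_late - tbr_early) / tbr_early

CUTOFF = -8.0  # percent; reported cutoff for identifying high-grade glioma

# Illustrative values: uptake falling between the two time points.
change = tbr_change_percent(2.5, 2.2)
suspected_high_grade = change < CUTOFF
```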

  15. Defining Accelerometer Nonwear Time to Maximize Detection of Sedentary Time in Youth

    DEFF Research Database (Denmark)

    Cain, Kelli L; Bonilla, Edith; Conway, Terry L

    2018-01-01

    PURPOSE: The present study examined various accelerometer nonwear definitions and their impact on detection of sedentary time using different ActiGraph models, filters, and axes. METHODS: In total, 61 youth (34 children and 27 adolescents; aged 5-17 y) wore a 7164 and GT3X+ ActiGraph on a hip...), and GT3X+N (V and VM), and sedentary estimates were computed. RESULTS: The GT3X+LFE-VM was most sensitive to movement and could accurately detect observed sedentary time with the shortest nonwear definition of 20 minutes of consecutive "0" counts for children and 40 minutes for adolescents. The GT3X+N-V was least sensitive to movement and required longer definitions to detect observed sedentary time (40 min for children and 90 min for adolescents). VM definitions were 10 minutes shorter than V definitions. LFE definitions were 40 minutes shorter than N definitions in adolescents. CONCLUSION: Different...
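
    Operationally, a nonwear definition of this kind marks any run of consecutive zero counts at least as long as a chosen threshold. The sketch below is our own implementation, using a hypothetical minute-level count stream and the 20-minute rule reported for children; the function name is invented.

```python
def nonwear_minutes(counts, min_run=20):
    """Flag minutes inside a run of consecutive zero counts that is at
    least `min_run` minutes long (classified as accelerometer nonwear)."""
    n = len(counts)
    nonwear = [False] * n
    i = 0
    while i < n:
        if counts[i] == 0:
            j = i
            while j < n and counts[j] == 0:
                j += 1            # extend the zero run
            if j - i >= min_run:  # long enough to count as nonwear
                for k in range(i, j):
                    nonwear[k] = True
            i = j
        else:
            i += 1
    return nonwear

# 25 consecutive zero-minutes qualify under a 20-minute rule;
# the later 10-minute zero run does not (it is treated as sedentary wear).
flags = nonwear_minutes([0] * 25 + [150, 80] + [0] * 10, min_run=20)
```

    Longer thresholds (e.g. the 90-minute adolescent rule for GT3X+N-V) simply reclassify more of the short zero runs as sedentary wear time rather than nonwear.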

  16. Defining the Anthropocene

    Science.gov (United States)

    Lewis, Simon; Maslin, Mark

    2016-04-01

    Time is divided by geologists according to marked shifts in Earth's state. Recent global environmental changes suggest that Earth may have entered a new human-dominated geological epoch, the Anthropocene. Should the Anthropocene - the idea that human activity is a force acting upon the Earth system in ways that mean that Earth will be altered for millions of years - be defined as a geological time-unit at the level of an Epoch? Here we appraise the data to assess such claims, first in terms of changes to the Earth system, with particular focus on very long-lived impacts, as Epochs typically last millions of years. Can Earth really be said to be in transition from one state to another? Secondly, we consider the formal criteria used to define geological time-units and move forward through time examining whether currently available evidence passes typical geological time-unit evidence thresholds. We suggest two time periods likely fit the criteria: (1) the aftermath of the interlinking of the Old and New Worlds, which moved species across continents and ocean basins worldwide, a geologically unprecedented and permanent change, which is also the globally synchronous coolest part of the Little Ice Age (in Earth system terms) and the beginning of global trade and a new socio-economic "world system" (in historical terms), marked as a golden spike by a temporary drop in atmospheric CO2, centred on 1610 CE; and (2) the aftermath of the Second World War, when many global environmental changes accelerated and novel long-lived materials were increasingly manufactured, known as the Great Acceleration (in Earth system terms) and the beginning of the Cold War (in historical terms), marked as a golden spike by the peak in radionuclide fallout in 1964. We finish by noting that the Anthropocene debate is politically loaded; thus transparency in the presentation of evidence is essential if a formal definition of the Anthropocene is to avoid becoming a debate about bias.

  17. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    Science.gov (United States)

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate changes from orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3-dimensional changes after treatment by superimposition became possible. 4-point plane orientation is one of the simplest ways to achieve superimposition of 3-dimensional images. To find factors influencing the superimposition error of cephalometric landmarks by the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4-point plane orientation system may produce a reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.

  18. Viviani Polytopes and Fermat Points

    Science.gov (United States)

    Zhou, Li

    2012-01-01

    Given a set of oriented hyperplanes P = {p_1, ..., p_k} in R^n, define v : R^n → R by v(X) = the sum of the signed distances from X to p_1, ..., p_k, for any point X ∈ R^n. We give a simple geometric characterization of P for which v is constant, leading to…
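
    A quick numerical check of the defining function v in the classical special case (Viviani's theorem): for the three side-lines of an equilateral triangle, oriented with inward unit normals, the sum of signed distances is constant and equals the triangle's height. The coordinates below are our own illustration.

```python
import math

# Side-lines of the equilateral triangle with vertices (0,0), (1,0),
# (1/2, sqrt(3)/2), each stored as (unit inward normal n, offset c),
# representing the line n·X = c; the signed distance of X is n·X - c.
s3 = math.sqrt(3)
lines = [
    ((0.0, 1.0), 0.0),            # side along the x-axis
    ((s3 / 2, -0.5), 0.0),        # left side
    ((-s3 / 2, -0.5), -s3 / 2),   # right side
]

def v(x, y):
    # Sum of signed distances from (x, y) to the oriented lines.
    return sum(nx * x + ny * y - c for (nx, ny), c in lines)

# v is constant over the whole plane: the inward normals sum to zero.
values = [v(0.5, 0.3), v(0.2, 0.1), v(-1.0, 2.0)]
```

    Here the normals sum to the zero vector, which is exactly the characterization that makes v constant; its value is the height sqrt(3)/2 of the unit triangle.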

  19. Time-lapse misorientation maps for the analysis of electron backscatter diffraction data from evolving microstructures

    NARCIS (Netherlands)

    Wheeler, J.; Cross, A.; Drury, M.; Hough, R.M.; Mariani, E.; Piazolo, S.; Prior, D.J.

    2011-01-01

    A “time-lapse misorientation map” is defined here as a map which shows the orientation change at each point in an evolving crystalline microstructure between two different times. Electron backscatter diffraction data from in situ heating experiments can be used to produce such maps, which then

  20. Time-dependent Taylor–Aris dispersion of an initial point concentration

    DEFF Research Database (Denmark)

    Vedel, Søren; Hovad, Emil; Bruus, Henrik

    2014-01-01

    ...-specific theoretical results, and furthermore predict new phenomena. In particular, for the transient phase before the well-described steady Taylor–Aris limit is reached, we find anomalous diffusion with a dependence of the temporal scaling exponent on the initial release point, generalizing this finding in specific... cases. During this transient we furthermore identify maxima in the values of the dispersion coefficient which exceed the Taylor–Aris value by amounts that depend on channel geometry, initial point release position, velocity profile and Péclet number. We show that these effects are caused by a difference...

  1. Critical point inequalities and scaling limits

    International Nuclear Information System (INIS)

    Newman, C.M.

    1979-01-01

    A refined and extended version of the Buckingham-Gunton inequality relating various pairs of critical exponents is shown to be valid for a large class of statistical mechanical models. If this inequality is an equality (in the refined sense) and one of the critical exponents has a non-Gaussian value, then any scaling limit must be non-Gaussian. This result clarifies the relationships between the nontriviality or triviality of the scaling limit for ordinary critical points in four dimensions (or tricritical points in three dimensions) and the existence of logarithmic factors in the asymptotics which define the two critical exponents. (orig.)

  2. Evaluation of methods for characterizing the melting curves of a high temperature cobalt-carbon fixed point to define and determine its melting temperature

    Science.gov (United States)

    Lowe, David; Machin, Graham

    2012-06-01

    The future mise en pratique for the realization of the kelvin will be founded on the melting temperatures of particular metal-carbon eutectic alloys as thermodynamic temperature references. However, at the moment there is no consensus on what should be taken as the melting temperature. An ideal melting or freezing curve should be a completely flat plateau at a specific temperature. Any departure from the ideal is due to shortcomings in the realization and should be accommodated within the uncertainty budget. However, for the proposed alloy-based fixed points, melting takes place over typically some hundreds of millikelvins. Including the entire melting range within the uncertainties would lead to an unnecessarily pessimistic view of the utility of these as reference standards. Therefore, detailed analysis of the shape of the melting curve is needed to give a value associated with some identifiable aspect of the phase transition. A range of approaches are or could be used; some purely practical, determining the point of inflection (POI) of the melting curve, some attempting to extrapolate to the liquidus temperature just at the end of melting, and a method that claims to give the liquidus temperature and an impurity correction based on the analytical Scheil model of solidification that has not previously been applied to eutectic melting. The different methods have been applied to cobalt-carbon melting curves that were obtained under conditions for which the Scheil model might be valid. In the light of the findings of this study it is recommended that the POI continue to be used as a pragmatic measure of temperature but where required a specified limits approach should be used to define and determine the melting temperature.
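
    The point-of-inflection (POI) measure can be sketched numerically: on a melting curve, the POI is where the temperature's time derivative peaks (equivalently, where the second derivative crosses zero). The example below uses a synthetic sigmoid in place of real Co-C fixed-point data; all numbers are illustrative only.

```python
import math

# Synthetic melting curve: temperature rising through a ~300 mK melting
# range as a smooth sigmoid (illustrative; not real Co-C measurements).
times = [i * 0.01 for i in range(1001)]  # arbitrary time units, 0 .. 10
temps = [1597.0 + 0.3 / (1.0 + math.exp(-(t - 5.0) / 0.8)) for t in times]

# Estimate dT/dt with central differences and take its maximum: the POI.
slopes = [(temps[i + 1] - temps[i - 1]) / (times[i + 1] - times[i - 1])
          for i in range(1, len(times) - 1)]
poi_i = 1 + max(range(len(slopes)), key=slopes.__getitem__)
poi_time, poi_temp = times[poi_i], temps[poi_i]
```

    For this symmetric curve the POI sits at the midpoint of the melting range; on real fixed-point data the POI is a pragmatic, reproducible marker rather than the liquidus temperature itself, which is why the abstract recommends pairing it with a specified-limits approach.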

  3. 26 CFR 1.1250-2 - Additional depreciation defined.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Additional depreciation defined. 1.1250-2... Additional depreciation defined. (a) In general—(1) Definition for purposes of section 1250(b)(1). Except as... depreciation means: (i) In the case of property which at the time of disposition has a holding period under...

  4. Identification of the key parameters defining the life of graphite core components

    International Nuclear Information System (INIS)

    Mitchell, M.N.

    2005-01-01

    The Core Structures of a Pebble Bed reactor core comprise graphite reflectors constructed from blocks. These blocks are subject to high flux and temperatures as well as significant gradients in flux and temperature. This loading, combined with the behaviour of graphite under irradiation, gives rise to complex stress states within the reflector blocks. At some point, the stress state will reach a critical level and cracks will initiate within the blocks. The point of crack initiation is a useful point to define as the end of the part's life. The life of these graphite reflector parts in a pebble bed reactor (PBR) core determines the service life of the Core Structures. The replacement of the Core Structures' components would be costly and time-consuming, so it is important that the components of the Core Structures be designed for the best life possible. As part of the conceptual design of the Pebble Bed Modular Reactor (PBMR), the assessment of the life of these components was examined. To facilitate understanding of the parameters that influence the design life of the PBMR, a study has been completed into the effect of various design parameters on the design life of a typical side reflector block. Parameters investigated include block geometry, material property variations, and load variations. The results of this study are to be presented. (author)

  5. Break-Even Point (BEP) Analysis of Company Profit

    Directory of Open Access Journals (Sweden)

    Muhammad Yusuf

    2015-09-01

    The break-even point (BEP) can be defined as the situation in which a company's operations yield neither a profit nor a loss. The goal is to provide knowledge about the break-even point and its relationship to company profit, and to show how the analysis is carried out. Break-even analysis is very important for company leadership: determining the production level at which total cost equals total sales, in other words determining the break-even point, establishes the relationship between sales, production, selling price, cost, and loss or profit, making it easier for leaders to make decisions.DOI: 10.15408/ess.v4i1.1955
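
    The break-even relationship described above is the point where total revenue equals total cost. A minimal sketch with invented figures (function and variable names are ours):

```python
def break_even_units(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Units at which total revenue equals total cost (no profit, no loss):
    fixed costs divided by the contribution margin per unit."""
    contribution_margin = price_per_unit - variable_cost_per_unit
    if contribution_margin <= 0:
        raise ValueError("price must exceed variable cost per unit")
    return fixed_costs / contribution_margin

# Example: fixed costs 50,000; unit price 25; variable cost 15 per unit.
units = break_even_units(50_000, 25, 15)
sales_value = units * 25  # break-even point expressed in revenue
```

    Below this production level the company operates at a loss; above it, each additional unit contributes its full margin to profit.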

  6. Time-resolved spectroscopy defines perturbation in molecules

    International Nuclear Information System (INIS)

    Ahmed, K.

    1998-01-01

    Time-resolved LIF spectroscopy is employed to investigate perturbations in different excited electronic states of alkali molecules. Dunham coefficients are used to locate selected excited ro-vibrational levels which overlap with other nearby excited states. Lifetime measurements have been performed for more than 50 ro-vibrational levels. Of these, 25 levels were observed to have drastically different lifetimes from the other, unperturbed levels. In this report, the influence of different perturbations on this anomalous behavior is investigated and discussed. (author)

  7. Two-sorted Point-Interval Temporal Logics

    DEFF Research Database (Denmark)

    Balbiani, Philippe; Goranko, Valentin; Sciavicco, Guido

    2011-01-01

    There are two natural and well-studied approaches to temporal ontology and reasoning: point-based and interval-based. Usually, interval-based temporal reasoning deals with points as particular, duration-less intervals. Here we develop an explicitly two-sorted point-interval temporal logical framework whereby time instants (points) and time periods (intervals) are considered on a par, and the perspective can shift between them within the formal discourse. We focus on fragments involving only modal operators that correspond to the inter-sort relations between points and intervals. We analyze...

  8. Geometrical prediction of maximum power point for photovoltaics

    International Nuclear Information System (INIS)

    Kumar, Gaurav; Panchal, Ashish K.

    2014-01-01

    Highlights: • Direct MPP finding by a parallelogram constructed from the geometry of the I–V curve of a cell. • Exact values of V and P at the MPP obtained by Lagrangian interpolation exploration. • Extensive use of Lagrangian interpolation for implementation of the proposed method. • Method programmed on a C platform with minimum computational burden. - Abstract: It is important to drive a solar photovoltaic (PV) system to its utmost capacity using maximum power point (MPP) tracking algorithms. This paper presents a direct MPP prediction method for a PV system considering the geometry of the I–V characteristic of a solar cell and a module. In the first step, known as parallelogram exploration (PGE), the MPP is determined from a parallelogram constructed using the open circuit (OC) and short circuit (SC) points of the I–V characteristic and Lagrangian interpolation. In the second step, accurate values of voltage and power at the MPP, defined as V mp and P mp respectively, are decided by the Lagrangian interpolation formula, known as Lagrangian interpolation exploration (LIE). Specifically, this method works with a few (V, I) data points, whereas most MPP algorithms work with (P, V) data points. The performance of the method is examined for several PV technologies including silicon, copper indium gallium selenide (CIGS), copper zinc tin sulphide selenide (CZTSSe), organic, dye-sensitized solar cell (DSSC) and organic tandem cells' data previously reported in the literature. The effectiveness of the method is tested experimentally for a few silicon cells' I–V characteristics considering variation in light intensity and temperature. Finally, the method is also employed for a 10 W silicon module tested in the field. To verify the precision of the method, the absolute value of the derivative of power (P) with respect to voltage (V), |dP/dV|, is evaluated and plotted against V. The method estimates the MPP parameters with high accuracy for any
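
    The core ingredient, Lagrangian interpolation over a few (V, I) samples followed by a search for the maximum of P = V·I, can be sketched as follows. The sample values and the brute-force scan are our own illustration; this is not the paper's PGE/LIE procedure.

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolating polynomial through the given
    (x_i, y_i) pairs at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)  # Lagrange basis factor
        total += term
    return total

# A few measured (V, I) samples of a cell-like I-V curve (invented values,
# volts and amperes), from short circuit (V=0) to open circuit (I=0).
samples = [(0.0, 3.00), (0.3, 2.95), (0.45, 2.60), (0.55, 1.40), (0.6, 0.0)]

# Scan the interpolated curve for the voltage maximizing P = V * I.
best_v, best_p = max(
    ((v, v * lagrange(samples, v)) for v in (k / 1000 for k in range(601))),
    key=lambda vp: vp[1],
)
```

    With only five (V, I) measurements the interpolant reproduces the curve between samples, so the MPP can be located without densely sweeping the operating point.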

  9. Guidelines for time-to-event end point definitions in breast cancer trials: results of the DATECAN initiative (Definition for the Assessment of Time-to-event Endpoints in CANcer trials)†.

    Science.gov (United States)

    Gourgou-Bourgade, S; Cameron, D; Poortmans, P; Asselain, B; Azria, D; Cardoso, F; A'Hern, R; Bliss, J; Bogaerts, J; Bonnefoi, H; Brain, E; Cardoso, M J; Chibaudel, B; Coleman, R; Cufer, T; Dal Lago, L; Dalenc, F; De Azambuja, E; Debled, M; Delaloge, S; Filleron, T; Gligorov, J; Gutowski, M; Jacot, W; Kirkove, C; MacGrogan, G; Michiels, S; Negreiros, I; Offersen, B V; Penault Llorca, F; Pruneri, G; Roche, H; Russell, N S; Schmitt, F; Servent, V; Thürlimann, B; Untch, M; van der Hage, J A; van Tienhoven, G; Wildiers, H; Yarnold, J; Bonnetain, F; Mathoulin-Pélissier, S; Bellera, C; Dabakuyo-Yonli, T S

    2015-05-01

    Using surrogate end points for overall survival, such as disease-free survival, is increasingly common in randomized controlled trials. However, the definitions of several of these time-to-event (TTE) end points are imprecise, which limits interpretation and cross-trial comparisons. The estimation of treatment effects may be directly affected by the definitions of end points. The DATECAN initiative (Definition for the Assessment of Time-to-event Endpoints in CANcer trials) aims to provide recommendations for definitions of TTE end points. We report guidelines for randomized cancer clinical trials (RCTs) in breast cancer. A literature review was carried out to identify TTE end points (primary or secondary) reported in publications of randomized trials or guidelines. An international multidisciplinary panel of experts proposed recommendations for the definitions of these end points based on a validated consensus method that formalizes the degree of agreement among experts. Recommended guidelines for the definitions of TTE end points commonly used in RCTs for breast cancer are provided for non-metastatic and metastatic settings. The use of standardized definitions should facilitate comparisons of trial results and improve the quality of trial design and reporting. These guidelines could be of particular interest to those involved in the design, conduct, reporting, or assessment of RCTs.

  10. PolyWaTT: A polynomial water travel time estimator based on Derivative Dynamic Time Warping and Perceptually Important Points

    Science.gov (United States)

    Claure, Yuri Navarro; Matsubara, Edson Takashi; Padovani, Carlos; Prati, Ronaldo Cristiano

    2018-03-01

    Traditional methods for estimating timing parameters in hydrological science require a rigorous study of the relations of flow resistance, slope, flow regime, watershed size, water velocity, and other local variables. These studies are mostly based on empirical observations, where the timing parameter is estimated using empirically derived formulas. The application of these studies to other locations is not always direct: the locations in which the equations are used should have characteristics comparable to those of the locations from which the equations were derived. To overcome this barrier, in this work we developed a data-driven approach to estimate timing parameters such as travel time. Our proposal estimates timing parameters using historical data of the location itself, without the need to adapt or use empirical formulas from other locations. The proposal uses only one variable measured at two different locations on the same river (for instance, two river-level measurements, one upstream and the other downstream on the same river). The recorded data from each location generate two time series. Our method aligns these two time series using derivative dynamic time warping (DDTW) and perceptually important points (PIP). Using data from timing parameters, a polynomial function generalizes the data by inducing a polynomial water travel time estimator, called PolyWaTT. To evaluate the potential of our proposal, we applied PolyWaTT to three different watersheds: a floodplain ecosystem located in the part of Brazil known as the Pantanal, the world's largest tropical wetland area; and the Missouri River and the Pearl River, in the United States of America. We compared our proposal with empirical formulas and a data-driven state-of-the-art method. 
The experimental results demonstrate that PolyWaTT showed a lower mean absolute error than all other methods tested in this study, and for longer distances the mean absolute error achieved by PolyWaTT is three times smaller than empirical
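    The alignment idea behind PolyWaTT can be illustrated with plain dynamic time warping on synthetic data (the paper uses derivative DTW plus perceptually important points; this simplified Python sketch recovers the lag between two level series as the most frequent index offset along the warping path):

```python
# Simplified stand-in for the paper's DDTW+PIP pipeline: align an upstream
# and a downstream river-level series with classic DTW and read the travel
# time off the warping path. The series below are synthetic.

def dtw_path(a, b):
    """Classic O(n*m) dynamic time warping; returns the optimal warping path."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        # Tuple comparison prefers the diagonal predecessor on cost ties
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j], i - 1, j),
                      (cost[i][j - 1], i, j - 1))
    return list(reversed(path))

def estimate_lag(upstream, downstream):
    """Most frequent index offset along the warping path (lag in samples)."""
    offsets = [j - i for i, j in dtw_path(upstream, downstream)]
    return max(set(offsets), key=offsets.count)

# Synthetic flood wave: the downstream gauge sees the same wave 5 steps later
upstream = [0, 0, 1, 3, 6, 8, 7, 5, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
downstream = [0] * 5 + upstream[:-5]
lag = estimate_lag(upstream, downstream)
print(lag)  # recovers the 5-sample shift
```

    In the real method the lag would be converted to a travel time via the sampling interval, and the polynomial estimator is then fitted over many such (distance, lag) pairs.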

  11. Methodology to Define Delivery Accuracy Under Current Day ATC Operations

    Science.gov (United States)

    Sharma, Shivanjli; Robinson, John E., III

    2015-01-01

    In order to enable arrival management concepts and solutions in a NextGen environment, ground-based sequencing and scheduling functions have been developed to support metering operations in the National Airspace System. These sequencing and scheduling algorithms and tools are designed to aid air traffic controllers in developing an overall arrival strategy. The ground systems being developed will support the management of aircraft to their Scheduled Times of Arrival (STAs) at flow-constrained meter points. This paper presents a methodology for determining the undelayed delivery accuracy of current day air traffic control operations. This new method analyzes the undelayed delivery accuracy at meter points in order to understand changes in desired flow rates, and enables the definition of metrics that will allow near-future ground automation tools to successfully achieve the desired separation at the meter points. This enables aircraft to meet their STAs while performing high-precision arrivals. The research presents a possible implementation that would allow the delivery performance of current tools to be estimated and delivery accuracy requirements for future tools to be defined, which allows analysis of Estimated Time of Arrival (ETA) accuracy for Time-Based Flow Management (TBFM) and the FAA's Traffic Management Advisor (TMA). TMA is a deployed system that generates scheduled time-of-arrival constraints for en-route air traffic controllers in the US. This new method of automated analysis provides a repeatable evaluation of the delay metrics for current day traffic, new releases of TMA, implementation of different tools, and across different airspace environments. This method utilizes a wide set of data from the Operational TMA-TBFM Repository (OTTR) system, which processes raw data collected by the FAA from operational TMA systems at all ARTCCs in the nation. The OTTR system generates daily reports concerning ATC status, intent and actions. Due to its

  12. Zero-point field in curved spaces

    International Nuclear Information System (INIS)

    Hacyan, S.; Sarmiento, A.; Cocho, G.; Soto, F.

    1985-01-01

    Boyer's conjecture that the thermal effects of acceleration are manifestations of the zero-point field is further investigated within the context of quantum field theory in curved spaces. The energy-momentum current for a spinless field is defined rigorously and used as the basis for investigating the energy density observed in a noninertial frame. The following examples are considered: (i) uniformly accelerated observers, (ii) two-dimensional Schwarzschild black holes, (iii) the Einstein universe. The energy spectra which have been previously calculated appear in the present formalism as an additional contribution to the energy of the zero-point field, but particle creation does not occur. It is suggested that the radiation produced by gravitational fields or by acceleration is a manifestation of the zero-point field and of the same nature (whether real or virtual)

  13. Determining a young dancer's readiness for dancing on pointe.

    Science.gov (United States)

    Shah, Selina

    2009-01-01

    Ballet is one of the most popular youth activities in the United States. Many ballet students eventually train to dance "en pointe," the French words for "on pointe," or "on the tips of their toes." No research exists to define criteria for determining when a young dancer can transition from dancing in ballet slippers to dancing in pointe shoes. However, dancers can be evaluated for this progression based on a number of factors, including adequate foot and ankle plantarflexion, technique, training, proprioception, alignment, and strength.

  14. Efficient Algorithms for Segmentation of Item-Set Time Series

    Science.gov (United States)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
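    A minimal Python sketch of the dynamic-programming scheme outlined above follows. The concrete choices here are assumptions for illustration: the measure function is set intersection, and the segment difference is the total symmetric difference between each time point's item set and the segment's item set (the paper defines several measure functions).

```python
# Hedged sketch of optimal item-set time series segmentation via dynamic
# programming. Assumed choices (not necessarily the paper's): measure
# function = intersection of the segment's item sets; segment difference =
# sum of symmetric differences against the segment's item set.

def segment_items(series, lo, hi):
    """Measure function: intersection of the item sets in series[lo:hi]."""
    result = set(series[lo])
    for s in series[lo + 1:hi]:
        result &= s
    return result

def segment_difference(series, lo, hi):
    """Sum of symmetric differences between each point and the segment set."""
    seg = segment_items(series, lo, hi)
    return sum(len(seg ^ s) for s in series[lo:hi])

def optimal_segmentation(series, k):
    """Partition series into k contiguous segments minimizing total difference."""
    n = len(series)
    INF = float("inf")
    best = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0.0
    for seg in range(1, k + 1):
        for end in range(1, n + 1):
            for start in range(seg - 1, end):
                c = best[seg - 1][start] + segment_difference(series, start, end)
                if c < best[seg][end]:
                    best[seg][end], cut[seg][end] = c, start
    bounds, end = [], n          # recover the segment boundaries
    for seg in range(k, 0, -1):
        bounds.append((cut[seg][end], end))
        end = cut[seg][end]
    return list(reversed(bounds)), best[k][n]

# Toy item-set time series with two regimes of items
series = [{"a", "b"}, {"a", "b"}, {"a", "b", "c"}, {"x", "y"}, {"x"}, {"x", "y"}]
bounds, cost = optimal_segmentation(series, 2)
print(bounds)  # → [(0, 3), (3, 6)]
```

    The cubic-time loop above recomputes segment differences naively; the algorithms in the paper are precisely about computing those values efficiently for each measure function.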

  15. Developing a Tool Point Control Scheme for a Hydraulic Crane Using Interactive Real-time Dynamic Simulation

    DEFF Research Database (Denmark)

    Pedersen, Mikkel Melters; Hansen, Michael Rygaard; Ballebye, Morten

    2010-01-01

    This paper describes the implementation of an interactive real-time dynamic simulation model of a hydraulic crane. The user input to the model is given continuously via joystick and output is presented continuously in a 3D animation. Using this simulation model, a tool point control scheme is developed for the specific crane, considering the saturation phenomena of the system and practical implementation.

  16. Real-time multi-GNSS single-frequency precise point positioning

    NARCIS (Netherlands)

    de Bakker, P.F.; Tiberius, C.C.J.M.

    2017-01-01

    Precise Point Positioning (PPP) is a popular Global Positioning System (GPS) processing strategy, thanks to its high precision without requiring additional GPS infrastructure. Single-Frequency PPP (SF-PPP) takes this one step further by no longer relying on expensive dual-frequency GPS receivers,

  17. Reactor coolant flow measurements at Point Lepreau

    International Nuclear Information System (INIS)

    Brenciaglia, G.; Gurevich, Y.; Liu, G.

    1996-01-01

    The CROSSFLOW ultrasonic flow measurement system manufactured by AMAG is fully proven as reliable and accurate when applied to large piping in defined geometries for such applications as feedwater flow measurement. Its application to direct reactor coolant flow (RCF) measurements - both individual channel flows and bulk flows such as pump suction flow - has been well established through recent work by AMAG at Point Lepreau, with application to other reactor types (e.g., PWR) imminent. At Point Lepreau, measurements have been demonstrated at full power; improvements to consistently meet ±1% accuracy are in progress. The development and recent customization of CROSSFLOW for RCF measurement at Point Lepreau are described in this paper; typical measurement results are included. (author)

  18. Cluster Tracking with Time-of-Flight Cameras

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Hansen, Mads; Kirschmeyer, Martin

    2008-01-01

    We describe a method for tracking people using a time-of-flight camera and apply the method for persistent authentication in a smart environment. A background model is built by fusing information from intensity and depth images. A geometric constraint is employed to improve pixel cluster coherence and reduce the influence of noise, while the EM algorithm (expectation maximization) is used for tracking moving clusters of pixels significantly different from the background model. Each cluster is defined through a statistical model of points on the ground plane. We show the benefits of the time...

  19. Validation and Assessment of Multi-GNSS Real-Time Precise Point Positioning in Simulated Kinematic Mode Using IGS Real-Time Service

    Directory of Open Access Journals (Sweden)

    Liang Wang

    2018-02-01

    Full Text Available Precise Point Positioning (PPP) is a popular technology for precise applications based on the Global Navigation Satellite System (GNSS). Multi-GNSS combined PPP has become a hot topic in recent years with the development of multiple GNSSs. Meanwhile, with the operation of the real-time service (RTS) of the International GNSS Service (IGS), which provides satellite orbit and clock corrections to the broadcast ephemeris, it is possible to obtain real-time precise products of satellite orbits and clocks and to conduct real-time PPP. In this contribution, the real-time multi-GNSS orbit and clock corrections of the CLK93 product are applied for real-time multi-GNSS PPP processing, and their orbit and clock qualities are investigated, first in a seven-day experiment comparing them with the final multi-GNSS precise product ‘GBM’ from GFZ. Then, an experiment involving real-time PPP processing for three stations in the Multi-GNSS Experiment (MGEX) network with a testing period of two weeks is conducted in order to evaluate the convergence performance of real-time PPP in a simulated kinematic mode. The experimental result shows that real-time PPP can achieve a convergence performance of less than 15 min for an accuracy level of 20 cm. Finally, the real-time data streams from 12 globally distributed IGS/MGEX stations for one month are used to assess and validate the positioning accuracy of real-time multi-GNSS PPP. The results show that the simulated kinematic positioning accuracy achieved by real-time PPP at different stations is about 3.0 to 4.0 cm in the horizontal direction and 5.0 to 7.0 cm in the three-dimensional (3D) direction.

  20. Double point source W-phase inversion: Real-time implementation and automated model selection

    Science.gov (United States)

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
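    The model-selection step can be illustrated as follows. For least-squares waveform fits with Gaussian errors, AIC can be written as n·ln(RSS/n) + 2k; the misfit values and parameter counts below are illustrative, not taken from the paper's inversions.

```python
# Hedged sketch of AIC-based selection between a single- and a
# double-point-source fit. All numbers are illustrative placeholders.

import math

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit with n samples
    and k free parameters (Gaussian-error form)."""
    return n * math.log(rss / n) + 2 * k

n = 500                           # number of waveform samples (illustrative)
rss_single, k_single = 4.2, 6     # single source: one moment tensor
rss_double, k_double = 3.9, 12    # double source: two moment tensors

aic_1 = aic(rss_single, n, k_single)
aic_2 = aic(rss_double, n, k_double)
chosen = "double" if aic_2 < aic_1 else "single"
print(chosen)
```

    The doubled parameter count penalizes the two-source model, so it is selected only when the misfit reduction is large enough to justify the extra parameters, which is exactly the trade-off the AIC test in the paper arbitrates.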

  1. Estimating the physicochemical properties of polyhalogenated aromatic and aliphatic compounds using UPPER: part 1. Boiling point and melting point.

    Science.gov (United States)

    Admire, Brittany; Lian, Bo; Yalkowsky, Samuel H

    2015-01-01

    The UPPER (Unified Physicochemical Property Estimation Relationships) model uses enthalpic and entropic parameters to estimate 20 biologically relevant properties of organic compounds. The model has been validated by Lian and Yalkowsky on a data set of 700 hydrocarbons. The aim of this work is to expand the UPPER model to estimate the boiling and melting points of polyhalogenated compounds. In this work, 19 new group descriptors are defined and used to predict the transition temperatures of an additional 1288 compounds. The boiling points of 808 and the melting points of 742 polyhalogenated compounds are predicted with average absolute errors of 13.56 K and 25.85 K, respectively.
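    The additive enthalpy/entropy idea behind UPPER can be caricatured in a few lines: the boiling point is the ratio of an additive enthalpy of boiling to an additive entropy of boiling, Tb = ΔHb/ΔSb, with each sum taken over group descriptors. The group values and the fragmentation below are hypothetical placeholders, not the paper's published descriptors.

```python
# Hedged caricature of a group-contribution boiling-point estimate in the
# spirit of UPPER: Tb = (sum of group enthalpies) / (sum of group entropies).
# All group values here are hypothetical placeholders, NOT the published ones.

GROUP_DH = {"CH3": 4.0, "CH2": 2.5, "Cl": 5.5}        # kJ/mol, illustrative
GROUP_DS = {"CH3": 0.015, "CH2": 0.008, "Cl": 0.014}  # kJ/(mol*K), illustrative

def boiling_point(groups):
    """Tb (K) as the ratio of additive enthalpy to additive entropy."""
    dh = sum(GROUP_DH[g] * n for g, n in groups.items())
    ds = sum(GROUP_DS[g] * n for g, n in groups.items())
    return dh / ds

# 1-chloropropane fragmented as CH3-CH2-CH2-Cl (fragmentation illustrative)
tb = boiling_point({"CH3": 1, "CH2": 2, "Cl": 1})
print(round(tb, 1))
```

    The real model's group descriptors are regressed against experimental data; the point of the sketch is only the additive enthalpy-over-entropy structure.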

  2. Negative magnetoresistance without well-defined chirality in the Weyl semimetal TaP.

    Science.gov (United States)

    Arnold, Frank; Shekhar, Chandra; Wu, Shu-Chun; Sun, Yan; Dos Reis, Ricardo Donizeth; Kumar, Nitesh; Naumann, Marcel; Ajeesh, Mukkattu O; Schmidt, Marcus; Grushin, Adolfo G; Bardarson, Jens H; Baenitz, Michael; Sokolov, Dmitry; Borrmann, Horst; Nicklas, Michael; Felser, Claudia; Hassinger, Elena; Yan, Binghai

    2016-05-17

    Weyl semimetals (WSMs) are topological quantum states wherein the electronic bands disperse linearly around pairs of nodes with fixed chirality, the Weyl points. In WSMs, nonorthogonal electric and magnetic fields induce an exotic phenomenon known as the chiral anomaly, resulting in an unconventional negative longitudinal magnetoresistance, the chiral-magnetic effect. However, it remains an open question to what extent this effect survives when chirality is not well-defined. Here, we establish the detailed Fermi-surface topology of the recently identified WSM TaP via combined angle-resolved quantum-oscillation spectra and band-structure calculations. The Fermi surface forms banana-shaped electron and hole pockets surrounding pairs of Weyl points. Although this means that chirality is ill-defined in TaP, we observe a large negative longitudinal magnetoresistance. We show that the magnetoresistance can be affected by a magnetic field-induced inhomogeneous current distribution inside the sample.

  3. Dual-time-point FDG-PET/CT Imaging of Temporal Bone Chondroblastoma: A Report of Two Cases

    Directory of Open Access Journals (Sweden)

    Akira Toriihara

    2015-07-01

    Full Text Available Temporal bone chondroblastoma is an extremely rare benign bone tumor. We encountered two cases showing similar imaging findings on computed tomography (CT), magnetic resonance imaging (MRI), and dual-time-point 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT). In both cases, CT images revealed temporal bone defects and sclerotic changes around the tumor. Most parts of the tumor showed low signal intensity on T2-weighted MRI images and non-uniform enhancement on gadolinium contrast-enhanced T1-weighted images. No increase in signal intensity was noted in diffusion-weighted images. Dual-time-point PET/CT showed markedly elevated 18F-FDG uptake, which increased from the early to the delayed phase. Nevertheless, immunohistochemical analysis of the resected tumor tissue revealed weak expression of glucose transporter-1 and hexokinase II in both tumors. Temporal bone tumors showing markedly elevated 18F-FDG uptake that increases from the early to the delayed phase on PET/CT images may be diagnosed as malignant bone tumors. Therefore, the differential diagnosis should include chondroblastoma in combination with its characteristic findings on CT and MRI.

  4. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    Directory of Open Access Journals (Sweden)

    J. Tang

    2017-09-01

    Full Text Available Leaves falling gently or fluttering is a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point cloud in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling, and screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  5. Shot-noise-weighted processes : a new family of spatial point processes

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette); I.S. Molchanov (Ilya)

    1995-01-01

    The paper suggests a new family of spatial point process distributions. They are defined by means of densities with respect to the Poisson point process within a bounded set. These densities are given in terms of a functional of the shot-noise process with a given influence

  6. A different outlook on time: visual and auditory month names elicit different mental vantage points for a time-space synaesthete.

    Science.gov (United States)

    Jarick, Michelle; Dixon, Mike J; Stewart, Mark T; Maxwell, Emily C; Smilek, Daniel

    2009-01-01

    Synaesthesia is a fascinating condition whereby individuals report extraordinary experiences when presented with ordinary stimuli. Here we examined an individual (L) who experiences time units (i.e., months of the year and hours of the day) as occupying specific spatial locations (January is 30 degrees to the left of midline). This form of time-space synaesthesia has been recently investigated by Smilek et al. (2007) who demonstrated that synaesthetic time-space associations are highly consistent, occur regardless of intention, and can direct spatial attention. We extended this work by showing that for the synaesthete L, her time-space vantage point changes depending on whether the time units are seen or heard. For example, when L sees the word JANUARY, she reports experiencing January on her left side, however when she hears the word "January" she experiences the month on her right side. L's subjective reports were validated using a spatial cueing paradigm. The names of months were centrally presented followed by targets on the left or right. L was faster at detecting targets in validly cued locations relative to invalidly cued locations both for visually presented cues (January orients attention to the left) and for aurally presented cues (January orients attention to the right). We replicated this difference in visual and aural cueing effects using hour of the day. Our findings support previous research showing that time-space synaesthesia can bias visual spatial attention, and further suggest that for this synaesthete, time-space associations differ depending on whether they are visually or aurally induced.

  7. Real Time Precise Point Positioning: Preliminary Results for the Brazilian Region

    Science.gov (United States)

    Marques, Haroldo; Monico, João.; Hirokazu Shimabukuro, Milton; Aquino, Marcio

    2010-05-01

    GNSS positioning can be carried out with a relative or an absolute approach. In recent years, more attention has been drawn to real-time precise point positioning (PPP). To achieve centimeter accuracy with this method in real time, it is necessary to have available the precise satellite coordinates as well as satellite clock corrections. The coordinates can be taken from the predicted IGU ephemeris, but the satellite clocks must be estimated in real time. This can be done from a GNSS network, as can be seen in the EUREF Permanent Network. The infrastructure to realize real-time PPP is becoming available in Brazil through the Brazilian Continuous Monitoring Network (RBMC) together with the Sao Paulo State GNSS network, which are transmitting GNSS data using an NTRIP (Networked Transport of RTCM via Internet Protocol) caster. Based on this, a PhD thesis was proposed at Univ. Estadual Paulista (UNESP) aiming to investigate and develop the methodology to estimate the satellite clocks and realize PPP in real time. Software is therefore being developed to process GNSS data in real-time PPP mode. A preliminary version of the software, called PPP_RT, is able to process GNSS code and phase data using precise ephemerides and satellite clocks. The PPP processing can be accomplished considering the absolute satellite antenna Phase Center Variation (PCV), Ocean Tide Loading (OTL), Earth Body Tide, among others. The first-order ionospheric effects can be eliminated or minimized by the ion-free combination or parameterized in the receiver-satellite direction using a stochastic process, e.g. random walk or white noise. In the case of ionosphere estimation, a pseudo-observable is introduced in the mathematical model for each satellite, and the initial value can be computed from the Klobuchar model or from a Global Ionospheric Map (GIM). 
The adjustment is realized in the recursive mode and the DIA (Detection Identification and Adaptation) is used for quality control. In
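    The ion-free combination mentioned above is standard and easy to state in code: for two pseudoranges on frequencies f1 and f2, the combination (f1²·P1 − f2²·P2)/(f1² − f2²) cancels the first-order (1/f²) ionospheric delay. The Python sketch below uses the standard GPS L1/L2 frequencies; the range and TEC values are illustrative.

```python
# First-order ionosphere-free (IF) combination of GPS L1/L2 pseudoranges.
# Frequencies are the standard GPS values; the numbers below are illustrative.

f1 = 1575.42e6  # GPS L1 frequency (Hz)
f2 = 1227.60e6  # GPS L2 frequency (Hz)

def ionosphere_free(p1, p2):
    """IF combination of two pseudoranges (meters)."""
    g1, g2 = f1 ** 2, f2 ** 2
    return (g1 * p1 - g2 * p2) / (g1 - g2)

# Illustrative: geometric range plus a dispersive delay scaling as 1/f^2
# (first-order ionospheric term, 40.3 * TEC / f^2 with TEC in electrons/m^2)
tec = 2.0e17
rho = 22_000_000.0
p1 = rho + 40.3 * tec / f1 ** 2
p2 = rho + 40.3 * tec / f2 ** 2
p_if = ionosphere_free(p1, p2)
print(abs(p_if - rho))  # residual is at the floating-point noise level
```

    The price of the combination is amplified measurement noise, which is one reason the abstract also considers estimating the ionosphere as a stochastic parameter instead.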

  8. Computing fixed points of nonexpansive mappings by $\\alpha$-dense curves

    Directory of Open Access Journals (Sweden)

    G. García

    2017-08-01

    Full Text Available Given a multivalued nonexpansive mapping defined on a convex and compact set of a Banach space, with values in the class of convex and compact subsets of its domain, we present an iteration scheme which (under suitable conditions) converges to a fixed point of such a mapping. This new iteration provides another method to approximate the fixed points of a singlevalued nonexpansive mapping defined on a compact and convex set into itself. Moreover, the conditions for the singlevalued case are less restrictive than for the multivalued case. Our main tool will be the so-called $\alpha$-dense curves, which allow us to construct such iterations. Some numerical examples are provided to illustrate our results.

  9. Defining a BMI Cut-Off Point for the Iranian Population: The Shiraz Heart Study.

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Babai

    Full Text Available In this study we evaluated and redefined the optimum body mass index (BMI) cut-off point for the Iranian population based on metabolic syndrome (MeS) risk factors. We further evaluated BMI cut-off points with and without waist circumference (WC) as a cofactor of risk and compared the differences. This study is part of one of the largest surveillance programs conducted in Shiraz, Iran, termed the Shiraz Heart Study. Our study sample included subjects between the ages of 20 and 65 years. After excluding pregnant women, those with missing data, and those with comorbid disease, a total of 12283 subjects made up the study population. The participants underwent a series of tests and evaluations by trained professionals in accordance with WHO recommendations. Hypertension and abnormal fasting blood sugar (FBS), triglyceride (TG), and high-density lipoprotein cholesterol (HDL) levels (in the context of the definition of metabolic syndrome) were prevalent among 32.4%, 27.6%, 42.1%, and 44.2% of our participants, respectively. Women displayed higher rates of overall obesity compared to men (based on the WHO definition of BMI higher than 30 kg/m2). Regarding MeS, 38.9% of our population had all the symptoms of MeS, which was more prevalent among women (41.5% vs. 36%). When excluding WC from the definition of MeS, results showed that males tend to show a higher rate of metabolic risk factors (19.2% vs. 15.6%). Results of multivariate analysis showed that, parallel to an increase in BMI, the odds ratio (OR) for acquiring each component of the metabolic syndrome increased (OR = 1.178; CI: 1.166-1.190). By excluding WC, this OR decreased (OR = 1.105; CI: 1.093-1.118). Receiver Operating Characteristic (ROC) curve analysis showed that the optimum BMI cut-off point for predicting metabolic syndrome was 26.1 kg/m2 and 26.2 kg/m2 [Accuracy (Acc) = 69% and 61%, respectively] for males and females, respectively. The overall BMI cut-off for both sexes was 26.2 kg/m2 (Acc = 65%) with sensitivity and
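    The abstract does not state which ROC criterion was maximized; a common choice, sketched below in Python on synthetic data, is Youden's J statistic (sensitivity + specificity − 1), evaluated at every candidate BMI threshold.

```python
# Hedged sketch of choosing a BMI cut-off from ROC analysis via Youden's J.
# The data are synthetic; the paper reports an optimum near 26 kg/m^2.

def youden_cutoff(values, labels):
    """Return (threshold, J) maximizing sensitivity + specificity - 1."""
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v < cut and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < cut and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Synthetic BMI values with metabolic-syndrome labels (1 = MeS present)
bmi =    [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]
labels = [0,  0,  0,  0,  0,  1,  0,  1,  1,  1,  1,  1]
cut, j = youden_cutoff(bmi, labels)
print(cut)
```

    The paper instead reports accuracy at the chosen cut-off; either criterion is read off the same ROC sweep over thresholds.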

  10. Dual time point FDG PET imaging in evaluating pulmonary nodules with low FDG avidity

    International Nuclear Information System (INIS)

    Chen Xiang; Zhao Jinhua; Song Jianhua; Xing Yan; Wang Taisong; Qiao Wenli

    2010-01-01

    A standardized uptake value (SUV) of 2.5 is frequently used as a criterion to evaluate pulmonary lesions. However, false results may occur. Some studies have shown the usefulness of delayed PET for improving accuracy, while others have recently shown less promising results. This study was designed to investigate the accuracy of dual time point (DTP) FDG PET imaging in the evaluation of pulmonary lesions with an initial SUV less than 2.5. DTP FDG PET studies were conducted about 1 and 2 hours after FDG injection, and pulmonary lesions with an initial SUV less than 2.5 were identified. Nodules with pathologic results or imaging follow-up were included. The differences in SUV and retention index (RI) between benign and malignant pulmonary lesions were analyzed. Receiver operating characteristic (ROC) analysis was performed to evaluate the discriminating validity of SUV and RI. 51 lesions were finally included. An RI greater than 0% was observed in 64% of the benign lesions; 56% had an RI greater than 10%. Among the malignancies, 80.8% had an RI greater than 0%, and 61.5% had an RI greater than 10%. We found no significant differences in SUV and RI between benign and malignant lesions. The area under the ROC curve did not differ from 0.5 whether using SUV or the retention index. Utilizing an SUV increase of 10% as the criterion, sensitivity was 61.5%, specificity 44%, and accuracy 52.9%. Dual time point FDG PET may not be of benefit in the evaluation of pulmonary nodules with low FDG avidity. (authors)
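    The retention index used above is the standard dual-time-point quantity: the percentage change in SUV from the early to the delayed scan. A one-line Python sketch (SUV values illustrative):

```python
# Retention index (RI) for dual-time-point FDG PET, as used in the study
# above: RI (%) = 100 * (SUV_delayed - SUV_early) / SUV_early.
# The SUV values below are illustrative.

def retention_index(suv_early, suv_delayed):
    """Percentage change in SUV between the early and delayed scans."""
    return 100.0 * (suv_delayed - suv_early) / suv_early

ri = retention_index(2.0, 2.3)
print(ri)  # a 15% rise, above the 10% threshold discussed above
```

    A lesion with RI above a chosen threshold (0% or 10% in the study) would be flagged as suspicious; the study's point is that this criterion discriminates poorly for low-avidity nodules.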

  11. Metric space construction for the boundary of space-time

    International Nuclear Information System (INIS)

    Meyer, D.A.

    1986-01-01

    A distance function between points in space-time is defined and used to consider the manifold as a topological metric space. The properties of the distance function are investigated: conditions under which the metric and manifold topologies agree, the relationship with the causal structure of the space-time and with the maximum lifetime function of Wald and Yip, and in terms of the space of causal curves. The space-time is then completed as a topological metric space; the resultant boundary is compared with the causal boundary and is also calculated for some pertinent examples

  12. Surface Charging and Points of Zero Charge

    CERN Document Server

    Kosmulski, Marek

    2009-01-01

    Presents Points of Zero Charge data on well-defined specimens of materials sorted by trademark, manufacturer, and location. This text emphasizes the comparison between particular results obtained for different portions of the same or very similar material and synthesizes the information published in research reports over the past few decades

  13. Past Negative Time Perspective as a Predictor of Grade Point Average in Occupational Therapy Doctoral Students

    Directory of Open Access Journals (Sweden)

    Pat J. Precin

    2017-05-01

    Full Text Available Time perspective is a fundamental dimension in psychological time, dividing human experiences into past, present, and future. Time perspective influences individuals’ functioning in all occupations, including education. Previous research has examined the relationship between time perspective and academic outcomes, but the same research has not been done, to date, with occupational therapy doctoral students. This quantitative, cross-sectional study investigated the relationship between time perspective and academic success in occupational therapy doctoral students across the United States. Data from the Zimbardo Time Perspective Inventory (ZTPI) and grade point averages (GPAs) were collected from 50 participants via surveymonkey.com. Past Negative time perspective statistically predicted GPA in the negative direction (p = .001) for students in pre-professional OTD programs, but did not predict GPA for post-professional students. Age, gender, and learning environment did not significantly influence the prediction of GPA in either group. The method and results of this study demonstrate that the ZTPI, an instrument used in the field of psychology, may have value in the profession of occupational therapy and occupational therapy doctoral programs.

  14. Word Length Selection Method for Controller Implementation on FPGAs Using the VHDL-2008 Fixed-Point and Floating-Point Packages

    Directory of Open Access Journals (Sweden)

    Urriza I

    2010-01-01

    This paper presents a word length selection method for the implementation of digital controllers in both fixed-point and floating-point hardware on FPGAs. The method uses the new types defined in the VHDL-2008 fixed-point and floating-point packages. These packages allow customizing the word length of fixed- and floating-point representations and shorten the design cycle by simplifying the design of arithmetic operations. The method performs bit-true simulations to determine the word length needed to represent the constant coefficients and the internal signals of the digital controller while maintaining the control system specifications. A mixed-signal simulation tool is used to simulate the closed-loop system as a whole in order to analyze the impact of quantization effects and loop delays on control system performance. The method is applied to implement a digital controller for a switching power converter. The digital circuit is implemented on an FPGA, and the simulations are experimentally verified.
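
    The word-length search itself is easy to illustrate outside VHDL. The Python sketch below is a stand-in for the paper's bit-true flow (the PI controller gains, plant model, and error tolerance are all invented): it increases the fractional word length until the step response with quantized coefficients matches the double-precision reference.

```python
# Bit-true word-length search sketch: find the smallest fractional word length
# whose quantized PI gains keep the closed-loop step response within tolerance.
import numpy as np

def quantize(x, frac_bits):
    """Round to the nearest multiple of 2**-frac_bits (fixed-point rounding)."""
    step = 2.0 ** -frac_bits
    return np.round(np.asarray(x) / step) * step

def pi_step_response(kp, ki, n=200, dt=1e-3):
    """Closed-loop unit-step response of a PI controller on the plant dy/dt = -y + u."""
    y, integ, out = 0.0, 0.0, []
    for _ in range(n):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u)          # forward-Euler plant update
        out.append(y)
    return np.array(out)

KP, KI, TOL = 1.7, 33.3, 1e-4       # invented gains and error budget
ref = pi_step_response(KP, KI)      # double-precision reference response
chosen = None
for frac_bits in range(4, 17):
    kp_q, ki_q = quantize([KP, KI], frac_bits)
    err = np.max(np.abs(pi_step_response(kp_q, ki_q) - ref))
    if err < TOL:
        chosen = frac_bits
        print(f"{frac_bits} fractional bits suffice (max error {err:.1e})")
        break
```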

  15. AUTOMATIC REGISTRATION OF TERRESTRIAL LASER SCANNER POINT CLOUDS USING NATURAL PLANAR SURFACES

    Directory of Open Access Journals (Sweden)

    P. W. Theiler

    2012-07-01

    Terrestrial laser scanners have become a standard piece of surveying equipment, used in diverse fields like geomatics, manufacturing and medicine. However, the processing of today's large point clouds is time-consuming, cumbersome and not automated enough. A basic step of post-processing is the registration of scans from different viewpoints. At present this is still done using artificial targets or tie points, mostly by manual clicking. The aim of this registration step is a coarse alignment, which can then be improved with existing algorithms for fine registration. The focus of this paper is to provide such a coarse registration in a fully automatic fashion, and without placing any target objects in the scene. The basic idea is to use virtual tie points generated by intersecting planar surfaces in the scene. Such planes are detected in the data with RANSAC and optimally fitted using least squares estimation. Due to the huge amount of recorded points, planes can be determined very accurately, resulting in well-defined tie points. Given two sets of potential tie points recovered in two different scans, registration is performed by searching for the assignment which preserves the geometric configuration of the largest possible subset of all tie points. Since exhaustive search over all possible assignments is intractable even for moderate numbers of points, the search is guided by matching individual pairs of tie points with the help of a novel descriptor based on the properties of a point's parent planes. Experiments show that the proposed method is able to successfully coarse register TLS point clouds without the need for artificial targets.
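
    The virtual-tie-point construction can be sketched in a few lines: three fitted planes with linearly independent normals meet in exactly one point, obtained by solving a 3x3 linear system (the plane parameters below are illustrative, not the authors' code):

```python
# Virtual tie point from three fitted planes, each given as (normal n, offset d)
# with the plane equation n . x = d; the intersection solves a 3x3 system.
import numpy as np

def plane_intersection(planes):
    """planes: list of three (n, d) pairs; returns the unique intersection point."""
    N = np.array([n for n, _ in planes], dtype=float)  # stack the three normals
    d = np.array([d for _, d in planes], dtype=float)
    return np.linalg.solve(N, d)   # unique if the normals are linearly independent

# A floor z = 0 and two walls x = 2 and y = -1 meet in a room corner:
p = plane_intersection([((0, 0, 1), 0.0), ((1, 0, 0), 2.0), ((0, 1, 0), -1.0)])
print(p)   # the corner point (2, -1, 0)
```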

  16. Automatic Registration of Terrestrial Laser Scanner Point Clouds Using Natural Planar Surfaces

    Science.gov (United States)

    Theiler, P. W.; Schindler, K.

    2012-07-01

    Terrestrial laser scanners have become a standard piece of surveying equipment, used in diverse fields like geomatics, manufacturing and medicine. However, the processing of today's large point clouds is time-consuming, cumbersome and not automated enough. A basic step of post-processing is the registration of scans from different viewpoints. At present this is still done using artificial targets or tie points, mostly by manual clicking. The aim of this registration step is a coarse alignment, which can then be improved with the existing algorithm for fine registration. The focus of this paper is to provide such a coarse registration in a fully automatic fashion, and without placing any target objects in the scene. The basic idea is to use virtual tie points generated by intersecting planar surfaces in the scene. Such planes are detected in the data with RANSAC and optimally fitted using least squares estimation. Due to the huge amount of recorded points, planes can be determined very accurately, resulting in well-defined tie points. Given two sets of potential tie points recovered in two different scans, registration is performed by searching for the assignment which preserves the geometric configuration of the largest possible subset of all tie points. Since exhaustive search over all possible assignments is intractable even for moderate numbers of points, the search is guided by matching individual pairs of tie points with the help of a novel descriptor based on the properties of a point's parent planes. Experiments show that the proposed method is able to successfully coarse register TLS point clouds without the need for artificial targets.

  17. Near-real-time regional troposphere models for the GNSS precise point positioning technique

    International Nuclear Information System (INIS)

    Hadas, T; Kaplon, J; Bosy, J; Sierny, J; Wilgan, K

    2013-01-01

    The GNSS precise point positioning (PPP) technique requires high-quality products (orbits and clocks), since their errors directly affect the quality of positioning. For real-time purposes it is possible to utilize ultra-rapid precise orbits and clocks which are disseminated through the Internet. In order to eliminate as many unknown parameters as possible, one may introduce external information on zenith troposphere delay (ZTD). It is desirable that the a priori model be accurate and reliable, especially for real-time application. One of the open problems in GNSS positioning is troposphere delay modelling on the basis of ground meteorological observations. The Institute of Geodesy and Geoinformatics of Wroclaw University of Environmental and Life Sciences (IGG WUELS) has developed two independent regional troposphere models for the territory of Poland. The first is estimated in near-real-time regime using GNSS data from a Polish ground-based augmentation system named ASG-EUPOS, established by the Polish Head Office of Geodesy and Cartography (GUGiK) in 2008. The second is based on meteorological parameters (temperature, pressure and humidity) gathered from various meteorological networks operating over the area of Poland and surrounding countries. This paper describes the methodology used to calculate and verify both models. It also presents results of applying various ZTD models in kinematic PPP in post-processing mode using the Bernese GPS Software. Positioning results were used to assess the quality of the developed models during changing weather conditions. Finally, the impact of model application on the precision, accuracy and convergence time of simulated real-time PPP is discussed. (paper)
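
    As one concrete example of an a priori troposphere component derived from surface meteorology (the classic Saastamoinen hydrostatic model, not the IGG WUELS regional models described above), the zenith hydrostatic delay can be computed from surface pressure, latitude, and height:

```python
# Saastamoinen (1972) zenith hydrostatic delay, a standard a priori ZTD component.
import math

def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
    """Zenith hydrostatic delay in metres from surface pressure, latitude, height."""
    f = 1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.00000028 * height_m
    return 0.0022768 * pressure_hpa / f

# Standard sea-level pressure at a mid-latitude station (illustrative values):
zhd = saastamoinen_zhd(1013.25, math.radians(52.0), 120.0)
print(f"{zhd:.3f} m")   # roughly the typical ~2.3 m hydrostatic delay
```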

  18. Estimated GFR Decline as a Surrogate End Point for Kidney Failure

    DEFF Research Database (Denmark)

    Lambers Heerspink, Hiddo J; Weldegiorgis, Misghina; Inker, Lesley A

    2014-01-01

    A doubling of serum creatinine value, corresponding to a 57% decline in estimated glomerular filtration rate (eGFR), is used frequently as a component of a composite kidney end point in clinical trials in type 2 diabetes. The aim of this study was to determine whether alternative end points defin...

  19. Defining Marriage: Classification, Interpretation, and Definitional Disputes

    Directory of Open Access Journals (Sweden)

    Fabrizio Macagno

    2016-09-01

    The classification of a state of affairs under a legal category can be considered as a kind of condensed decision that can be made explicit, analyzed, and assessed using argumentation schemes. In this paper, the controversial conflict of opinions concerning the nature of “marriage” in Obergefell v. Hodges is analyzed, pointing out the dialectical strategies used for addressing the interpretive doubts. The dispute about the same-sex couples’ right to marry hides a much deeper disagreement not only about what marriage is, but more importantly about the dialectical rules for defining it.

  20. Generalized Attractor Points in Gauged Supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Kachru, Shamit; /Stanford U., Phys. Dept. /SLAC; Kallosh, Renata; /Stanford U., Phys. Dept.; Shmakova, Marina; /KIPAC, Menlo Park /SLAC /Stanford U., Phys. Dept.

    2011-08-15

    The attractor mechanism governs the near-horizon geometry of extremal black holes in ungauged 4D N=2 supergravity theories and in Calabi-Yau compactifications of string theory. In this paper, we study a natural generalization of this mechanism to solutions of arbitrary 4D N=2 gauged supergravities. We define generalized attractor points as solutions of an ansatz which reduces the Einstein, gauge field, and scalar equations of motion to algebraic equations. The simplest generalized attractor geometries are characterized by non-vanishing constant anholonomy coefficients in an orthonormal frame. Basic examples include Lifshitz and Schroedinger solutions, as well as AdS and dS vacua. There is a generalized attractor potential whose critical points are the attractor points, and its extremization explains the algebraic nature of the equations governing both supersymmetric and non-supersymmetric attractors.

  1. Modelling point patterns with linear structures

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    2009-01-01

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line shaped structures. We...... consider simulations of this model and compare with real data....

  3. Indexing Moving Points

    DEFF Research Database (Denmark)

    Agarwal, Pankaj K.; Arge, Lars Allan; Erickson, Jeff

    2003-01-01

    We propose three indexing schemes for storing a set S of N points in the plane, each moving along a linear trajectory, so that any query of the following form can be answered quickly: Given a rectangle R and a real value t, report all K points of S that lie inside R at time t. We first present an...
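
    A brute-force O(N) scan makes the query semantics concrete and is the baseline any such indexing scheme must beat (the class and numbers below are illustrative, not from the paper):

```python
# Query: given rectangle R and time t, report the points of S inside R at time t,
# where each point moves linearly: p(t) = p0 + v * t.
from dataclasses import dataclass

@dataclass
class MovingPoint:
    x0: float; y0: float; vx: float; vy: float
    def at(self, t):
        return (self.x0 + self.vx * t, self.y0 + self.vy * t)

def query(points, rect, t):
    """rect = (xmin, ymin, xmax, ymax); the O(N) scan an index would accelerate."""
    xmin, ymin, xmax, ymax = rect
    return [p for p in points
            if xmin <= p.at(t)[0] <= xmax and ymin <= p.at(t)[1] <= ymax]

pts = [MovingPoint(0, 0, 1, 0), MovingPoint(5, 5, -1, -1)]
hits = query(pts, (3, -1, 6, 6), 4)   # only the first point is inside at t = 4
print(len(hits))
```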

  4. Research Network of Tehran Defined Population: Methodology and Establishment

    Directory of Open Access Journals (Sweden)

    Ali-Asghar Kolahi

    2015-12-01

    Background: We need a defined population for determining the prevalence and incidence of diseases, conducting interventional, cohort and longitudinal studies, calculating correct and timely public health indicators, assessing the actual health needs of the community, performing educational programs and interventions to promote healthy lifestyles, and enhancing the quality of primary health services. The objective of this project was to establish a defined population which is representative of Tehran, the capital of Iran. This article reports the methodology and establishment of the research network of the Tehran defined population. Methods: This project started by selecting two urban health centers from each of the five district health centers affiliated with Shahid Beheshti University of Medical Sciences in 2012. Inside each selected urban health center, one defined-population research station was established. Two new centers were added during 2013 and 2014. For the time being, the covered population of the network has reached 40000 individuals. The most important criterion for the defined population has been to be representative of the population of Tehran. For this, we selected two urban health centers from 12 of the 22 municipality districts and from each of the five different socioeconomic strata of Greater Tehran. Approximately 80000 individuals in the neighborhoods of each defined-population research station were considered as the control group of the project. Findings: In total, we selected 12 defined-population research stations; their covered populations together constitute a defined population which is representative of the population of Tehran. Conclusion: A population lab is now ready in the metropolitan area of Tehran.

  5. A New Virtual Point Detector Concept for a HPGe detector

    International Nuclear Information System (INIS)

    Byun, Jong In; Yun, Ju Yong

    2009-01-01

    Over the last several decades, radiation measurement and radioactivity analysis techniques using gamma detectors have been well established. In particular, the detection efficiency has been studied as an important part of gamma spectrometry. The detection efficiency depends strongly on the source-to-detector distance, and its variation with distance is a complex function of the geometry and physical characteristics of the detector. In order to simplify this relation, a virtual point detector concept was introduced by Notea. Recently, further studies concerning the virtual point detector have been performed. In previous works the virtual point detector has been considered as a fictitious point located behind the detector end cap. However, the virtual point detector positions for the front and side of voluminous detectors might differ because of their different effective central axes. In order to define the relation more accurately, we should therefore consider virtual point detectors for the front as well as the side and off-center regions of the detector. The aim of this study is to accurately define the relation between the detection efficiency and the source-to-detector distance using the virtual point detector. This paper demonstrates a method to situate the virtual point detectors for an HPGe detector. The new virtual point detector concept was introduced for three areas of the detector, and its characteristics were demonstrated using the Monte Carlo simulation method. We found that the detector has three virtual point detectors, corresponding to all areas except the rear. This shows that we should consider a virtual point detector for each area when applying the concept to radiation measurement. The concept can be applied to accurate geometric simplification of the detector and radioactive sources.
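
    The practical content of the virtual point detector concept is that, for a point-like response eps(d) = k / (d + d0)^2, the quantity 1/sqrt(eps) is linear in the source distance, so the offset d0 behind the end cap can be recovered by a straight-line fit. A synthetic sketch (the efficiencies below are invented, not measured HPGe data):

```python
# Recover the virtual point offset d0 from efficiency-vs-distance data by
# fitting 1/sqrt(eps) = (d + d0)/sqrt(k) as a straight line in d.
import numpy as np

d = np.array([5.0, 10.0, 15.0, 20.0, 30.0])   # source-to-end-cap distances (cm)
d0_true, k = 2.5, 400.0                       # synthetic "detector" parameters
eps = k / (d + d0_true) ** 2                  # simulated detection efficiencies

y = 1.0 / np.sqrt(eps)                        # linear in d by construction
slope, intercept = np.polyfit(d, y, 1)
d0_est = intercept / slope                    # fitted virtual point offset
print(f"virtual point offset = {d0_est:.2f} cm")
```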

  6. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In the proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and laser scanning data.
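
    The distance ingredient can be prototyped directly. The brute-force sketch below computes a mean nearest-neighbour distance from cloud points to points sampled on the model, a crude stand-in for the paper's weighted DistMC/DistCM (all coordinates invented):

```python
# Mean nearest-neighbour distance from a point cloud to model surface samples.
import numpy as np

def dist_cloud_to_samples(cloud, model_samples):
    """For each cloud point, distance to nearest model sample; return the mean."""
    diff = cloud[:, None, :] - model_samples[None, :, :]   # shape (Nc, Nm, 3)
    d = np.linalg.norm(diff, axis=2)                       # pairwise distances
    return d.min(axis=1).mean()

cloud = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.0]])
samples = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
d_cm = dist_cloud_to_samples(cloud, samples)
print(d_cm)   # (0.1 + 0.0) / 2 = 0.05
```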

  7. Malware Propagation and Prevention Model for Time-Varying Community Networks within Software Defined Networks

    Directory of Open Access Journals (Sweden)

    Lan Liu

    2017-01-01

    As the adoption of Software Defined Networks (SDNs) grows, the security of SDN still has several unaddressed limitations. A key network security research area is the study of malware propagation across SDN-enabled networks. To analyze the spreading processes of network malware (e.g., viruses) in SDN, we propose a dynamic model with a time-varying community network, inspired by research models on the spread of epidemics in complex networks across communities. We treat subnets of the network as communities, with links that are dense within subnets but sparse between them. Using numerical simulation and theoretical analysis, we find that the efficiency of network malware propagation in this model depends on the mobility rate q of the nodes between subnets. We also find that there exists a mobility rate threshold qc. The network malware will spread in the SDN when the mobility rate q > qc; the malware will survive when q > qc and perish when q < qc.
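
    A toy simulation conveys the role of the mobility rate q. This is an invented SI-style model, far simpler than the paper's: infection only crosses between the two subnets when a node migrates, so higher q lets the malware escape its original subnet.

```python
# Toy SI spread over two subnets: contagion acts only within a node's current
# subnet, and each node migrates between subnets with probability q per step.
import random

def spread(q, steps=200, n=50, beta=0.2, seed=1):
    random.seed(seed)
    community = [i % 2 for i in range(n)]   # two equal subnets
    infected = {0}                          # patient zero in subnet 0
    for _ in range(steps):
        for i in range(n):
            if random.random() < q:         # node migrates to the other subnet
                community[i] ^= 1
        for i in list(infected):            # snapshot: new cases infect next step
            for j in range(n):
                if j not in infected and community[j] == community[i] \
                        and random.random() < beta / n:
                    infected.add(j)
    return len(infected)

# With q = 0 the malware stays confined to subnet 0; mobility lets it spread.
print(spread(q=0.0), spread(q=0.3))
```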

  8. Green Grape Detection and Picking-Point Calculation in a Night-Time Natural Environment Using a Charge-Coupled Device (CCD Vision Sensor with Artificial Illumination

    Directory of Open Access Journals (Sweden)

    Juntao Xiong

    2018-03-01

    Night-time fruit-picking technology is important to picking robots. This paper proposes a method of night-time detection and picking-point positioning for green grape-picking robots, to solve the difficult problem of green grape detection and picking in night-time conditions with artificial lighting systems. Taking a representative green grape named Centennial Seedless as the research object, daytime and night-time grape images were captured by a custom-designed visual system. Detection was conducted in the following steps: (1) The RGB (red, green and blue) color model was determined for night-time green grape detection through analysis of the color features of grape images under daytime natural light and night-time artificial lighting; the R component of the RGB color model was rotated and the image resolution was compressed. (2) The improved Chan–Vese (C–V) level set model and a morphological processing method were used to remove the background of the image, leaving the grape fruit. (3) Based on the grapes' vertical suspension, combining the principle of the minimum circumscribed rectangle of the fruit and the Hough straight-line detection method, a straight line was fitted to the fruit stem, and the picking point was calculated on the stem where the angle between the fitted line and the vertical was less than 15°. The visual detection experiments showed that the accuracy of grape fruit detection was 91.67% and the average running time of the proposed algorithm was 0.46 s. The picking-point calculation experiments showed that the highest accuracy for the picking-point calculation was 92.5%, while the lowest was 80%. The results demonstrate that the proposed method of night-time green grape detection and picking-point calculation can provide technical support to grape-picking robots.
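
    Step (3) reduces to fitting a line to stem pixels and checking its angle against the vertical. A minimal sketch with invented pixel coordinates, using simple least squares in place of the Hough transform:

```python
# Fit the stem as x = a*y + b (stems hang near-vertically in image coordinates)
# and accept the picking line only if it is within 15 degrees of vertical.
import math
import numpy as np

def stem_angle_from_vertical(xs, ys):
    """Least-squares fit x = a*y + b; return the fitted line's angle to vertical."""
    a, b = np.polyfit(ys, xs, 1)
    return math.degrees(math.atan(abs(a)))

# Candidate stem pixels: x barely drifts as y increases (nearly vertical).
xs = np.array([100.0, 100.5, 101.0, 101.6])
ys = np.array([10.0, 20.0, 30.0, 40.0])
angle = stem_angle_from_vertical(xs, ys)
print(f"{angle:.1f} deg from vertical -> {'accept' if angle < 15 else 'reject'}")
```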

  9. Detecting change-points in extremes

    KAUST Repository

    Dupuis, D. J.

    2015-01-01

    Even though most work on change-point estimation focuses on changes in the mean, changes in the variance or in the tail distribution can lead to more extreme events. In this paper, we develop a new method of detecting and estimating the change-points in the tail of multiple time series data. In addition, we adapt existing tail change-point detection methods to our specific problem and conduct a thorough comparison of different methods in terms of performance on the estimation of change-points and computational time. We also examine three locations on the U.S. northeast coast and demonstrate that the methods are useful for identifying changes in seasonally extreme warm temperatures.

  10. Textural features and SUV-based variables assessed by dual time point 18F-FDG PET/CT in locally advanced breast cancer.

    Science.gov (United States)

    Garcia-Vicente, Ana María; Molina, David; Pérez-Beteta, Julián; Amo-Salas, Mariano; Martínez-González, Alicia; Bueno, Gloria; Tello-Galán, María Jesús; Soriano-Castrejón, Ángel

    2017-12-01

    To study the influence of dual time point 18F-FDG PET/CT on textural features and SUV-based variables, and the relations among them. Fifty-six patients with locally advanced breast cancer (LABC) were prospectively included. All of them underwent a standard 18F-FDG PET/CT (PET-1) and a delayed acquisition (PET-2). After segmentation, SUV variables (SUVmax, SUVmean, and SUVpeak), metabolic tumor volume (MTV), and total lesion glycolysis (TLG) were obtained. Eighteen three-dimensional (3D) textural measures were computed, including run-length matrix (RLM) features, co-occurrence matrix (CM) features, and energies. Differences between all PET-derived variables obtained in PET-1 and PET-2 were studied. Significant differences were found between the SUV-based parameters and MTV obtained at the two time points, with higher values of the SUV-based variables and lower MTV in PET-2 with respect to PET-1. Among the textural parameters obtained in the dual time point acquisition, significant differences were found for short run emphasis, low gray-level run emphasis, short run high gray-level emphasis, run percentage, long run emphasis, gray-level non-uniformity, homogeneity, and dissimilarity. Textural variables showed relations with MTV and TLG. Significant differences in textural features were found in dual time point 18F-FDG PET/CT. Thus, dynamic behavior of metabolic characteristics should be expected, with higher heterogeneity in the delayed PET acquisition compared with the standard PET. Greater heterogeneity was found in bigger tumors.
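
    A common dual-time-point summary (not necessarily the statistic used in this study) is the retention index: the percent SUV change from the standard to the delayed acquisition.

```python
# Retention index between an early (PET-1) and a delayed (PET-2) acquisition.
def retention_index(suv_early, suv_delayed):
    """Percent change of SUV from the early to the delayed time point."""
    return 100.0 * (suv_delayed - suv_early) / suv_early

print(retention_index(5.0, 6.5))   # SUVmax rising from 5.0 to 6.5 prints 30.0
```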

  11. On the Fibration Defined by the Field Lines of a Knotted Class of Electromagnetic Fields at a Particular Time

    Directory of Open Access Journals (Sweden)

    Manuel Arrayás

    2017-10-01

    A class of vacuum electromagnetic fields in which the field lines are knotted curves is reviewed. The class is obtained from two complex functions at a particular instant t = 0, so the fields inherit the topological properties of the level curves of these functions. We study the complete topological structure defined by the magnetic and electric field lines at t = 0. This structure is not conserved in time in general, although it is possible to find special cases in which the field lines are topologically equivalent for every value of t.

  12. How Do Users Map Points Between Dissimilar Shapes?

    KAUST Repository

    Hecher, Michael

    2017-07-25

    Finding similar points in globally or locally similar shapes has been studied extensively through the use of various point descriptors or shape-matching methods. However, little work exists on finding similar points in dissimilar shapes. In this paper, we present the results of a study where users were given two dissimilar two-dimensional shapes and asked to map a given point in the first shape to the point in the second shape they consider most similar. We find that user mappings in this study correlate strongly with simple geometric relationships between points and shapes. To predict the probability distribution of user mappings between any pair of simple two-dimensional shapes, two distinct statistical models are defined using these relationships. We perform a thorough validation of the accuracy of these predictions and compare our models qualitatively and quantitatively to well-known shape-matching methods. Using our predictive models, we propose an approach to map objects or procedural content between different shapes in different design scenarios.

  13. Improving NPT safeguards. Particularly at the natural uranium starting point

    International Nuclear Information System (INIS)

    Harry, J.; Klerk, P. de

    1996-03-01

    According to the Non-Proliferation Treaty (NPT) all nuclear material is subject to safeguards, but according to INFCIRC/153 the full range of safeguards is only applied beyond the 'starting point of safeguards', that is: the point at which nuclear material has reached a composition and purity suitable for fuel fabrication or enrichment. This paper addresses two questions: (a) is the starting point adequately defined, and (b) what measures could be applied to nuclear material before the starting point? These questions have been asked before; some of the answers in this paper are new. (orig.)

  14. Improving NPT safeguards. Particularly at the natural uranium starting point

    Energy Technology Data Exchange (ETDEWEB)

    Harry, J. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Klerk, P. de [Ministry of Foreign Affairs, The Hague (Netherlands)

    1996-03-01

    According to the Non-Proliferation Treaty (NPT) all nuclear material is subject to safeguards, but according to INFCIRC/153 the full range of safeguards is only applied beyond the 'starting point of safeguards', that is: the point at which nuclear material has reached a composition and purity suitable for fuel fabrication or enrichment. This paper addresses two questions: (a) is the starting point adequately defined, and (b) what measures could be applied to nuclear material before the starting point? These questions have been asked before; some of the answers in this paper are new. (orig.)

  15. A simple method for regional cerebral blood flow measurement by one-point arterial blood sampling and 123I-IMP microsphere model (part 2). A study of time correction of one-point blood sample count

    International Nuclear Information System (INIS)

    Masuda, Yasuhiko; Makino, Kenichi; Gotoh, Satoshi

    1999-01-01

    In our previous paper on determination of regional cerebral blood flow (rCBF) using the 123I-IMP microsphere model, we reported that the accuracy of determining the integrated value of the input function from one-point arterial blood sampling can be increased by performing a correction using the 5 min:29 min ratio for the whole-brain count. However, failure to carry out the arterial blood collection at exactly 5 minutes after 123I-IMP injection causes errors with this method, so there is a time limitation. We have now revised our method so that the one-point arterial blood sampling can be performed at any time between 5 and 20 minutes after 123I-IMP injection, with the addition of a correction step for the sampling time. This revised method permits more accurate estimation of the integral of the input function. The method was then applied to 174 experimental subjects: one-point blood samples were collected at random times between 5 and 20 minutes, and the estimated values for the continuous arterial octanol extraction count (COC) were determined. The mean error rate between the COC and the actually measured continuous arterial octanol extraction count (OC) was 3.6%, and the standard deviation was 12.7%. Accordingly, in 70% of the cases the rCBF could be estimated within an error rate of 13%, while in 95% of the cases estimation was possible within an error rate of 25%. This improved method is a simple technique for determination of rCBF by the 123I-IMP microsphere model and one-point arterial blood sampling which no longer has a time limitation and does not require an octanol extraction step. (author)

  16. A new integrated dual time-point amyloid PET/MRI data analysis method

    International Nuclear Information System (INIS)

    Cecchin, Diego; Zucchetta, Pietro; Turco, Paolo; Bui, Franco; Barthel, Henryk; Tiepolt, Solveig; Sabri, Osama; Poggiali, Davide; Cagnin, Annachiara; Gallo, Paolo; Frigo, Anna Chiara

    2017-01-01

    In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration, particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid (18F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative 'dual time-point' indexes, were obtained. Subjects classified visually as amyloid-positive showed sparse tracer uptake in the primary sensory, motor and visual areas, in accordance with the isocortical stage of the topographic distribution of amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques and that probably represented early accumulation (Braak stage A) typical of normal ageing. There was a strong correlation between age

  17. A new integrated dual time-point amyloid PET/MRI data analysis method

    Energy Technology Data Exchange (ETDEWEB)

    Cecchin, Diego; Zucchetta, Pietro; Turco, Paolo; Bui, Franco [University Hospital of Padua, Nuclear Medicine Unit, Department of Medicine - DIMED, Padua (Italy); Barthel, Henryk; Tiepolt, Solveig; Sabri, Osama [Leipzig University, Department of Nuclear Medicine, Leipzig (Germany); Poggiali, Davide; Cagnin, Annachiara; Gallo, Paolo [University Hospital of Padua, Neurology, Department of Neurosciences (DNS), Padua (Italy); Frigo, Anna Chiara [University Hospital of Padua, Biostatistics, Epidemiology and Public Health Unit, Department of Cardiac, Thoracic and Vascular Sciences, Padua (Italy)

    2017-11-15

    In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid ({sup 18}F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative ''dual time-point'' indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. There was a strong correlation between

  18. Evaluating Diagnostic Point-of-Care Tests in Resource-Limited Settings

    Science.gov (United States)

    Drain, Paul K; Hyle, Emily P; Noubary, Farzad; Freedberg, Kenneth A; Wilson, Douglas; Bishai, William; Rodriguez, William; Bassett, Ingrid V

    2014-01-01

    Diagnostic point-of-care (POC) testing is intended to minimize the time to obtain a test result, thereby allowing clinicians and patients to make an expeditious clinical decision. As POC tests expand into resource-limited settings (RLS), the benefits must outweigh the costs. To optimize POC testing in RLS, diagnostic POC tests need rigorous evaluations focused on relevant clinical outcomes and operational costs, which differ from evaluations of conventional diagnostic tests. Here, we reviewed published studies on POC testing in RLS, and found no clearly defined metric for the clinical utility of POC testing. Therefore, we propose a framework for evaluating POC tests, and suggest and define the term “test efficacy” to describe a diagnostic test’s capacity to support a clinical decision within its operational context. We also propose revised criteria for an ideal diagnostic POC test in resource-limited settings. Through systematic evaluations, comparisons between centralized diagnostic testing and novel POC technologies can be more formalized, and health officials can better determine which POC technologies represent valuable additions to their clinical programs. PMID:24332389

  19. Flat Coalgebraic Fixed Point Logics

    Science.gov (United States)

    Schröder, Lutz; Venema, Yde

    Fixed point logics are widely used in computer science, in particular in artificial intelligence and concurrency. The most expressive logics of this type are the μ-calculus and its relatives. However, popular fixed point logics tend to trade expressivity for simplicity and readability, and in fact often live within the single variable fragment of the μ-calculus. The family of such flat fixed point logics includes, e.g., CTL, the *-nesting-free fragment of PDL, and the logic of common knowledge. Here, we extend this notion to the generic semantic framework of coalgebraic logic, thus covering a wide range of logics beyond the standard μ-calculus including, e.g., flat fragments of the graded μ-calculus and the alternating-time μ-calculus (such as ATL), as well as probabilistic and monotone fixed point logics. Our main results are completeness of the Kozen-Park axiomatization and a timed-out tableaux method that matches ExpTime upper bounds inherited from the coalgebraic μ-calculus but avoids using automata.
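
    The flat fixed point logics discussed above are interpreted via least fixpoints, which on a finite model can be computed by simple Kleene iteration. The sketch below is not from the paper; it computes the states satisfying the CTL formula EF p, i.e. the least fixpoint of Z = p ∪ pre(Z), on a small invented Kripke structure:

```python
# Least-fixpoint semantics of the CTL formula EF p ("p is reachable"),
# computed by Kleene iteration on a small, hypothetical Kripke structure.

def ef(states, transitions, p_states):
    """States satisfying EF p: least fixpoint of Z = p_states ∪ pre(Z)."""
    z = set()
    while True:
        # pre(Z): states with at least one successor already in Z
        pre_z = {s for s in states if transitions.get(s, set()) & z}
        new_z = set(p_states) | pre_z
        if new_z == z:          # fixpoint reached
            return z
        z = new_z

# Toy model: 0 -> 1 -> 2, state 3 is isolated; p holds only in state 2.
states = {0, 1, 2, 3}
transitions = {0: {1}, 1: {2}, 2: set()}
print(ef(states, transitions, {2}))  # states from which p is reachable: {0, 1, 2}
```

    Since the state space is finite and the operator is monotone, the iteration reaches the least fixpoint after at most |states| rounds.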

  20. Application of point system in the project control of Ling'ao Nuclear Power Station

    International Nuclear Information System (INIS)

    Xie Ahai

    2005-01-01

    Schedule control and cost control are complicated issues even when detailed schedules and engineering measurement requirements have been established for the erection of a nuclear power project. To address these problems, a Point System was used in the Ling Ao (LA) Nuclear Power Project; this paper introduces the method. The Point System is a measurement system for workload, and the measurement unit for all erection work is the Point. A Point of workload is defined as the equivalent measured quantity that a suitably skilled worker can complete within one hour. A set of procedure manuals for the different installation disciplines has been established, and the calculation models for equipment installation, piping and cabling are given as examples in the paper. The application of the Point System to schedule control is shown, highlighting the following uses: defining the duration of a piping activity in the Project Level 2 Schedule, drafting Point Schedule curves for the different erection fields, analyzing productive efficiency, defining the monthly erection quota for each erection team, and following up erection progress on site. The application of the Point System to payments under the erection contract is also outlined, and the calculation formula for a monthly payment is given. The advantages of this payment calculation method are discussed: it is more accurate, it makes the measured quantities completed on site easy to check, and it helps control the lump-sum cost. (authors)
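
    As a rough illustration of the workload measure described above (one Point = the work a skilled worker completes in one hour), the following sketch computes a monthly progress payment and a productivity figure. The unit rate, point counts and function names are invented, since the abstract does not reproduce the paper's actual formula:

```python
# Hypothetical sketch of a point-based progress payment. The rate and the
# counts below are invented for illustration; the paper's actual monthly
# payment formula is not reproduced in the abstract.

def monthly_payment(points_completed, rate_per_point):
    """Payment for the month: measured points x contractual unit rate."""
    return points_completed * rate_per_point

def productivity(points_completed, worker_hours):
    """Points earned per worker-hour; 1.0 means work proceeds at quota."""
    return points_completed / worker_hours

points = 1200   # points measured on site this month (assumed)
rate = 35.0     # contractual price per point, in arbitrary currency (assumed)
hours = 1500    # worker-hours actually spent (assumed)

print(monthly_payment(points, rate))           # 42000.0
print(round(productivity(points, hours), 2))   # 0.8
```

    Because a Point is defined against a one-hour quota, a productivity below 1.0 immediately flags teams falling behind the planned pace.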

  1. Dynamics and mission design near libration points

    CERN Document Server

    Gómez, G; Jorba, A; Masdemont, J

    2001-01-01

    The aim of this book is to explain, analyze and compute the kinds of motions that appear in an extended vicinity of the geometrically defined equilateral points of the Earth-Moon system, as a source of possible nominal orbits for future space missions. The methodology developed here is not specific to astrodynamics problems. The techniques are developed in such a way that they can be used to study problems that can be modeled by dynamical systems. Contents: Global Stability Zones Around the Triangular Libration Points; The Normal Form Around L 5 in the Three-dimensional RTBP; Normal Form of th

  2. Influence of call broadcast timing within point counts and survey duration on detection probability of marsh breeding birds

    Directory of Open Access Journals (Sweden)

    Douglas C. Tozer

    2017-12-01

    Full Text Available The Standardized North American Marsh Bird Monitoring Protocol recommends point counts consisting of a 5-min passive observation period, meant to be free of broadcast bias, followed by call broadcasts to entice elusive species to reveal their presence. Prior to this protocol, some monitoring programs used point counts with broadcasts during the first 5 min of 10-min counts, and have since used 15-min counts with an initial 5-min passive period (P1) followed by 5 min of broadcasts (B) and a second 5-min passive period (P2) to ensure consistency across years and programs. The influence of broadcast timing within point counts and of point count duration, however, has rarely been assessed. Using data from 23,973 broadcast-assisted 15-min point counts conducted throughout the Great Lakes-St. Lawrence region between 2008 and 2016 by Bird Studies Canada's Marsh Monitoring Program and Central Michigan University's Great Lakes Coastal Wetland Monitoring Program, we estimated detection probabilities of individuals of 14 marsh breeding bird species during P1B compared to BP2, P1 compared to P2, and P1B compared to P1BP2. For six broadcast species and American Bittern (Botaurus lentiginosus), we found no significant difference in detection during P1B compared to BP2, and no significant difference for four of the same seven species during P1 compared to P2. We observed small but significant differences in detection for 7 of 14 species during P1B compared to P1BP2. We conclude that differences in the timing of broadcasts cause no bias based on counts from entire 10-min surveys, although P1B should be favored over BP2 because the same amount of effort in P1B avoids broadcast bias for all broadcast species, and 10-min surveys are superior to 15-min surveys because the modest gains in detection of some species do not warrant the additional effort.
We recommend point counts consisting of 5 min of passive observation followed by broadcasts, consistent with the standardized

  3. Imaging study on acupuncture points

    Science.gov (United States)

    Yan, X. H.; Zhang, X. Y.; Liu, C. L.; Dang, R. S.; Ando, M.; Sugiyama, H.; Chen, H. S.; Ding, G. H.

    2009-09-01

    The topographic structures of acupuncture points were investigated by using the synchrotron radiation based Dark Field Image (DFI) method. The following four acupuncture points were studied: Sanyinjiao, Neiguan, Zusanli and Tianshu. We found that micro-vessels accumulate at the acupuncture point regions. Images taken in the surrounding tissue away from the acupuncture points do not show this kind of structure. This is the first time the specific structure of acupuncture points has been revealed directly by X-ray imaging.

  4. Imaging study on acupuncture points

    International Nuclear Information System (INIS)

    Yan, X H; Zhang, X Y; Liu, C L; Dang, R S; Ando, M; Sugiyama, H; Chen, H S; Ding, G H

    2009-01-01

    The topographic structures of acupuncture points were investigated by using the synchrotron radiation based Dark Field Image (DFI) method. The following four acupuncture points were studied: Sanyinjiao, Neiguan, Zusanli and Tianshu. We found that micro-vessels accumulate at the acupuncture point regions. Images taken in the surrounding tissue away from the acupuncture points do not show this kind of structure. This is the first time the specific structure of acupuncture points has been revealed directly by X-ray imaging.

  5. [Definition of ashi point from the view of linguistics].

    Science.gov (United States)

    Jiang, Shan; Zhao, Jingsheng

    2017-01-12

    The definition of the ashi point has not yet been unified, and a precise explanation of its connotation has long been an elusive question in acupuncture theory. By collecting the diverse definitions of the ashi point given in Acupuncture and Moxibustion textbooks, dictionaries and term standards, several rational elements from the consensus definitions were screened. With the assistance of two important theories of cognitive linguistics, figure-ground theory and distance iconicity theory, the concept of the ashi point was newly defined. Additionally, based on the understanding of several similar terms such as "taking the painful site as acupoint", "responding point" and "reactive point", the semanteme analytic method was used to distinguish among them, allowing a more profound exploration of acupuncture therapy.

  6. Time dependence of the field energy densities surrounding sources: Application to scalar mesons near point sources and to electromagnetic fields near molecules

    International Nuclear Information System (INIS)

    Persico, F.; Power, E.A.

    1987-01-01

    The time dependence of the dressing-undressing process, i.e., the acquiring or losing by a source of a boson field intensity and hence of a field energy density in its neighborhood, is considered by examining some simple soluble models. First, the loss of the virtual field is followed in time when a point source is suddenly decoupled from a neutral scalar meson field. Second, an initially bare point source acquires a virtual meson cloud as the coupling is switched on. The third example is that of an initially bare molecule interacting with the vacuum of the electromagnetic field to acquire a virtual photon cloud. In all three cases the dressing-undressing is shown to take place within an expanding sphere of radius r = ct centered at the source. At each point in space the energy density tends, for large times, to that of the ground state of the total system. Differences in the time dependence of the dressing between the massive scalar field and the massless electromagnetic field are discussed. The results are also briefly discussed in the light of Feinberg's ideas on the nature of half-dressed states in quantum field theory

  7. Defining a National Web Sphere over time from the Perspectives of Collection, Technology and Scholarship

    DEFF Research Database (Denmark)

    Zierau, Eld; Brügger, Niels; Moesgaard, Jakob

    This paper describes a framework supporting the definition of how to automatically identify national webpages outside a country's top-level domain. The framework aims at a definition that can be put into operation in order to make automatic detection of national web pages possible. At the same time...... the framework aims at a definition that can be reused independently of changed behaviours on the net, changes in jurisdiction and changes in technology. A crucial point in this framework is that the perspectives of collection, technology and scholarship are all present in decision making. The framework originates from...... harvests from the Danish national web archive, Netarkivet. However, in both cases a definition of national webpages was needed; thus the creation of the framework was a prerequisite for the rest of this study. The motivation for the study and framework is based on the fact that human communication activities...

  8. Terahertz time domain interferometry of a SIS tunnel junction and a quantum point contact

    Energy Technology Data Exchange (ETDEWEB)

    Karadi, Chandu [Univ. of California, Berkeley, CA (United States). Dept. of Physics

    1995-09-01

    The author has applied the Terahertz Time Domain Interferometric (THz-TDI) technique to probe the ultrafast dynamic response of a Superconducting-Insulating-Superconducting (SIS) tunnel junction and a Quantum Point Contact (QPC). The THz-TDI technique involves monitoring changes in the dc current induced by interfering two picosecond electrical pulses on the junction as a function of time delay between them. Measurements of the response of the Nb/AlO{sub x}/Nb SIS tunnel junction from 75--200 GHz are in full agreement with the linear theory for photon-assisted tunneling. Likewise, measurements of the induced current in a QPC as a function of source-drain voltage, gate voltage, frequency, and magnetic field also show strong evidence for photon-assisted transport. These experiments together demonstrate the general applicability of the THz-TDI technique to the characterization of the dynamic response of any micron or nanometer scale device that exhibits a non-linear I-V characteristic.

  9. Terahertz time domain interferometry of a SIS tunnel junction and a quantum point contact

    International Nuclear Information System (INIS)

    Karadi, C.; Lawrence Berkeley Lab., CA

    1995-09-01

    The author has applied the Terahertz Time Domain Interferometric (THz-TDI) technique to probe the ultrafast dynamic response of a Superconducting-Insulating-Superconducting (SIS) tunnel junction and a Quantum Point Contact (QPC). The THz-TDI technique involves monitoring changes in the dc current induced by interfering two picosecond electrical pulses on the junction as a function of time delay between them. Measurements of the response of the Nb/AlO{sub x}/Nb SIS tunnel junction from 75--200 GHz are in full agreement with the linear theory for photon-assisted tunneling. Likewise, measurements of the induced current in a QPC as a function of source-drain voltage, gate voltage, frequency, and magnetic field also show strong evidence for photon-assisted transport. These experiments together demonstrate the general applicability of the THz-TDI technique to the characterization of the dynamic response of any micron or nanometer scale device that exhibits a non-linear I-V characteristic. 133 refs., 49 figs

  10. Defining progression in nonmuscle invasive bladder cancer: it is time for a new, standard definition.

    Science.gov (United States)

    Lamm, Donald; Persad, Raj; Brausi, Maurizio; Buckley, Roger; Witjes, J Alfred; Palou, Joan; Böhle, Andreas; Kamat, Ashish M; Colombel, Marc; Soloway, Mark

    2014-01-01

    Despite being one of the most important clinical outcomes in nonmuscle invasive bladder cancer, there is currently no standard definition of disease progression. Major clinical trials and meta-analyses have used varying definitions or have failed to define this end point altogether. A standard definition of nonmuscle invasive bladder cancer progression as determined by reproducible and reliable procedures is needed. We examine current definitions of nonmuscle invasive bladder cancer progression, and propose a new definition that will be more clinically useful in determining patient prognosis and comparing treatment options. The IBCG (International Bladder Cancer Group) analyzed published clinical trials and meta-analyses that examined nonmuscle invasive bladder cancer progression as of December 2012. The limitations of the definitions of progression used in these trials were considered, as were additional parameters associated with the advancement of nonmuscle invasive bladder cancer. The most commonly used definition of nonmuscle invasive bladder cancer progression is an increase in stage from nonmuscle invasive to muscle invasive disease. Although this definition is clinically important, it fails to include other important parameters of advancing disease such as progression to lamina propria invasion and increase in grade. The IBCG proposes the definition of nonmuscle invasive bladder cancer progression as an increase in T stage from CIS or Ta to T1 (lamina propria invasion), development of T2 or greater or lymph node (N+) disease or distant metastasis (M1), or an increase in grade from low to high. Investigators should consider the use of this new definition to help standardize protocols and improve the reporting of progression. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  11. Parity-Time Symmetry and the Toy Models of Gain-Loss Dynamics near the Real Kato's Exceptional Points

    Czech Academy of Sciences Publication Activity Database

    Znojil, Miloslav

    2016-01-01

    Roč. 8, č. 6 (2016), s. 52 ISSN 2073-8994 R&D Projects: GA ČR GA16-22945S Institutional support: RVO:61389005 Keywords : parity-time symmetry * Schrodinger equation * physical Hilbert space * inner-product metric operator * real exceptional points * solvable models * quantum Big Bang * quantum Inflation period Subject RIV: BE - Theoretical Physics Impact factor: 1.457, year: 2016

  12. Application of the nudged elastic band method to the point-to-point radio wave ray tracing in IRI modeled ionosphere

    Science.gov (United States)

    Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.

    2017-07-01

    Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods, in which some trajectory is transformed into an optimal one, are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
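
    The NEB procedure described above can be sketched as follows: a chain of images between fixed endpoints is relaxed under the transverse component of the true force plus the parallel component of the spring force. The cost surface and all parameters below are invented for illustration and are far simpler than the IRI-based ionosphere of the study:

```python
import numpy as np

# Minimal Nudged Elastic Band (NEB) sketch: a chain of points between two
# fixed endpoints relaxes toward an optimal path on a 2-D cost surface.
# The surface n(x, y) stands in for a refractive-index / optical-path model
# and is invented here, not taken from the paper.

def n(p):                              # smooth 2-D cost surface (assumed)
    x, y = p
    return 1.0 + 0.5 * np.exp(-((x - 0.5) ** 2 + y ** 2) / 0.1)

def grad(p, h=1e-5):                   # finite-difference gradient of n
    gx = (n(p + [h, 0.0]) - n(p - [h, 0.0])) / (2 * h)
    gy = (n(p + [0.0, h]) - n(p - [0.0, h])) / (2 * h)
    return np.array([gx, gy])

def neb(start, end, n_images=12, k=5.0, step=0.01, iters=500):
    path = np.linspace(start, end, n_images)      # initial straight chain
    for _ in range(iters):
        for i in range(1, n_images - 1):          # endpoints stay fixed
            tau = path[i + 1] - path[i - 1]
            tau /= np.linalg.norm(tau)            # local tangent estimate
            g = grad(path[i])
            f_perp = -(g - g.dot(tau) * tau)      # true force, transverse part
            spring = k * (path[i + 1] + path[i - 1] - 2 * path[i])
            f_par = spring.dot(tau) * tau         # spring force, parallel part
            path[i] = path[i] + step * (f_perp + f_par)
    return path

path = neb(np.array([0.0, -1.0]), np.array([1.0, 1.0]))
print(path[0], path[-1])   # endpoints are unchanged by construction
```

    The transverse/parallel force split is what keeps the springs from pulling the path off the optimum while still distributing the points evenly along the ray.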

  13. The goal of ape pointing.

    Science.gov (United States)

    Halina, Marta; Liebal, Katja; Tomasello, Michael

    2018-01-01

    Captive great apes regularly use pointing gestures in their interactions with humans. However, the precise function of this gesture is unknown. One possibility is that apes use pointing primarily to direct attention (as in "please look at that"); another is that they point mainly as an action request (such as "can you give that to me?"). We investigated these two possibilities here by examining how the looking behavior of recipients affects pointing in chimpanzees (Pan troglodytes) and bonobos (Pan paniscus). Upon pointing to food, subjects were faced with a recipient who either looked at the indicated object (successful-look) or failed to look at the indicated object (failed-look). We predicted that, if apes point primarily to direct attention, subjects would spend more time pointing in the failed-look condition because the goal of their gesture had not been met. Alternatively, we expected that, if apes point primarily to request an object, subjects would not differ in their pointing behavior between the successful-look and failed-look conditions because these conditions differed only in the looking behavior of the recipient. We found that subjects did differ in their pointing behavior across the successful-look and failed-look conditions, but contrary to our prediction subjects spent more time pointing in the successful-look condition. These results suggest that apes are sensitive to the attentional states of gestural recipients, but their adjustments are aimed at multiple goals. We also found a greater number of individuals with a strong right-hand than left-hand preference for pointing.

  14. Dual-time-point O-(2-[{sup 18}F]fluoroethyl)-L-tyrosine PET for grading of cerebral gliomas

    Energy Technology Data Exchange (ETDEWEB)

    Lohmann, Philipp; Herzog, Hans; Rota Kops, Elena; Stoffels, Gabriele; Judov, Natalie; Filss, Christian; Tellmann, Lutz [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); Galldiks, Norbert [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); University of Cologne, Department of Neurology, Cologne (Germany); Weiss, Carolin [University of Cologne, Department of Neurosurgery, Cologne (Germany); Sabel, Michael [Heinrich-Heine University, Department of Neurosurgery, Duesseldorf (Germany); Coenen, Heinz Hubert [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); Juelich-Aachen Research Alliance (JARA) - Section JARA-Brain, Juelich (Germany); Shah, Nadim Jon [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); Juelich-Aachen Research Alliance (JARA) - Section JARA-Brain, Juelich (Germany); RWTH Aachen University Hospital, Department of Neurology, Aachen (Germany); Langen, Karl-Josef [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); RWTH Aachen University Hospital, Department of Nuclear Medicine, Aachen (Germany); Juelich-Aachen Research Alliance (JARA) - Section JARA-Brain, Juelich (Germany)

    2015-10-15

    We aimed to evaluate the diagnostic potential of dual-time-point imaging with positron emission tomography (PET) using O-(2-[{sup 18}F]fluoroethyl)-L-tyrosine ({sup 18}F-FET) for non-invasive grading of cerebral gliomas compared with a dynamic approach. Thirty-six patients with histologically confirmed cerebral gliomas (21 primary, 15 recurrent; 24 high-grade, 12 low-grade) underwent dynamic PET from 0 to 50 min post-injection (p.i.) of {sup 18}F-FET, and additionally from 70 to 90 min p.i. Mean tumour-to-brain ratios (TBR{sub mean}) of {sup 18}F-FET uptake were determined in early (20-40 min p.i.) and late (70-90 min p.i.) examinations. Time-activity curves (TAC) of the tumours from 0 to 50 min after injection were assigned to different patterns. The diagnostic accuracy of changes of {sup 18}F-FET uptake between early and late examinations for tumour grading was compared to that of curve pattern analysis from 0 to 50 min p.i. of {sup 18}F-FET. The diagnostic accuracy of changes of the TBR{sub mean} of {sup 18}F-FET PET uptake between early and late examinations for the identification of high-grade gliomas (HGG) was 81 % (sensitivity 83 %; specificity 75 %; cutoff -8 %; p < 0.001), and 83 % for curve pattern analysis (sensitivity 88 %; specificity 75 %; p < 0.001). Dual-time-point imaging of {sup 18}F-FET uptake in gliomas achieves diagnostic accuracy for tumour grading that is similar to the more time-consuming dynamic data acquisition protocol. (orig.)
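
    The dual-time-point criterion described above reduces to a percent change of TBR{sub mean} between the early and late scans. A minimal sketch follows, using the -8 % cutoff reported in the abstract; the direction of the cutoff is our reading of the abstract, and the numbers are invented:

```python
# Percent change of the mean tumour-to-brain ratio (TBRmean) between the
# early (20-40 min) and late (70-90 min) examinations, with the -8 % cutoff
# reported in the abstract. Treating values at or below the cutoff as
# suggestive of high grade is our interpretation, not a clinical rule.

def tbr_change_percent(tbr_early, tbr_late):
    """Relative change of TBRmean from the early to the late scan, in %."""
    return 100.0 * (tbr_late - tbr_early) / tbr_early

def suggests_high_grade(tbr_early, tbr_late, cutoff=-8.0):
    """True if the TBRmean change falls at or below the cutoff."""
    return tbr_change_percent(tbr_early, tbr_late) <= cutoff

print(round(tbr_change_percent(3.0, 2.6), 1))   # -13.3
print(suggests_high_grade(3.0, 2.6))            # True
print(suggests_high_grade(3.0, 3.0))            # False
```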

  15. Defining the value framework for prostate brachytherapy using patient-centered outcome metrics and time-driven activity-based costing.

    Science.gov (United States)

    Thaker, Nikhil G; Pugh, Thomas J; Mahmood, Usama; Choi, Seungtaek; Spinks, Tracy E; Martin, Neil E; Sio, Terence T; Kudchadker, Rajat J; Kaplan, Robert S; Kuban, Deborah A; Swanson, David A; Orio, Peter F; Zelefsky, Michael J; Cox, Brett W; Potters, Louis; Buchholz, Thomas A; Feeley, Thomas W; Frank, Steven J

    2016-01-01

    Value, defined as outcomes over costs, has been proposed as a measure to evaluate prostate cancer (PCa) treatments. We analyzed standardized outcomes and time-driven activity-based costing (TDABC) for prostate brachytherapy (PBT) to define a value framework. Patients with low-risk PCa treated with low-dose-rate PBT between 1998 and 2009 were included. Outcomes were recorded according to the International Consortium for Health Outcomes Measurement standard set, which includes acute toxicity, patient-reported outcomes, and recurrence and survival outcomes. Patient-level costs to 1 year after PBT were collected using TDABC. Process mapping and radar chart analyses were conducted to visualize this value framework. A total of 238 men were eligible for analysis. Median age was 64 (range, 46-81). Median followup was 5 years (0.5-12.1). There were no acute Grade 3-5 complications. Expanded Prostate Cancer Index Composite 50 scores were favorable, with no clinically significant changes from baseline to last followup at 48 months for urinary incontinence/bother, bowel bother, sexual function, and vitality. Ten-year outcomes were favorable, including biochemical failure-free survival of 84.1%, metastasis-free survival 99.6%, PCa-specific survival 100%, and overall survival 88.6%. TDABC analysis demonstrated low resource utilization for PBT, with 41% and 10% of costs occurring in the operating room and with the MRI scan, respectively. The radar chart allowed direct visualization of outcomes and costs. We successfully created a visual framework to define the value of PBT using the International Consortium for Health Outcomes Measurement standard set and TDABC costs. PBT is associated with excellent outcomes and low costs. Widespread adoption of this methodology will enable value comparisons across providers, institutions, and treatment modalities. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  16. Defining the Value Framework for Prostate Brachytherapy using Patient-Centered Outcome Metrics and Time-Driven Activity-Based Costing

    Science.gov (United States)

    Thaker, Nikhil G.; Pugh, Thomas J.; Mahmood, Usama; Choi, Seungtaek; Spinks, Tracy E.; Martin, Neil E.; Sio, Terence T.; Kudchadker, Rajat J.; Kaplan, Robert S.; Kuban, Deborah A.; Swanson, David A.; Orio, Peter F.; Zelefsky, Michael J.; Cox, Brett W.; Potters, Louis; Buchholz, Thomas A.; Feeley, Thomas W.; Frank, Steven J.

    2017-01-01

    PURPOSE Value, defined as outcomes over costs, has been proposed as a measure to evaluate prostate cancer (PCa) treatments. We analyzed standardized outcomes and time-driven activity-based costing (TDABC) for prostate brachytherapy (PBT) to define a value framework. METHODS AND MATERIALS Patients with low-risk PCa treated with low-dose rate PBT between 1998 and 2009 were included. Outcomes were recorded according to the International Consortium for Health Outcomes Measurement (ICHOM) standard set, which includes acute toxicity, patient-reported outcomes, and recurrence and survival outcomes. Patient-level costs to one year after PBT were collected using TDABC. Process mapping and radar chart analyses were conducted to visualize this value framework. RESULTS A total of 238 men were eligible for analysis. Median age was 64 (range, 46–81). Median follow-up was 5 years (0.5–12.1). There were no acute grade 3–5 complications. EPIC-50 scores were favorable, with no clinically significant changes from baseline to last follow-up at 48 months for urinary incontinence/bother, bowel bother, sexual function, and vitality. Ten-year outcomes were favorable, including biochemical failure-free survival of 84.1%, metastasis-free survival 99.6%, PCa-specific survival 100%, and overall survival 88.6%. TDABC analysis demonstrated low resource utilization for PBT, with 41% and 10% of costs occurring in the operating room and with the MRI scan, respectively. The radar chart allowed direct visualization of outcomes and costs. CONCLUSIONS We successfully created a visual framework to define the value of PBT using the ICHOM standard set and TDABC costs. PBT is associated with excellent outcomes and low costs. Widespread adoption of this methodology will enable value comparisons across providers, institutions, and treatment modalities. PMID:26916105

  17. Deficient motion-defined and texture-defined figure-ground segregation in amblyopic children.

    Science.gov (United States)

    Wang, Jane; Ho, Cindy S; Giaschi, Deborah E

    2007-01-01

    Motion-defined form deficits in the fellow eye and the amblyopic eye of children with amblyopia implicate possible direction-selective motion processing or static figure-ground segregation deficits. Deficient motion-defined form perception in the fellow eye of amblyopic children may not be fully accounted for by a general motion processing deficit. This study investigates the contribution of figure-ground segregation deficits to the motion-defined form perception deficits in amblyopia. Performances of 6 amblyopic children (5 anisometropic, 1 anisostrabismic) and 32 control children with normal vision were assessed on motion-defined form, texture-defined form, and global motion tasks. Performance on motion-defined and texture-defined form tasks was significantly worse in amblyopic children than in control children. Performance on global motion tasks was not significantly different between the 2 groups. Faulty figure-ground segregation mechanisms are likely responsible for the observed motion-defined form perception deficits in amblyopia.

  18. Remotely-sensed, nocturnal, dew point correlates with malaria transmission in Southern Province, Zambia: a time-series study.

    Science.gov (United States)

    Nygren, David; Stoyanov, Cristina; Lewold, Clemens; Månsson, Fredrik; Miller, John; Kamanga, Aniset; Shiff, Clive J

    2014-06-13

    Plasmodium falciparum transmission has decreased significantly in Zambia in the last decade. The malaria transmission is influenced by environmental variables. Incorporation of environmental variables in models of malaria transmission likely improves model fit and predicts probable trends in malaria disease. This work is based on the hypothesis that remotely-sensed environmental factors, including nocturnal dew point, are associated with malaria transmission and sustain foci of transmission during the low transmission season in the Southern Province of Zambia. Thirty-eight rural health centres in Southern Province, Zambia were divided into three zones based on transmission patterns. Correlations between weekly malaria cases and remotely-sensed nocturnal dew point, nocturnal land surface temperature as well as vegetation indices and rainfall were evaluated in time-series analyses from 2012 week 19 to 2013 week 36. Zonal as well as clinic-based, multivariate, autoregressive, integrated, moving average (ARIMAX) models implementing environmental variables were developed to model transmission in 2011 week 19 to 2012 week 18 and forecast transmission in 2013 week 37 to week 41. During the dry, low transmission season significantly higher vegetation indices, nocturnal land surface temperature and nocturnal dew point were associated with the areas of higher transmission. Environmental variables improved ARIMAX models. Dew point and normalized differentiated vegetation index were significant predictors and improved all zonal transmission models. In the high-transmission zone, this was also seen for land surface temperature. Clinic models were improved by adding dew point and land surface temperature as well as normalized differentiated vegetation index. The mean average error of prediction for ARIMAX models ranged from 0.7 to 33.5%. Forecasts of malaria incidence were valid for three out of five rural health centres; however, with poor results at the zonal level. In this
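
    The modelling idea above, autoregressive case counts improved by environmental covariates, can be illustrated with a least-squares ARX(1) on simulated data. This is a simplification of the paper's ARIMAX models; the "dew point" and case series below are synthetic:

```python
import numpy as np

# Simplified illustration of adding an environmental covariate (a synthetic
# seasonal "dew point" series) to an autoregressive model of weekly case
# counts. This is an ARX(1) fitted by ordinary least squares, not the full
# ARIMAX of the study; all data are simulated.

rng = np.random.default_rng(0)
n = 120
dew = 10 + 5 * np.sin(np.arange(n) * 2 * np.pi / 52)   # seasonal covariate
cases = np.zeros(n)
for t in range(1, n):                                  # simulate ARX(1) truth
    cases[t] = 2.0 + 0.6 * cases[t - 1] + 0.3 * dew[t] + rng.normal(0, 0.5)

# Design matrix: intercept, lag-1 cases, current dew point
X = np.column_stack([np.ones(n - 1), cases[:-1], dew[1:]])
y = cases[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))   # estimates near the true values (2.0, 0.6, 0.3)
```

    In the study, model adequacy was judged by out-of-sample forecast error; the same check applies here by refitting on the first weeks and forecasting the rest.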

  19. Mean transit time image - a new method of analyzing brain perfusion studies

    Energy Technology Data Exchange (ETDEWEB)

    Szabo, Z.; Ritzl, F.

    1983-05-01

    Point-by-point calculation of the mean transit time based on a gamma fit was used to analyze brain perfusion studies in a vertex view. The algorithm and preliminary results in the normal brain and in different stages of cerebral perfusion abnormality (ischemia, stroke, migraine, tumor, abscess) are demonstrated. In contrast to traditional methods using fixed, a priori defined regions of interest, this type of mapping of relative regional cerebral perfusion shows the irregular outlines of the disturbance more clearly. Right-to-left activity ratios in the arterial part of the time-activity curves showed significant correlation with the mean transit time ratios (Q1 = 1.185 - 0.192 Qa, n = 38, r = 0.716, P < 0.001).
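    The point-by-point MTT computation can be illustrated with the standard gamma-variate model of a time-activity curve, for which the mean transit time has a closed form, MTT = t0 + beta * (alpha + 1). The sketch below uses made-up parameters and does not reproduce the paper's fitting procedure:

```python
import math

def gamma_variate(t, k, t0, alpha, beta):
    """Gamma-variate model of a tissue time-activity curve."""
    if t <= t0:
        return 0.0
    return k * (t - t0) ** alpha * math.exp(-(t - t0) / beta)

def mean_transit_time(t0, alpha, beta):
    """First moment of the gamma-variate: MTT = t0 + beta * (alpha + 1)."""
    return t0 + beta * (alpha + 1)

# Numerical check: the first moment of the sampled curve matches the closed form.
t0, alpha, beta = 2.0, 3.0, 1.5
dt = 0.001
ts = [i * dt for i in range(int(60 / dt))]
cs = [gamma_variate(t, 1.0, t0, alpha, beta) for t in ts]
mtt_numeric = sum(t * c for t, c in zip(ts, cs)) / sum(cs)
mtt_closed = mean_transit_time(t0, alpha, beta)  # 2.0 + 1.5 * 4 = 8.0
```

    In practice each pixel's curve would first be fitted for k, t0, alpha, and beta, and the MTT map built from the closed form.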

  20. Measures of Noncircularity and Fixed Points of Contractive Multifunctions

    Directory of Open Access Journals (Sweden)

    Marrero Isabel

    2010-01-01

    In analogy to the Eisenfeld-Lakshmikantham measure of nonconvexity and the Hausdorff measure of noncompactness, we introduce two mutually equivalent measures of noncircularity for Banach spaces satisfying a Cantor type property, and apply them to establish a fixed point theorem of Darbo type for multifunctions. Namely, we prove that every multifunction with closed values, defined on a closed set and contractive with respect to any one of these measures, has the origin as a fixed point.

  1. SUVmax by dual time point FDG-PET/CT in recurrent breast cancer

    DEFF Research Database (Denmark)

    Hildebrandt, Malene; Blomberg, Björn; Falch Braas, Kirsten

    … ROVER software without partial volume correction. Pearson correlation analysis and Bland-Altman statistics summarized the data. Results: The overall mean SUVmax was 8.4 g/mL (SD = 3.8) at 1 h and 10.9 g/mL (SD = 4.7) at 3 h. The table displays the Pearson's r and Bland-Altman statistics. The correlation between the two independent observers at 1 h and 3 h was relatively modest, with r statistics of 0.88 and 0.89, respectively. The intraobserver and intersoftware analyses showed a high correlation, with r statistics of 0.99 to 1.00. Conclusions: The interobserver variability was small for both time points, but slightly greater than the intraobserver and intersoftware variability. This could be explained by a difference in observer selection of FDG-avid lesions in patients with multiple lesions. Table (excerpt): 1 h interobserver Pearson's r 0.88 (95% CI 0.82 - 0.92), Bland-Altman mean difference …

  2. Young adult females' views regarding online privacy protection at two time points.

    Science.gov (United States)

    Moreno, Megan A; Kelleher, Erin; Ameenuddin, Nusheen; Rastogi, Sarah

    2014-09-01

    Risks associated with adolescent Internet use include exposure to inappropriate information and privacy violations. Privacy expectations and policies have changed over time. Recent Facebook security setting changes heighten these risks. The purpose of this study was to investigate views and experiences with Internet safety and privacy protection among older adolescent females at two time points, in 2009 and 2012. Two waves of focus groups were conducted, one in 2009 and the other in 2012. During these focus groups, female university students discussed Internet safety risks and strategies and privacy protection. All focus groups were audio recorded and manually transcribed. Qualitative analysis was conducted at the end of each wave and then reviewed and combined in a separate analysis using the constant comparative method. A total of 48 females participated across the two waves. The themes included (1) abundant urban myths, such as the ability for companies to access private information; (2) the importance of filtering one's displayed information; and (3) maintaining age limits on social media access to avoid younger teens' presence on Facebook. The findings present a complex picture of how adolescents view privacy protection and online safety. Older adolescents may be valuable partners in promoting safe and age-appropriate Internet use for younger teens in the changing landscape of privacy. Copyright © 2014. Published by Elsevier Inc.

  3. Deep muscle pain, tender points and recovery in acute whiplash patients: a 1-year follow-up study.

    Science.gov (United States)

    Kasch, Helge; Qerama, Erisela; Kongsted, Alice; Bach, Flemming W; Bendix, Tom; Jensen, Troels S

    2008-11-15

    Local sensitization to noxious stimuli has been previously described in acute whiplash injury and has been suggested to be a risk factor for chronic sequelae following acute whiplash injury. In this study, we prospectively examined the development of tender points and mechano-sensitivity in 157 acute whiplash injured patients who fulfilled criteria for WAD grade 2 (n = 153) or grade 3 (n = 4), were seen about 5 days after injury (4.8 +/- 2.3), and subsequently had or had not recovered 1 year after a cervical sprain. Tender point scores and stimulus-response functions for mechanical pressure were determined in injured and non-injured body regions at specific time-points after injury. Thirty-six of 157 WAD grade 2 patients (22.9%) had not recovered, defined as reduced work capacity after 1 year. Non-recovered patients had higher total tender point scores after 12 months, suggesting a link between sensitization after whiplash injury and the development of further sensitization in patients with long-term disability.

  4. Time-dependent image potential at a metal surface

    International Nuclear Information System (INIS)

    Alducin, M.; Diez Muino, R.; Juaristi, J.I.

    2003-01-01

    Transient effects in the image potential induced by a point charge suddenly created in front of a metal surface are studied. The time evolution of the image potential is calculated using linear response theory. Two different time scales are defined: (i) the time required for the creation of the image potential and (ii) the time it takes to converge to its stationary value. Their dependence on the distance of the charge to the surface is discussed. The effect of the electron gas damping is also analyzed. For a typical metallic density, the order of magnitude of the creation time is 0.1 fs, whereas for a charge created close to the surface the convergence time is around 1-2 fs

  5. Defining the minimal detectable change in scores on the eight-item Morisky Medication Adherence Scale.

    Science.gov (United States)

    Muntner, Paul; Joyce, Cara; Holt, Elizabeth; He, Jiang; Morisky, Donald; Webber, Larry S; Krousel-Wood, Marie

    2011-05-01

    Self-report scales are used to assess medication adherence. Data on how to discriminate change in self-reported adherence over time from random variability are limited. To determine the minimal detectable change for scores on the 8-item Morisky Medication Adherence Scale (MMAS-8). The MMAS-8 was administered twice, using a standard telephone script, with administration separated by 14-22 days, to 210 participants taking antihypertensive medication in the CoSMO (Cohort Study of Medication Adherence among Older Adults). MMAS-8 scores were calculated and participants were grouped into previously defined categories (<6, 6 to <8, and 8 for low, medium, and high adherence). The mean (SD) age of participants was 78.1 (5.8) years, 43.8% were black, and 68.1% were women. Overall, 8.1% (17/210), 16.2% (34/210), and 51.0% (107/210) of participants had low, medium, and high MMAS-8 scores, respectively, at both survey administrations (overall agreement 75.2%; 158/210). The weighted κ statistic was 0.63 (95% CI 0.53 to 0.72). The intraclass correlation coefficient was 0.78. The within-person standard error of the mean for change in MMAS-8 scores was 0.81, which equated to a minimal detectable change of 1.98 points. Only 4.3% (9/210) of the participants had a change in MMAS-8 of 2 or more points between survey administrations. Within-person changes in MMAS-8 scores of 2 or more points over time may represent a real change in antihypertensive medication adherence.

  6. Efficient 3D Volume Reconstruction from a Point Cloud Using a Phase-Field Method

    Directory of Open Access Journals (Sweden)

    Darae Jeong

    2018-01-01

    We propose an explicit hybrid numerical method for the efficient 3D volume reconstruction from unorganized point clouds using a phase-field method. The proposed three-dimensional volume reconstruction algorithm is based on the 3D binary image segmentation method. First, we define a narrow band domain embedding the unorganized point cloud and an edge indicating function. Second, we define a good initial phase-field function which speeds up the computation significantly. Third, we use a recently developed explicit hybrid numerical method for solving the three-dimensional image segmentation model to obtain efficient volume reconstruction from point cloud data. In order to demonstrate the practical applicability of the proposed method, we perform various numerical experiments.
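    A minimal 2D sketch of the first two steps described above (a narrow band around the point cloud and a smooth initial phase-field), using brute-force nearest-point distances; the grid size, band width, and tanh profile are illustrative choices, not the authors' implementation:

```python
import math

def nearest_distance(p, cloud):
    """Brute-force distance from a grid node p to the nearest cloud point."""
    return min(math.dist(p, q) for q in cloud)

def initial_phase_field(grid, cloud, band, eps):
    """phi ~ +1 within `band` of the cloud, ~ -1 far away, smooth in between."""
    return {p: math.tanh((band - nearest_distance(p, cloud)) / eps) for p in grid}

# Toy point cloud: samples on a circle of radius 0.5.
cloud = [(0.5 * math.cos(a), 0.5 * math.sin(a))
         for a in [2 * math.pi * k / 100 for k in range(100)]]
n = 41
grid = [(-1 + 2 * i / (n - 1), -1 + 2 * j / (n - 1))
        for i in range(n) for j in range(n)]
phi = initial_phase_field(grid, cloud, band=0.1, eps=0.05)
narrow_band = [p for p in grid if nearest_distance(p, cloud) < 0.1]
```

    A real implementation would use a spatial index (k-d tree) for the distances and evolve phi with the hybrid segmentation solver in 3D.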

  7. Quantifying [{sup 18}F]fluorodeoxyglucose uptake in the arterial wall: the effects of dual time-point imaging and partial volume effect correction

    Energy Technology Data Exchange (ETDEWEB)

    Blomberg, Bjoern A. [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Odense University Hospital, Department of Nuclear Medicine, Odense (Denmark); Bashyam, Arjun; Ramachandran, Abhinay; Gholami, Saeid; Houshmand, Sina; Salavati, Ali; Werner, Tom; Alavi, Abass [Hospital of the University of Pennsylvania, Department of Radiology, Philadelphia, PA (United States); Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva (Switzerland); University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands)

    2015-08-15

    The human arterial wall is smaller than the spatial resolution of current positron emission tomographs. Therefore, partial volume effects should be considered when quantifying arterial wall {sup 18}F-FDG uptake. We evaluated the impact of a novel method for partial volume effect (PVE) correction with contrast-enhanced CT (CECT) assistance on quantification of arterial wall {sup 18}F-FDG uptake at different imaging time-points. Ten subjects were assessed by CECT imaging and dual time-point PET/CT imaging at approximately 60 and 180 min after {sup 18}F-FDG administration. For both time-points, uptake of {sup 18}F-FDG was determined in the aortic wall by calculating the blood pool-corrected maximum standardized uptake value (cSUV{sub MAX}) and cSUV{sub MEAN}. The PVE-corrected SUV{sub MEAN} (pvcSUV{sub MEAN}) was also calculated using {sup 18}F-FDG PET/CT and CECT images. Finally, corresponding target-to-background ratios (TBR) were calculated. At 60 min, pvcSUV{sub MEAN} was on average 3.1 times greater than cSUV{sub MAX} (P <.0001) and 8.5 times greater than cSUV{sub MEAN} (P <.0001). At 180 min, pvcSUV{sub MEAN} was on average 2.6 times greater than cSUV{sub MAX} (P <.0001) and 6.6 times greater than cSUV{sub MEAN} (P <.0001). This study demonstrated that CECT-assisted PVE correction significantly influences quantification of arterial wall {sup 18}F-FDG uptake. Therefore, partial volume effects should be considered when quantifying arterial wall {sup 18}F-FDG uptake with PET. (orig.)

  8. Twin Positive Solutions of a Nonlinear m-Point Boundary Value Problem for Third-Order p-Laplacian Dynamic Equations on Time Scales

    Directory of Open Access Journals (Sweden)

    Wei Han

    2008-01-01

    Several existence theorems of twin positive solutions are established for a nonlinear m-point boundary value problem of third-order p-Laplacian dynamic equations on time scales by using a fixed point theorem. We present two theorems and four corollaries which generalize the results of related literature. As an application, an example to demonstrate our results is given. The obtained conditions are different from some known results.

  9. Identification of a time-varying point source in a system of two coupled linear diffusion-advection- reaction equations: application to surface water pollution

    International Nuclear Information System (INIS)

    Hamdi, Adel

    2009-01-01

    This paper deals with the identification of a point source (localization of its position and recovery of the history of its time-varying intensity function) that constitutes the right-hand side of the first equation in a system of two coupled 1D linear transport equations. Assuming that the source intensity function vanishes before reaching the final control time, we prove the identifiability of the sought point source from records of the state relative to the second coupled transport equation at two observation points framing the source region. Note that at least one of the two observation points should be strategic. We establish an identification method that uses these records to identify the source position as the root of a continuous and strictly monotonic function, while the source intensity function is recovered using a recursive formula, without any need for an iterative process. Some numerical experiments on a variant of the surface water pollution BOD-OD coupled model are presented.
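    Since the localization step reduces to finding the root of a continuous, strictly monotonic function, any bracketing method applies. A generic bisection sketch; the mismatch function `f` below is a hypothetical stand-in for the one built from the observation records:

```python
def bisect_root(f, a, b, tol=1e-10):
    """Root of a continuous, strictly monotonic f on [a, b] by bisection."""
    fa, fb = f(a), f(b)
    assert fa * fb <= 0, "f must change sign on [a, b]"
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m                  # root lies in the left half
        else:
            a, fa = m, f(m)        # root lies in the right half
    return 0.5 * (a + b)

# Hypothetical monotone mismatch function whose root marks the source position.
f = lambda x: x ** 3 - 2.0         # root at 2 ** (1 / 3)
source_position = bisect_root(f, 0.0, 2.0)
```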

  10. "Dermatitis" defined.

    Science.gov (United States)

    Smith, Suzanne M; Nedorost, Susan T

    2010-01-01

    The term "dermatitis" can be defined narrowly or broadly, clinically or histologically. A common and costly condition, dermatitis is underresourced compared to other chronic skin conditions. The lack of a collectively understood definition of dermatitis and its subcategories could be the primary barrier. To investigate how dermatologists define the term "dermatitis" and determine if a consensus on the definition of this term and other related terms exists. A seven-question survey of dermatologists nationwide was conducted. Of respondents (n = 122), half consider dermatitis to be any inflammation of the skin. Nearly half (47.5%) use the term interchangeably with "eczema." Virtually all (> 96%) endorse the subcategory "atopic" under the terms "dermatitis" and "eczema," but the subcategories "contact," "drug hypersensitivity," and "occupational" are more highly endorsed under the term "dermatitis" than under the term "eczema." Over half (55.7%) personally consider "dermatitis" to have a broad meaning, and even more (62.3%) believe that dermatologists as a whole define the term broadly. There is a lack of consensus among experts in defining dermatitis, eczema, and their related subcategories.

  11. Duan's fixed point theorem: Proof and generalization

    Directory of Open Access Journals (Sweden)

    Arkowitz Martin

    2006-01-01

    Let be an H-space of the homotopy type of a connected, finite CW-complex, any map and the th power map. Duan proved that has a fixed point if . We give a new, short and elementary proof of this. We then use rational homotopy to generalize to spaces whose rational cohomology is the tensor product of an exterior algebra on odd dimensional generators with the tensor product of truncated polynomial algebras on even dimensional generators. The role of the power map is played by a -structure as defined by Hemmi-Morisugi-Ooshima. The conclusion is that and each has a fixed point.

  12. On the nature of time

    Directory of Open Access Journals (Sweden)

    M. Muraskin

    1990-01-01

    We study how the notion of time can affect the motion of particles within the No Integrability Aesthetic Field Theory. We show that the Minkowski hypothesis of treating x4 as pure imaginary, as well as the fourth component of vectors as pure imaginary, does not lead to different solutions provided we alter the sign of dx4 and certain origin point field components. We next show that it is possible to introduce time in the situation where all fields are real so that: (1) the field equations treat all coordinates the same way; (2) the "flow" concept is associated with time but not with space; (3) data is prescribed at a single point rather than on a hypersurface as in hyperbolic theories. We study the lattice solution in the approximation that ignores zig-zag paths. This enables us to investigate the effect of a non-trivial superposition principle in the simplest way. We find that such a system, combined with our new approach to time, gives rise to an apparent infinite particle system in which particles can be looked at as not having well defined trajectories. This result is similar to what we obtained when we treated time in the same manner as the space variables in our previous work.

  13. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data.

    Science.gov (United States)

    Banda, Jorge A; Haydel, K Farish; Davila, Tania; Desai, Manisha; Bryson, Susan; Haskell, William L; Matheson, Donna; Robinson, Thomas N

    2016-01-01

    To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA), 268 7-11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4-7 days. Data were processed and analyzed at epoch lengths of 1-, 5-, 10-, 15-, 30-, and 60-seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA when using the different epoch lengths, WT algorithms, and activity cut-points. WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm, but not the other WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms. Failing to adjust WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy.
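    The core pitfall, that the same one-second count stream yields different MVPA estimates when aggregated at different epoch lengths against a cut-point scaled to the epoch, can be shown in a few lines. The 100 counts/min cut-point and the count values below are hypothetical:

```python
# Hypothetical 100 counts/min MVPA cut-point, scaled to the epoch length.
CUTPOINT_PER_MIN = 100.0

def mvpa_seconds(counts_per_sec, epoch_sec):
    """Re-integrate 1-s counts into epochs, then classify each whole epoch."""
    threshold = CUTPOINT_PER_MIN * epoch_sec / 60.0
    total = 0
    for start in range(0, len(counts_per_sec), epoch_sec):
        epoch = counts_per_sec[start:start + epoch_sec]
        if sum(epoch) >= threshold:
            total += len(epoch)
    return total

# One minute of intermittent activity: 30 s active, 30 s still.
counts = [4.0] * 30 + [0.0] * 30
sec_1s = mvpa_seconds(counts, 1)    # only the active seconds count as MVPA
sec_60s = mvpa_seconds(counts, 60)  # the whole minute is classified as MVPA
```

    The 1-s epochs yield 30 s of MVPA, while the single 60-s epoch yields 60 s, illustrating why estimates across epoch lengths are not comparable.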

  14. Error Mitigation of Point-to-Point Communication for Fault-Tolerant Computing

    Science.gov (United States)

    Akamine, Robert L.; Hodson, Robert F.; LaMeres, Brock J.; Ray, Robert E.

    2011-01-01

    Fault tolerant systems require the ability to detect and recover from physical damage caused by the hardware's environment, faulty connectors, and system degradation over time. This ability applies to military, space, and industrial computing applications. The integrity of Point-to-Point (P2P) communication, between two microcontrollers for example, is an essential part of fault tolerant computing systems. In this paper, different methods of fault detection and recovery are presented and analyzed.
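    One common fault-detection building block for P2P links is a frame checksum; whether this paper uses CRCs specifically is not stated, so the sketch below is purely illustrative, using Python's standard-library CRC-32:

```python
import binascii

def frame(payload: bytes) -> bytes:
    """Append a CRC-32 so the receiver can detect corruption in transit."""
    crc = binascii.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check(framed: bytes):
    """Return (ok, payload): ok is False if the CRC does not match."""
    payload, received = framed[:-4], int.from_bytes(framed[-4:], "big")
    return binascii.crc32(payload) == received, payload

msg = frame(b"sensor reading 42")
ok, payload = check(msg)                       # intact frame passes
corrupted = bytes([msg[0] ^ 0xFF]) + msg[1:]   # flip bits in the first byte
bad_ok, _ = check(corrupted)                   # corruption is detected
```

    Recovery would then be layered on top, for example by acknowledgement and retransmission of frames that fail the check.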

  15. Fermion Systems in Discrete Space-Time Exemplifying the Spontaneous Generation of a Causal Structure

    Science.gov (United States)

    Diethert, A.; Finster, F.; Schiefeneder, D.

    As toy models for space-time at the Planck scale, we consider examples of fermion systems in discrete space-time which are composed of one or two particles defined on two up to nine space-time points. We study the self-organization of the particles as described by a variational principle both analytically and numerically. We find an effect of spontaneous symmetry breaking which leads to the emergence of a discrete causal structure.

  16. Numerical study and ex vivo assessment of HIFU treatment time reduction through optimization of focal point trajectory

    Science.gov (United States)

    Grisey, A.; Yon, S.; Pechoux, T.; Letort, V.; Lafitte, P.

    2017-03-01

    Treatment time reduction is a key issue to expand the use of high intensity focused ultrasound (HIFU) surgery, especially for benign pathologies. This study aims at quantitatively assessing the potential reduction of the treatment time arising from moving the focal point during long pulses. In this context, the optimization of the focal point trajectory is crucial to achieve a uniform thermal dose repartition and avoid boiling. At first, a numerical optimization algorithm was used to generate efficient trajectories. Thermal conduction was simulated in 3D with a finite difference code, and damage to the tissue was modeled using the thermal dose formula. Given an initial trajectory, the thermal dose field was first computed; then, making use of Pontryagin's maximum principle, the trajectory was iteratively refined. Several initial trajectories were tested. An ex vivo study was then conducted in order to validate the efficiency of the resulting optimized strategies. Single pulses were performed at 3 MHz on fresh veal liver samples with an Echopulse, and the size of each unitary lesion was assessed by cutting each sample along three orthogonal planes and measuring the dimension of the whitened area based on photographs. We propose a promising approach to significantly shorten HIFU treatment time: the numerical optimization algorithm was shown to provide reliable insight into trajectories that can improve treatment strategies. The model must now be improved in order to take in vivo conditions into account and be extensively validated.
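    The thermal dose formula referred to above is commonly the Sapareto-Dewey cumulative equivalent minutes at 43 °C (CEM43); the sketch below assumes that convention, a uniform temperature sampling interval, and an illustrative necrosis threshold:

```python
def cem43(temps_c, dt_s):
    """Cumulative equivalent minutes at 43 C (Sapareto-Dewey thermal dose)."""
    dose_min = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25   # standard piecewise base
        dose_min += (r ** (43.0 - t)) * dt_s / 60.0
    return dose_min

# 30 s at 55 C: R = 0.5, so each second counts 0.5 ** (-12) = 4096 equivalent s.
dose = cem43([55.0] * 30, dt_s=1.0)   # 30 * 4096 / 60 = 2048 CEM43
lethal = dose >= 240.0                # 240 CEM43 is a commonly used threshold
```

    In the trajectory optimization, the goal is a uniform CEM43 field above threshold over the target volume without local boiling.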

  17. Extended Fitts' model of pointing time in eye-gaze input system - Incorporating effects of target shape and movement direction into modeling.

    Science.gov (United States)

    Murata, Atsuo; Fukunaga, Daichi

    2018-04-01

    This study investigated the effects of target shape and movement direction on pointing time using an eye-gaze input system, and extended Fitts' model so that these factors are incorporated into the model and its predictive power is enhanced. The target shape, target size, movement distance, and direction of target presentation were set as within-subject experimental variables. The target shapes included a circle and rectangles with aspect ratios of 1:1, 1:2, 1:3, and 1:4. The movement directions included eight directions: upper, lower, left, right, upper left, upper right, lower left, and lower right. On the basis of the data identifying the effects of target shape and movement direction on pointing time, an attempt was made to develop a generalized and extended Fitts' model that takes the movement direction and the target shape into account. As a result, the generalized and extended model was found to fit the experimental data better and to be more effective for predicting pointing time for a variety of human-computer interaction (HCI) tasks using an eye-gaze input system. Copyright © 2017. Published by Elsevier Ltd.
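    For reference, the baseline model being extended is Fitts' law; in its common Shannon formulation the predicted movement time is MT = a + b * log2(D/W + 1). A sketch with hypothetical regression coefficients (the paper's added shape and direction terms are not reproduced here):

```python
import math

def fitts_time(a, b, distance, width):
    """Shannon form of Fitts' law: MT = a + b * log2(distance / width + 1)."""
    return a + b * math.log2(distance / width + 1)

# Hypothetical coefficients (intercept in s, slope in s/bit) from a fit.
a, b = 0.3, 0.15
near_large = fitts_time(a, b, distance=100, width=100)  # index of difficulty 1 bit
far_small = fitts_time(a, b, distance=700, width=100)   # index of difficulty 3 bits
```

    The extension discussed in the abstract would add terms (or condition a and b) on target aspect ratio and movement direction.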

  18. Three key points along an intrinsic reaction coordinate

    Indian Academy of Sciences (India)

    Unknown

    Abstract. The concept of the reaction force is presented and discussed in detail. For typical processes with energy barriers, it has a universal form which defines three key points along an intrinsic reaction coordinate: the force minimum, zero and maximum. We suggest that the resulting four zones be interpreted as involving ...
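    The three key points can be located numerically for any barrier profile E(xi) by evaluating the reaction force F = -dE/dxi. A sketch using a hypothetical quartic double-well profile (minima at xi = 0 and xi = 2, barrier at xi = 1), not a profile from the paper:

```python
def energy(x):
    """Model barrier profile along the reaction coordinate (minima at 0 and 2)."""
    return x ** 2 * (x - 2) ** 2

def force(x, h=1e-6):
    """Reaction force F = -dE/dxi, by central finite difference."""
    return -(energy(x + h) - energy(x - h)) / (2 * h)

xs = [i / 1000 for i in range(1, 2000)]          # interior of [0, 2]
fs = [force(x) for x in xs]
x_fmin = xs[fs.index(min(fs))]                    # force minimum
x_fmax = xs[fs.index(max(fs))]                    # force maximum
x_zero = min(xs, key=lambda x: abs(force(x)) if 0.5 < x < 1.5 else 1e9)
```

    The three points (here approximately 0.42, 1.0, and 1.58) divide the coordinate into the four zones mentioned in the abstract.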

  19. Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data

    Science.gov (United States)

    Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.

    2017-12-01

    The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35-65 %) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
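    The core of the method is a least-cost path between user-defined control points over a cost raster. A minimal Dijkstra sketch on a 4-connected grid with illustrative cost values; the paper's specially tailored cost functions are not reproduced:

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a 2D cost raster; 4-connected, edge cost = target cell."""
    rows, cols = len(cost), len(cost[0])
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:                  # walk back along predecessors
        node = prev[node]
        path.append(node)
    return path[::-1]

# Low cost along a "fracture" in row 1; high cost elsewhere.
grid = [[9, 9, 9, 9],
        [1, 1, 1, 1],
        [9, 9, 9, 9]]
path = least_cost_path(grid, (1, 0), (1, 3))
```

    With a cost function that is low along image gradients or surface roughness, the solver snaps the digitised trace to the structural feature between the two control points.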

  20. Bell inequalities and waiting times

    Energy Technology Data Exchange (ETDEWEB)

    Poeltl, Christina; Governale, Michele [School of Chemical and Physical Sciences and MacDiarmid Institute for Advanced Materials and Nanotechnology, Victoria University of Wellington, PO Box 600, Wellington 6140 (New Zealand)

    2015-07-01

    We propose a Bell test based on waiting time distributions for spin-entangled electron pairs, which are generated and split in mesoscopic Coulomb blockade structures, denoted as entanglers. These systems have the advantage that quantum point contacts enable a time-resolved observation of the electrons occupying the system, which gives access to quantities such as full counting statistics and waiting time distributions. We use the partial waiting times to define a CHSH-Bell test, which is a purely electronic analogue of the test used in quantum optics. After introducing the Bell inequality, we discuss the findings for the two examples of a double quantum dot and a triple quantum dot. This Bell test allows the exclusion of irrelevant tunnel processes from the statistics normally used for the Bell correlations, which can significantly improve the parameter range for which a violation of the Bell inequality can be measured.
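    For context, the CHSH combination referred to above bounds any local hidden-variable model by |S| <= 2, while quantum singlet correlations E(a, b) = -cos(a - b) reach the Tsirelson value 2*sqrt(2). A generic numeric check; the waiting-time-based correlators of the paper are not modeled here:

```python
import math

def E(a, b):
    """Singlet-state correlation for analyzer angles a, b (radians)."""
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    """CHSH combination; |S| <= 2 for any local hidden-variable model."""
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Standard maximally violating angles: 0, pi/2, pi/4, 3*pi/4.
S = abs(chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4))
violates_classical_bound = S > 2.0
```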

  1. Elastic-plastic adhesive contact of rough surfaces using n-point asperity model

    International Nuclear Information System (INIS)

    Sahoo, Prasanta; Mitra, Anirban; Saha, Kashinath

    2009-01-01

    This study considers an analysis of the elastic-plastic contact of rough surfaces in the presence of adhesion using an n-point asperity model. The multiple-point asperity model, developed by Hariri et al (2006 Trans ASME: J. Tribol. 128 505-14) is integrated into the elastic-plastic adhesive contact model developed by Roy Chowdhury and Ghosh (1994 Wear 174 9-19). This n-point asperity model differs from the conventional Greenwood and Williamson model (1966 Proc. R. Soc. Lond. A 295 300-19) in considering the asperities not as fixed entities but as those that change through the contact process, and hence it represents the asperities in a more realistic manner. The newly defined adhesion index and plasticity index defined for the n-point asperity model are used to consider the different conditions that arise because of varying load, surface and material parameters. A comparison between the load-separation behaviour of the new model and the conventional one shows a significant difference between the two depending on combinations of mean separation, adhesion index and plasticity index.

  2. Improving method of real-time offset tuning for arterial signal coordination using probe trajectory data

    Directory of Open Access Journals (Sweden)

    Jian Zhang

    2016-12-01

    In the environment of intelligent transportation systems, traffic condition data have increasingly high resolution in time and space, which is especially valuable for managing the interrupted traffic at signalized intersections. Many algorithms exist for offset tuning, but few of them take advantage of modern traffic detection methods such as probe vehicle data. This study proposes a method using probe trajectory data to optimize and adjust offsets in real time. The critical point, representing changing vehicle dynamics, is first defined as the basis of this approach. Using the critical points related to different states of traffic conditions, such as free flow, queue formation, and dissipation, various traffic status parameters can be estimated, including actual travel speed, queue dissipation rate, and standing queue length. The offset can then be adjusted on a cycle-by-cycle basis. The performance of this approach is evaluated using a simulation network. The results show that the trajectory-based approach can reduce the travel time of the coordinated traffic flow when compared with well-defined offline offsets.
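    A highly simplified sketch of the idea: detect critical points where a probe's speed crosses a stop threshold (queue join and queue dissipation), then shift the offset accordingly. The threshold, the probe samples, and the adjustment rule below are illustrative stand-ins, not the paper's model:

```python
def critical_points(trajectory, v_stop=2.0):
    """Times where a probe's speed crosses the stop threshold:
    crossing below ~ joining the queue, crossing above ~ queue dissipation."""
    joins, releases = [], []
    for i in range(1, len(trajectory)):
        (t0, v0), (t1, v1) = trajectory[i - 1], trajectory[i]
        if v0 >= v_stop > v1:
            joins.append(t1)
        if v0 < v_stop <= v1:
            releases.append(t1)
    return joins, releases

def adjusted_offset(offset, joins, releases, cycle=90.0):
    """Shift the offset by the observed stopped time, wrapped to the cycle."""
    if joins and releases:
        stopped = releases[0] - joins[0]
        return (offset + stopped) % cycle
    return offset

# Probe samples (time s, speed m/s): cruise, stop in queue, release.
probe = [(0, 12), (5, 12), (10, 1), (15, 0), (20, 0), (25, 10), (30, 12)]
joins, releases = critical_points(probe)
new_offset = adjusted_offset(20.0, joins, releases)
```

    The paper's actual rule combines several critical-point-derived quantities (travel speed, dissipation rate, standing queue length) rather than the stopped time alone.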

  3. Point- and curve-based geometric conflation

    KAUST Repository

    López-Vázquez, C.; Manso Callejo, M.A.

    2013-01-01

    Geometric conflation is the process undertaken to modify the coordinates of features in dataset A in order to match corresponding ones in dataset B. The overwhelming majority of the literature considers the use of points as features to define the transformation. In this article we present a procedure to consider one-dimensional curves also, which are commonly available as Global Navigation Satellite System (GNSS) tracks, routes, coastlines, and so on, in order to define the estimate of the displacements to be applied to each object in A. The procedure involves three steps, including the partial matching of corresponding curves, the computation of some analytical expression, and the addition of a correction term in order to satisfy basic cartographic rules. A numerical example is presented. © 2013 Copyright Taylor and Francis Group, LLC.
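    One simple way to "define the estimate of the displacements to be applied to each object in A" from matched features is inverse-distance weighting of the control-point displacements. A sketch under that assumption; the article's curve-matching and cartographic correction steps are not reproduced:

```python
def idw_displacement(point, controls, power=2.0):
    """Inverse-distance-weighted displacement from control-point pairs.
    controls: list of ((xa, ya), (xb, yb)) matches between datasets A and B."""
    num_dx = num_dy = den = 0.0
    for (xa, ya), (xb, yb) in controls:
        d2 = (point[0] - xa) ** 2 + (point[1] - ya) ** 2
        if d2 == 0.0:
            return (xb - xa, yb - ya)      # exactly on a control point
        w = d2 ** (-power / 2.0)
        num_dx += w * (xb - xa)
        num_dy += w * (yb - ya)
        den += w
    return (num_dx / den, num_dy / den)

def conflate(point, controls):
    dx, dy = idw_displacement(point, controls)
    return (point[0] + dx, point[1] + dy)

# Two hypothetical control matches, both shifted by (+1, 0).
controls = [((0.0, 0.0), (1.0, 0.0)), ((10.0, 0.0), (11.0, 0.0))]
moved = conflate((5.0, 0.0), controls)
```

    Using curves as well as points, as the article proposes, adds many more correspondences along GNSS tracks or coastlines and so constrains the displacement field far better than sparse point matches alone.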

  4. Computing platforms for software-defined radio

    CERN Document Server

    Nurmi, Jari; Isoaho, Jouni; Garzia, Fabio

    2017-01-01

    This book addresses Software-Defined Radio (SDR) baseband processing from the computer architecture point of view, providing a detailed exploration of different computing platforms by classifying different approaches, highlighting the common features related to SDR requirements and by showing pros and cons of the proposed solutions. Coverage includes architectures exploiting parallelism by extending single-processor environment (such as VLIW, SIMD, TTA approaches), multi-core platforms distributing the computation to either a homogeneous array or a set of specialized heterogeneous processors, and architectures exploiting fine-grained, coarse-grained, or hybrid reconfigurability. Describes a computer engineering approach to SDR baseband processing hardware; Discusses implementation of numerous compute-intensive signal processing algorithms on single and multicore platforms; Enables deep understanding of optimization techniques related to power and energy consumption of multicore platforms using several basic a...

  5. Extracting biologically significant patterns from short time series gene expression data

    Directory of Open Access Journals (Sweden)

    McGinnis Thomas

    2009-08-01

    Background: Time series gene expression data analysis is used widely to study the dynamics of various cell processes. Most of the time series data available today consist of few time points only, thus making the application of standard clustering techniques difficult. Results: We developed two new algorithms that are capable of extracting biological patterns from short time point series gene expression data. The two algorithms, ASTRO and MiMeSR, are inspired by the rank order preserving framework and the minimum mean squared residue approach, respectively. However, ASTRO and MiMeSR differ from previous approaches in that they take advantage of the relatively small number of time points in order to reduce the problem from NP-hard to linear. Tested on well-defined short time expression data, we found that our approaches are robust to noise, as well as to random patterns, and that they can correctly detect the temporal expression profile of relevant functional categories. Evaluation of our methods was performed using Gene Ontology (GO) annotations and chromatin immunoprecipitation (ChIP-chip) data. Conclusion: Our approaches generally outperform both standard clustering algorithms and algorithms designed specifically for clustering of short time series gene expression data. Both algorithms are available at http://www.benoslab.pitt.edu/astro/.
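    The minimum mean squared residue idea that inspires MiMeSR can be illustrated with the classic Cheng-Church residue score, which is zero for expression profiles that differ only by additive row/column shifts; this is background for the approach, not the authors' algorithm:

```python
def mean_squared_residue(matrix):
    """Cheng-Church mean squared residue of an expression submatrix:
    0 for profiles that differ only by additive row/column shifts."""
    rows, cols = len(matrix), len(matrix[0])
    row_mean = [sum(r) / cols for r in matrix]
    col_mean = [sum(matrix[i][j] for i in range(rows)) / rows for j in range(cols)]
    total = sum(row_mean) / rows
    return sum(
        (matrix[i][j] - row_mean[i] - col_mean[j] + total) ** 2
        for i in range(rows) for j in range(cols)
    ) / (rows * cols)

coherent = [[1.0, 2.0, 3.0],
            [2.0, 3.0, 4.0]]   # second gene = same temporal profile, shifted
noisy = [[1.0, 2.0, 3.0],
         [3.0, 1.0, 2.0]]      # scrambled second profile
```

    A low residue over a set of genes and time points indicates a coherent temporal pattern, which is the kind of structure the clustering aims to recover.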

  6. A Time Scheduling Model of Logistics Service Supply Chain Based on the Customer Order Decoupling Point: A Perspective from the Constant Service Operation Time

    Science.gov (United States)

    Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng

    2014-01-01

    In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, benefits its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis for a specific example. Results show that the order completion time of the LSSC can be delayed or brought ahead of schedule, but cannot be infinitely advanced or infinitely delayed. Obtaining the optimal comprehensive performance can be effective if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by the increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The relative degree of concern of the LSI for cost and service delivery punctuality leads not only to changes in the CODP but also to changes in the scheduling performance of the LSSC. PMID:24715818

  7. A time scheduling model of logistics service supply chain based on the customer order decoupling point: a perspective from the constant service operation time.

    Science.gov (United States)

    Liu, Weihua; Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng

    2014-01-01

    In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, benefits its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on the time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis for a specific example. Results show that the order completion time of the LSSC can be delayed or brought ahead of schedule, but cannot be infinitely advanced or infinitely delayed. Obtaining the optimal comprehensive performance can be effective if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by the increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The relative degree of concern of the LSI for cost and service delivery punctuality leads not only to changes in the CODP but also to changes in the scheduling performance of the LSSC.

  8. The laboratory information float, time-based competition, and point-of-care testing.

    Science.gov (United States)

    Friedman, B A

    1994-01-01

    A new term, the laboratory information float, should be substituted for turnaround time when evaluating the performance of the clinical laboratory, because it includes the time necessary to make test results both available (ready to use) and accessible (easy to use) to clinicians ordering tests. The laboratory information float can be greatly reduced simply by telescoping the analytic phase of laboratory testing into the preanalytic phase. Significant costs are incurred by such a change, some of which can be reduced by developing a mobile clinical laboratory (sometimes referred to as a "lab-on-a-slab" or "rolling thunder") to transport the analytic devices directly to patient care units. The mobile clinical laboratory should be equipped with an integrated personal computer that can communicate continuously with the host laboratory information system and achieve some semblance of continuous-flow processing despite test performance in point-of-care venues. Equipping clinicians with palmtop computers will allow the mobile clinician to access test results and order tests on the run. Such devices can easily be configured to operate in a passive mode, accessing relevant information automatically instead of forcing clinicians to query the laboratory information system periodically for the test results necessary to render care to their patients. The laboratory information float of the year 2000 will surely be measured in minutes through the judicious deployment of relevant technology such as mobile clinical laboratories and palmtop computers.

  9. Point-like Particles in Fuzzy Space-time

    OpenAIRE

    Francis, Charles

    1999-01-01

    This paper is withdrawn as I am no longer using the term "fuzzy space-time" to describe the uncertainty in co-ordinate systems implicit in quantum logic. Nor am I using the interpretation that quantum logic can be regarded as a special case of fuzzy logic. This is because there are sufficient differences between quantum logic and fuzzy logic that the explanation is confusing. I give an interpretation of quantum logic in "A Theory of Quantum Space-time".

  10. PointCloudExplore 2: Visual exploration of 3D gene expression

    Energy Technology Data Exchange (ETDEWEB)

    International Research Training Group Visualization of Large and Unstructured Data Sets, University of Kaiserslautern, Germany; Institute for Data Analysis and Visualization, University of California, Davis, CA; Computational Research Division, Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA; Genomics Division, LBNL; Computer Science Department, University of California, Irvine, CA; Computer Science Division,University of California, Berkeley, CA; Life Sciences Division, LBNL; Department of Molecular and Cellular Biology and the Center for Integrative Genomics, University of California, Berkeley, CA; Ruebel, Oliver; Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Keranen, Soile V.E.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; DePace, Angela H.; Simirenko, L.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hand; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2008-03-31

    To better understand how developmental regulatory networks are defined in the genome sequence, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods to describe 3D gene expression data, i.e., the output of the network at cellular resolution for multiple time points. To allow researchers to explore these novel data sets we have developed PointCloudXplore (PCX). In PCX we have linked physical and information visualization views via the concept of brushing (cell selection). For each view, dedicated operations for performing selection of cells are available. In PCX, all cell selections are stored in a central management system. Cells selected in one view can in this way be highlighted in any other view, allowing further cell subset properties to be determined. Complex cell queries can be defined by combining different cell selections using logical operations such as AND, OR, and NOT. Here we provide an overview of PointCloudXplore 2 (PCX2), the latest publicly available version of PCX. PCX2 has shown to be an effective tool for visual exploration of 3D gene expression data. We discuss (i) all views available in PCX2, (ii) different strategies to perform cell selection, (iii) the basic architecture of PCX2, and (iv) illustrate the usefulness of PCX2 using selected examples.

  11. Nanomechanical displacement sensing using a quantum point contact

    International Nuclear Information System (INIS)

    Cleland, A.N.; Aldridge, J.S.; Driscoll, D.C.; Gossard, A. C.

    2002-01-01

    We describe a radio frequency mechanical resonator that includes a quantum point contact, defined using electrostatic top gates. We can mechanically actuate the resonator using either electrostatic or magnetomotive forces. We demonstrate the use of the quantum point contact as a displacement sensor, operating as a radio frequency mixer at the mechanical resonance frequency of 1.5 MHz. We calculate a displacement sensitivity of about 3×10^-12 m/Hz^(1/2). This device will potentially permit quantum-limited displacement sensing of nanometer-scale resonators, allowing the quantum entanglement of the electronic and mechanical degrees of freedom of a nanoscale system.

  12. Job Demands, Burnout, and Teamwork in Healthcare Professionals Working in a General Hospital that Was Analysed At Two Points in Time

    Directory of Open Access Journals (Sweden)

    Dragan Mijakoski

    2018-04-01

    CONCLUSION: The present longitudinal study revealed significantly higher mean values of emotional exhaustion and depersonalization in 2014, which could be explained by the significant increase in job demands between the analysed points in time.

  13. Iterative approximation of fixed points of nonexpansive mappings

    International Nuclear Information System (INIS)

    Chidume, C.E.; Chidume, C.O.

    2007-07-01

    Let K be a nonempty closed convex subset of a real Banach space E which has a uniformly Gateaux differentiable norm and T : K → K be a nonexpansive mapping with F(T) := {x ∈ K : Tx = x} ≠ ∅. For a fixed δ ∈ (0, 1), define S : K → K by Sx := (1 − δ)x + δTx, for all x ∈ K. Assume that {z_t} converges strongly to a fixed point z of T as t → 0, where z_t is the unique element of K which satisfies z_t = tu + (1 − t)Tz_t for arbitrary u ∈ K. Let {α_n} be a real sequence in (0, 1) which satisfies the following conditions: C1: lim α_n = 0; C2: Σ α_n = ∞. For arbitrary x_0 ∈ K, let the sequence {x_n} be defined iteratively by x_{n+1} = α_n u + (1 − α_n)Sx_n. Then, {x_n} converges strongly to a fixed point of T. (author)
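As a concrete illustration, the iteration above is straightforward to run numerically. The sketch below (a minimal example, not from the paper) takes T(x) = cos x on the real line, a standard nonexpansive map whose unique fixed point is approximately 0.7391, with δ = 0.5 and α_n = 1/(n + 2), which satisfies both C1 and C2:

```python
import math

def halpern_averaged(T, u, x0, delta=0.5, n_iters=5000):
    """Iterate x_{n+1} = a_n*u + (1 - a_n)*S(x_n), where
    S(x) = (1 - delta)*x + delta*T(x) is the averaged map and
    a_n = 1/(n + 2) satisfies C1 (a_n -> 0) and C2 (sum a_n = infinity)."""
    x = x0
    for n in range(n_iters):
        a = 1.0 / (n + 2)
        Sx = (1 - delta) * x + delta * T(x)
        x = a * u + (1 - a) * Sx
    return x

# T = cos is nonexpansive on R (|cos'| <= 1); its fixed point is ~0.7390851
fp = halpern_averaged(math.cos, u=0.0, x0=1.0)
```

The anchor term α_n·u vanishes as n grows, so the sequence settles on a fixed point of T rather than on u.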

  14. Split delivery vehicle routing problem with time windows: a case study

    Science.gov (United States)

    Latiffianti, E.; Siswanto, N.; Firmandani, R. A.

    2018-04-01

    This paper aims to implement an extension of the VRP, the so-called split delivery vehicle routing problem (SDVRP) with time windows, in a case study involving pickups and deliveries of workers from several points of origin to several destinations. Each origin represents a bus stop and each destination represents either a site or an office location. An integer linear programming formulation of the SDVRP problem is presented. The solution was generated using three stages: defining the starting points, assigning buses, and solving the SDVRP with time windows using an exact method. Although the overall computational time was relatively lengthy, the results indicated that the produced solution was better than the existing routing and scheduling that the firm used. The produced solution was also capable of reducing fuel cost by 9%, a saving obtained from the shorter total distance travelled by the shuttle buses.
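The defining feature of the SDVRP, that one stop's demand may be served by more than one vehicle, can be shown with a toy greedy assignment. This is illustrative only; the paper solves an integer linear program with time windows, and the stop names and quantities below are invented:

```python
def split_delivery_assign(demands, capacity):
    """Greedy sketch of split delivery: a stop's demand may be divided
    across several vehicles, so each vehicle is filled to capacity
    before a new route is opened."""
    routes = [[]]          # each route is a list of (stop, load) pairs
    remaining = capacity   # free capacity on the current vehicle
    for stop, qty in demands.items():
        while qty > 0:
            served = min(qty, remaining)
            routes[-1].append((stop, served))
            qty -= served
            remaining -= served
            if remaining == 0:            # vehicle full: open a new route
                routes.append([])
                remaining = capacity
    if routes and not routes[-1]:         # drop a trailing empty route
        routes.pop()
    return routes

# total demand 70 against capacity 40: stop "B" is split across two buses
routes = split_delivery_assign({"A": 30, "B": 25, "C": 15}, capacity=40)
```

Allowing the split is what lets two vehicles cover a total demand that neither could carry alone without revisiting stops.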

  15. A Field Evaluation of the Time-of-Detection Method to Estimate Population Size and Density for Aural Avian Point Counts

    Directory of Open Access Journals (Sweden)

    Mathew W. Alldredge

    2007-12-01

    Full Text Available The time-of-detection method for aural avian point counts is a new method of estimating abundance, allowing for uncertain probability of detection. The method has been specifically designed to allow for variation in the singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording the detection history of the subintervals when each bird sings. The method can be viewed as generating data equivalent to closed capture-recapture information. The method differs from the distance and multiple-observer methods in that it does not require that all the birds sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused by both the very low detection probabilities of all distant
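Because each bird's detections across the four 2-min subintervals form a capture history, abundance can be estimated with closed-population capture-recapture machinery. The following is a minimal sketch, an M0-style moment iteration rather than the estimator used in the paper, with invented detection histories:

```python
def estimate_N(histories, k=4, iters=200):
    """Closed-population estimate from detection histories (one binary
    tuple per detected bird, one entry per sub-interval). Fixed-point
    iteration of the M0 moment equations p = d/(k*N) and
    n = N*(1 - (1 - p)**k)."""
    n = len(histories)                   # distinct birds detected
    d = sum(sum(h) for h in histories)   # total detections
    N = float(n)                         # start from the naive count
    for _ in range(iters):
        p = d / (k * N)                  # per-interval detection prob.
        N = n / (1.0 - (1.0 - p) ** k)   # implied population size
    return N, p

# nine detected birds, each heard in 2 of the k = 4 sub-intervals
histories = [(1, 1, 0, 0)] * 9
N_hat, p_hat = estimate_N(histories)
```

The estimate exceeds the naive count of nine because, with a per-interval detection probability below one, some birds are expected to have been missed in every sub-interval.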

  16. The value of dual time point 18F-FDG PET imaging for the differentiation between malignant and benign lesions

    International Nuclear Information System (INIS)

    Lan, X.-L.; Zhang, Y.-X.; Wu, Z.-J.; Jia, Q.; Wei, H.; Gao, Z.-R.

    2008-01-01

    Aim: To assess the clinical value of dual time point 2-[18F]-fluoro-2-deoxy-D-glucose positron emission tomography (18F-FDG PET) imaging for the differentiation between malignant and benign lesions. Materials and methods: Ninety-six patients (28 patients with primary lung cancer, 18 patients with digestive system carcinoma, 13 patients with other malignant tumours, and 37 patients with benign lesions) underwent FDG-PET/CT at two time points: examination 1 at 45-55 min and examination 2 at 160 ± 24 (150-180) min after the intravenous injection of 233 ± 52 (185-370) MBq 18F-FDG. Reconstructed images were evaluated qualitatively and quantitatively. The maximum standardized uptake values (SUVmax) of the lesions were calculated for both time points. An increase was considered to have occurred if the SUVs at examination 2 had increased by >10% as compared with those at examination 1. Results: The lesions in 24 of 28 (86%) patients with primary lung cancer had an SUVmax ≥2.5 at examination 1. Of these, SUVmax values increased in 23 patients, but had not changed in one patient, at examination 2. The lesions in the other four patients with primary lung tumour had SUVmax values between 1.5 and 2.5 at examination 1, which were considered as suspected positive; increased SUVmax values were observed in three of these patients at examination 2. The malignant lesions in 17 of 18 patients with digestive system carcinoma showed SUVmax values ≥2.5 and only one patient had an SUVmax value 18F-FDG PET imaging is an important noninvasive method for the differentiation of malignant and nonmalignant lesions
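The quantitative rule described above, an early SUVmax cut-off of 2.5 combined with a >10% interval increase (a retention index), takes only a few lines to express. This is a sketch of the decision rule as stated in the abstract, not validated clinical software, and the cut-off values are simply those quoted there:

```python
def retention_index(suv1, suv2):
    """Percentage change in SUVmax between the early and delayed scans."""
    return (suv2 - suv1) / suv1 * 100.0

def dual_time_point_flag(suv1, suv2, suv_cutoff=2.5, ri_cutoff=10.0):
    """Flag a lesion as suspicious if the early SUVmax reaches the
    cut-off, or if uptake rises by more than 10% on the delayed scan
    (cut-offs as quoted in the abstract; reporting criteria vary)."""
    return suv1 >= suv_cutoff or retention_index(suv1, suv2) > ri_cutoff

ri = retention_index(2.0, 2.4)          # +20% between the two scans
flag = dual_time_point_flag(1.8, 2.1)   # RI ~ +16.7% -> suspicious
```

Note how the second criterion catches the "suspected positive" lesions with early SUVmax between 1.5 and 2.5 that rise on the delayed scan.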

  17. GPU-accelerated element-free reverse-time migration with Gauss points partition

    Science.gov (United States)

    Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong

    2018-06-01

    An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information of the nodes and the boundary of the concerned area is required). However, in EFM, due to improper computation and storage of some large sparse matrices, such as the mass matrix and the stiffness matrix, the method is difficult to apply to seismic modelling and RTM for a large velocity model. To solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and attempt to simplify the operations by solving the linear equations with CULA solver. To improve the computation efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.
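The two storage ideas mentioned, compressed sparse row (CSR) format for the large sparse matrices and a lumped (diagonal) mass matrix, can be illustrated in plain Python. This is a small didactic example; the paper's implementation runs on the GPU and solves the linear systems with the CULA solver:

```python
def to_csr(dense):
    """Compress a dense matrix into the three CSR arrays: 'data' holds
    the nonzeros row by row, 'indices' their column numbers, and
    indptr[i]:indptr[i+1] delimits row i inside the other two."""
    data, indices, indptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                indices.append(j)
        indptr.append(len(data))
    return data, indices, indptr

def lump_mass(dense):
    """Row-sum 'lumping': replace the consistent mass matrix by its
    diagonal, so inverting it reduces to element-wise division."""
    return [sum(row) for row in dense]

# tridiagonal toy mass matrix of the kind assembled in EFM/FEM codes
M = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
data, indices, indptr = to_csr(M)
diag = lump_mass(M)
```

CSR stores only the nonzeros plus row offsets, which is what makes large stiffness and mass matrices tractable; lumping goes further and removes the linear solve for the mass matrix entirely.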

  18. Investigation of the Behavior of the Co-C Eutectic Fixed Point

    Science.gov (United States)

    Girard, F.; Battuello, M.; Florio, M.

    2007-12-01

    The behavior of the Co-C eutectic fixed point was investigated at INRIM. Several cells of different design and volume, filled with cobalt of different purity, were constructed and investigated with both Pt/Pd thermocouples and radiation thermometers. The melting behavior was investigated with respect to the melting rate, the pre-freezing rate, and the annealing time. The melting temperatures, as defined, were not significantly affected by the different testing conditions, even if the shape and duration of the plateaux were influenced. Several tens of melt and freeze cycles were performed with the different cells. The spread in the results for all of the different conditions was very limited in extent, giving rise to a standard deviation of less than 0.04 °C; a repeatability of better than 0.02 °C was found with both Pt/Pd thermocouples and radiation thermometers. The results of our measurements are encouraging and confirm the suitability of Co-C as a reference point for the high-temperature range in a possible future temperature scale. Investigations of long-term stability remain ongoing.

  19. Effects of point massage of liver and stomach channel combined with pith and trotter soup on postpartum lactation start time.

    Science.gov (United States)

    Luo, Qiong; Hu, Yin; Zhang, Hui

    2017-10-01

    Delay in lactation initiation causes maternal anxiety and has a subsequent adverse impact on exclusive maternal breast feeding. It is important to explore a safe and convenient way to promote lactation initiation. The feasibility of point massage of the liver and stomach channel combined with pith and trotter soup for the prevention of delayed lactation initiation was investigated in the present study. 320 women were enrolled and randomly divided into four groups, a control group (80 women), a point massage group (80 women), a pith and trotter soup group (80 women), and a massage + soup group (80 women), to compare the lactation initiation time. We found that women in the point massage group, the pith and trotter soup group and the massage + soup group had earlier initiation of lactation compared with the control group. Women in the massage + soup group had the earliest initiation time of lactation. There were significant differences between the massage + soup group and the pith and trotter soup group, but no significant differences between the massage + soup group and the massage group. We conclude that point massage of the liver and stomach channel is easy to perform and has a preventive effect on delayed lactation initiation. Impact statement What is already known on this subject: Initiation of lactation is a critical period in postpartum milk secretion. Delays in lactation initiation lead to maternal anxiety and have an adverse impact on exclusive maternal breastfeeding. Frequent suckling by babies and mammary massage might be effective but insufficient for delayed lactation initiation. What the results of this study add: We found in the present study that lactation initiation is significantly earlier in women receiving routine nursing combined with point massage of the liver and stomach channel, or pith and trotter soup, or massage of the liver and stomach channel with pith and trotter soup than in a control group receiving routine nursing. These three methods are all effective, while the most

  20. Fixed Points of Expansive Type Mappings in 2-Banach Spaces

    Directory of Open Access Journals (Sweden)

    Prabha Chouhan

    2013-08-01

    Full Text Available In the present paper, we define expansive mappings in 2-Banach spaces and prove some common unique fixed point theorems which extend the results of Wang et al. [12] and Rhoades [9] to 2-Banach spaces.

  1. Best Proximity Point Results in Complex Valued Metric Spaces

    Directory of Open Access Journals (Sweden)

    Binayak S. Choudhury

    2014-01-01

    complex valued metric spaces. We treat the problem as that of finding the global optimal solution of a fixed point equation although the exact solution does not in general exist. We also define and use the concept of P-property in such spaces. Our results are illustrated with examples.

  2. Positive facial expressions during retrieval of self-defining memories.

    Science.gov (United States)

    Gandolphe, Marie Charlotte; Nandrino, Jean Louis; Delelis, Gérald; Ducro, Claire; Lavallee, Audrey; Saloppe, Xavier; Moustafa, Ahmed A; El Haj, Mohamad

    2017-11-14

    In this study, we investigated, for the first time, facial expressions during the retrieval of Self-defining memories (i.e., those vivid and emotionally intense memories of enduring concerns or unresolved conflicts). Participants self-rated the emotional valence of their Self-defining memories, and autobiographical retrieval was analyzed with facial analysis software. This software (FaceReader) synthesizes facial expression information (i.e., cheek, lip, and eyebrow muscles) to describe and categorize facial expressions (i.e., neutral, happy, sad, surprised, angry, scared, and disgusted facial expressions). We found that participants showed more emotional than neutral facial expressions during the retrieval of Self-defining memories. We also found that participants showed more positive than negative facial expressions during the retrieval of Self-defining memories. Interestingly, participants attributed positive valence to the retrieved memories. These findings are the first to demonstrate the consistency between facial expressions and the emotional subjective experience of Self-defining memories, and they provide valuable physiological information about the emotional experience of the past.

  3. Time perception in patients with major depressive disorder during vagus nerve stimulation.

    Science.gov (United States)

    Biermann, T; Kreil, S; Groemer, T W; Maihöfner, C; Richter-Schmiedinger, T; Kornhuber, J; Sperling, W

    2011-07-01

    Affective disorders may affect patients' time perception. Several studies have described time as a function of the frontal lobe. The activating effects of vagus nerve stimulation on the frontal lobe might also modulate time perception in patients with major depressive disorder (MDD). Time perception was investigated in 30 patients with MDD and in 7 patients with therapy-resistant MDD. In these 7 patients, a VNS system was implanted and time perception was assessed before and during stimulation. A time estimation task in which patients were asked "How many seconds have passed?" tested time perception at 4 defined time points (34 s, 77 s, 192 s and 230 s). The differences between the estimated and actual durations were calculated and used for subsequent analysis. Patients with MDD and healthy controls estimated the set time points relatively accurately. A general linear model revealed a significant main effect of group but not of age or sex. The passing of time was perceived as significantly slower in patients undergoing VNS compared to patients with MDD at all time points (T34: t = −4.2; df = 35; p differences in time perception with regard to age, sex or polarity of depression (uni- or bipolar). VNS is capable of changing the perception of time. This discovery furthers the basic research on circadian rhythms in patients with psychiatric disorders.

  4. Dosimetric analysis at ICRU reference points in HDR-brachytherapy of cervical carcinoma.

    Science.gov (United States)

    Eich, H T; Haverkamp, U; Micke, O; Prott, F J; Müller, R P

    2000-01-01

    In vivo dosimetry in the bladder and rectum, as well as determining doses at the reference points suggested in ICRU Report 38, contributes to quality assurance in HDR brachytherapy of cervical carcinoma, especially to minimizing side effects. In order to gain information regarding the radiation exposure at ICRU reference points in the rectum, bladder, ureter and regional lymph nodes, these doses were calculated (by digitalisation) by means of orthogonal radiographs of 11 applications in patients with cervical carcinoma who received primary radiotherapy. In addition, the dose at the ICRU rectum reference point was compared to the results of in vivo measurements in the rectum. The in vivo measurements were a factor of 1.5 below the doses determined for the ICRU rectum reference point (4.05 +/- 0.68 Gy versus 6.11 +/- 1.63 Gy). Reasons for this were: calibration errors, non-orthogonal radiographs, movement of applicator and probe in the time span between X-ray and application, and missing contact between the probe and the anterior rectal wall. The standard deviation of calculations at ICRU reference points was on average +/- 30%. Possible reasons for the relatively large standard deviation were difficulties in defining the points, identifying them on radiographs, and the different locations of the applicators. Although 3D CT-, US- or MR-based treatment planning using dose-volume histogram analysis is more and more established, this simple procedure of marking and digitising the ICRU reference points lengthened treatment planning by only 5 to 10 minutes. The advantages of in vivo dosimetry are its easy practicability and the possibility of determining rectum doses during radiation. The advantages of computer-aided planning at ICRU reference points are that calculations are available before radiation and that they can still be taken into account for treatment planning. Both methods should be applied in HDR brachytherapy of cervical carcinoma.

  5. Automatic entry point planning for robotic post-mortem CT-based needle placement.

    Science.gov (United States)

    Ebert, Lars C; Fürst, Martin; Ptacek, Wolfgang; Ruder, Thomas D; Gascho, Dominic; Schweitzer, Wolf; Thali, Michael J; Flach, Patricia M

    2016-09-01

    Post-mortem computed tomography guided placement of co-axial introducer needles allows for the extraction of tissue and liquid samples for histological and toxicological analyses. Automation of this process can increase the accuracy and speed of the needle placement, thereby making it more feasible for routine examinations. To speed up the planning process and increase safety, we developed an algorithm that calculates an optimal entry point and end-effector orientation for a given target point, while taking constraints such as accessibility or bone collisions into account. The algorithm identifies the best entry point for needle trajectories in three steps. First, the source CT data is prepared and bone as well as surface data are extracted and optimized. All vertices of the generated surface polygon are considered to be potential entry points. Second, all surface points are tested for validity within the defined hard constraints (reachability, bone collision as well as collision with other needles) and removed if invalid. All remaining vertices are reachable entry points and are rated with respect to needle insertion angle. Third, the vertex with the highest rating is selected as the final entry point, and the best end-effector rotation is calculated to avoid collisions with the body and already set needles. In most cases, the algorithm is sufficiently fast with approximately 5-6 s per entry point. This is the case if there is no collision between the end-effector and the body. If the end-effector has to be rotated to avoid collision, calculation times can increase up to 24 s due to the inefficient collision detection used here. In conclusion, the algorithm allows for fast and facilitated trajectory planning in forensic imaging.
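The three-step selection can be sketched compactly: enumerate candidate surface vertices, discard those that fail a hard constraint, and keep the best-rated survivor. In the minimal sketch below, an insertion-angle limit against the surface normal stands in for the full reachability and collision checks, and all coordinates and normals are invented for illustration:

```python
import math

def best_entry_point(candidates, target, max_angle_deg=45.0):
    """Pick the entry vertex whose needle path is straightest relative to
    the outward surface normal. Each candidate is (vertex, normal); the
    angle limit is a stand-in hard constraint."""
    def insertion_angle(p, n):
        d = tuple(t - c for t, c in zip(target, p))   # needle direction
        dot = sum(a * b for a, b in zip(d, n))
        norm = math.dist(p, target) * math.sqrt(sum(x * x for x in n))
        cosang = max(-1.0, min(1.0, dot / norm))
        return math.degrees(math.acos(cosang))
    valid = []
    for p, n in candidates:
        a = insertion_angle(p, n)
        if a <= max_angle_deg:            # hard constraint: else discard
            valid.append((a, p))
    return min(valid)[1] if valid else None

target = (0.0, 0.0, 0.0)
candidates = [((0.0, 0.0, 5.0), (0.0, 0.0, -1.0)),  # straight path, 0 deg
              ((3.0, 0.0, 4.0), (0.0, 0.0, -1.0)),  # oblique, ~37 deg
              ((5.0, 0.0, 1.0), (1.0, 0.0, 0.0))]   # ~169 deg: rejected
entry = best_entry_point(candidates, target)
```

Returning None when no candidate survives mirrors the algorithm's behaviour when every surface point violates a hard constraint.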

  6. A Time to Define: Making the Specific Learning Disability Definition Prescribe Specific Learning Disability

    Science.gov (United States)

    Kavale, Kenneth A.; Spaulding, Lucinda S.; Beam, Andrea P.

    2009-01-01

    Unlike other special education categories defined in U.S. law (Individuals with Disabilities Education Act), the definition of specific learning disability (SLD) has not changed since first proposed in 1968. Thus, although the operational definition of SLD has responded to new knowledge and understanding about the construct, the formal definition…

  7. Getting It Right the First Time: Defining Regionally Relevant Training Curricula and Provider Core Competencies for Point-of-Care Ultrasound Education on the African Continent.

    Science.gov (United States)

    Salmon, Margaret; Landes, Megan; Hunchak, Cheryl; Paluku, Justin; Malemo Kalisya, Luc; Salmon, Christian; Muller, Mundenga Mutendi; Wachira, Benjamin; Mangan, James; Chhaganlal, Kajal; Kalanzi, Joseph; Azazh, Aklilu; Berman, Sara; Zied, El-Sayed; Lamprecht, Hein

    2017-02-01

    Significant evidence identifies point-of-care ultrasound (PoCUS) as an important diagnostic and therapeutic tool in resource-limited settings. Despite this evidence, local health care providers on the African continent continue to have limited access to and use of ultrasound, even in potentially high-impact fields such as obstetrics and trauma. Dedicated postgraduate emergency medicine residency training programs now exist in 8 countries, yet no current consensus exists in regard to core PoCUS competencies. The current practice of transferring resource-rich PoCUS curricula and delivery methods to resource-limited health systems fails to acknowledge the unique challenges, needs, and disease burdens of recipient systems. As emergency medicine leaders from 8 African countries, we introduce a practical algorithmic approach, based on the local epidemiology and resource constraints, to curriculum development and implementation. We describe an organizational structure composed of nexus learning centers for PoCUS learners and champions on the continent to keep credentialing rigorous and standardized. Finally, we put forth 5 key strategic considerations: to link training programs to hospital systems, to prioritize longitudinal learning models, to share resources to promote health equity, to maximize access, and to develop a regional consensus on training standards and credentialing. Copyright © 2016 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  8. Time-lapse analysis of methane quantity in Mary Lee group of coal seams using filter-based multiple-point geostatistical simulation

    Science.gov (United States)

    Karacan, C. Özgen; Olea, Ricardo A.

    2013-01-01

    Coal seam degasification and its success are important for controlling methane, and thus for the health and safety of coal miners. During the course of degasification, properties of coal seams change. Thus, the changes in coal reservoir conditions and in-place gas content as well as methane emission potential into mines should be evaluated by examining time-dependent changes and the presence of major heterogeneities and geological discontinuities in the field. In this work, time-lapsed reservoir and fluid storage properties of the New Castle coal seam, Mary Lee/Blue Creek seam, and Jagger seam of Black Warrior Basin, Alabama, were determined from gas and water production history matching and production forecasting of vertical degasification wellbores. These properties were combined with isotherm and other important data to compute gas-in-place (GIP) and its change with time at borehole locations. Time-lapsed training images (TIs) of GIP and GIP difference corresponding to each coal and date were generated by using these point-wise data and Voronoi decomposition on the TI grid, which included faults as discontinuities for expansion of Voronoi regions. Filter-based multiple-point geostatistical simulations, which were preferred in this study due to anisotropies and discontinuities in the area, were used to predict time-lapsed GIP distributions within the study area. Performed simulations were used for mapping spatial time-lapsed methane quantities as well as their uncertainties within the study area.

  9. Time-resolved studies define the nature of toxic IAPP intermediates, providing insight for anti-amyloidosis therapeutics

    Science.gov (United States)

    Abedini, Andisheh; Plesner, Annette; Cao, Ping; Ridgway, Zachary; Zhang, Jinghua; Tu, Ling-Hsien; Middleton, Chris T; Chao, Brian; Sartori, Daniel J; Meng, Fanling; Wang, Hui; Wong, Amy G; Zanni, Martin T; Verchere, C Bruce; Raleigh, Daniel P; Schmidt, Ann Marie

    2016-01-01

    Islet amyloidosis by IAPP contributes to pancreatic β-cell death in diabetes, but the nature of toxic IAPP species remains elusive. Using concurrent time-resolved biophysical and biological measurements, we define the toxic species produced during IAPP amyloid formation and link their properties to the induction of rat INS-1 β-cell and murine islet toxicity. These globally flexible, low-order oligomers upregulate pro-inflammatory markers and induce reactive oxygen species. They do not bind 1-anilinonaphthalene-8-sulphonic acid and lack extensive β-sheet structure. Aromatic interactions modulate, but are not required for, toxicity. Not all IAPP oligomers are toxic; toxicity depends on their partially structured conformational states. Some anti-amyloid agents paradoxically prolong cytotoxicity by prolonging the lifetime of the toxic species. The data highlight the distinguishing properties of toxic IAPP oligomers and the common features that they share with toxic species reported for other amyloidogenic polypeptides, providing information for rational drug design to treat IAPP-induced β-cell death. DOI: http://dx.doi.org/10.7554/eLife.12977.001 PMID:27213520
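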

  10. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    Directory of Open Access Journals (Sweden)

    Menard Daniel

    2006-01-01

    Full Text Available Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
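The core cost/accuracy trade-off described above can be illustrated with a minimal float-to-fixed quantization sketch (not the authors' tool; the Q-format and test signal are invented): samples are rounded to a signed fixed-point format and the resulting accuracy is measured as a signal-to-quantization-noise ratio (SQNR).

```python
import numpy as np

# Minimal sketch of float-to-fixed conversion: quantize samples to a
# signed fixed-point format with n_frac fractional bits and measure the
# resulting accuracy as SQNR in dB.
def to_fixed(x, n_frac, n_bits=16):
    """Round to n_frac fractional bits and saturate to n_bits total."""
    scale = 2 ** n_frac
    q = np.round(x * scale)
    lo, hi = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    return np.clip(q, lo, hi).astype(np.int64)

def to_float(q, n_frac):
    return q / (2 ** n_frac)

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 10_000)              # floating-point reference signal
y = to_float(to_fixed(x, n_frac=12), 12)    # Q3.12 round trip
err = x - y
sqnr_db = 10 * np.log10(np.mean(x**2) / np.mean(err**2))
print(f"SQNR = {sqnr_db:.1f} dB")           # more fractional bits -> higher SQNR
```

An automated methodology like the one in the paper would search over `n_frac` per variable to meet an accuracy constraint at minimum execution cost.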

  12. SCAP-82, Single Scattering, Albedo Scattering, Point-Kernel Analysis in Complex Geometry

    International Nuclear Information System (INIS)

    Disney, R.K.; Vogtman, S.E.

    1987-01-01

    1 - Description of problem or function: SCAP solves for radiation transport in complex geometries using the single-scatter or albedo-scatter point kernel method. The program is designed to calculate the neutron or gamma-ray radiation level at detector points located within or outside a complex radiation scatter source geometry or a user-specified discrete scattering volume. Geometry is describable by zones bounded by intersecting quadratic surfaces, with an arbitrary maximum number of boundary surfaces per zone. Anisotropic point sources are describable as pointwise energy-dependent distributions of polar angles on a meridian; isotropic point sources may also be specified. The attenuation function for gamma rays is an exponential function on the primary source leg and the scatter leg, with a buildup-factor approximation to account for multiple scatter on the scatter leg. The neutron attenuation function is an exponential function using neutron removal cross sections on the primary source leg and the scatter leg. Line or volumetric sources can be represented as a distribution of isotropic point sources, with uncollided line-of-sight attenuation and buildup calculated between each source point and the detector point. 2 - Method of solution: A point kernel method with an anisotropic or isotropic point source representation is used; line-of-sight material attenuation and inverse-square spatial attenuation are employed between the source point and the scatter points, and between the scatter points and the detector point. A direct summation of individual point source results is obtained. 3 - Restrictions on the complexity of the problem: The SCAP program is written with completely flexible dimensioning, so that no restrictions are imposed on the number of energy groups or geometric zones. The geometric zone description is restricted to zones defined by boundary surfaces defined by the general quadratic equation or one of its degenerate forms. The only restriction in the program is that the total
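A toy single-group version of the point-kernel summation described above might look as follows (all coefficients and source strengths are invented, and SCAP's albedo-scatter and anisotropic-source machinery is omitted): uncollided flux at a detector from isotropic point sources, with exponential material attenuation, inverse-square spreading, and a crude linear buildup factor B = 1 + mu*r.

```python
import numpy as np

# Toy point-kernel sum (hypothetical numbers, single energy group).
mu = 0.06                                  # attenuation coefficient (1/cm), assumed
sources = np.array([[0.0, 0.0, 0.0],
                    [50.0, 0.0, 0.0],
                    [0.0, 80.0, 0.0]])     # source positions (cm)
strengths = np.array([1e9, 5e8, 2e9])      # source emissions (photons/s)
detector = np.array([100.0, 100.0, 0.0])

r = np.linalg.norm(sources - detector, axis=1)   # line-of-sight distances
buildup = 1.0 + mu * r                           # crude buildup-factor approximation
flux = np.sum(strengths * buildup * np.exp(-mu * r) / (4.0 * np.pi * r**2))
print(f"flux at detector = {flux:.3e} photons/cm^2/s")
```

A code like SCAP additionally traces each leg through the quadratic-surface zone geometry to accumulate per-material attenuation, rather than using a single homogeneous `mu`.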

  13. Biological reference points for fish stocks in a multispecies context

    DEFF Research Database (Denmark)

    Collie, J.S.; Gislason, Henrik

    2001-01-01

    Biological reference points (BRPs) are widely used to define safe levels of harvesting for marine fish populations. Most BRPs are either minimum acceptable biomass levels or maximum fishing mortality rates. The values of BRPs are determined from historical abundance data and the life...

  14. On stability of fixed points and chaos in fractional systems.

    Science.gov (United States)

    Edelman, Mark

    2018-02-01

    In this paper, we propose a method to calculate asymptotically period-two sinks and define the range of stability of fixed points for a variety of discrete fractional systems of the order 0 < α < 2. It is also shown that chaos is impossible in the corresponding continuous fractional systems.

  15. Evaluation of articular cartilage in patients with femoroacetabular impingement (FAI) using T2* mapping at different time points at 3.0 Tesla MRI: a feasibility study

    International Nuclear Information System (INIS)

    Apprich, S.; Mamisch, T.C.; Welsch, G.H.; Bonel, H.; Siebenrock, K.A.; Dudda, M.; Kim, Y.J.; Trattnig, S.

    2012-01-01

    To determine the feasibility of utilizing T2* mapping for assessment of early cartilage degeneration prior to surgery in patients with symptomatic femoroacetabular impingement (FAI), we compared cartilage of the hip joint in patients with FAI and in healthy volunteers using T2* mapping at 3.0 Tesla over time. Twenty-two patients (13 females and 9 males; mean age 28.1 years) with clinical signs of FAI and Toennis grade ≤ 1 on anterior-posterior x-ray and 35 healthy age-matched volunteers were examined on a 3 T MRI scanner using a flexible body coil. T2* maps were calculated from sagittal- and coronal-oriented gradient multi-echo sequences using six echoes (TR 125, TE 4.41/8.49/12.57/16.65/20.73/24.81, scan time 4.02 min), measured both at the beginning and at the end of the scan (45 min between measurements). Region-of-interest analysis was performed manually on four consecutive slices for superior and anterior cartilage. Mean T2* values were compared between patients and volunteers, as well as over time, using analysis of variance and Student's t-test. Whereas quantitative T2* values at the first measurement did not reveal significant differences between patients and volunteers, either for sagittal (p = 0.644) or coronal images (p = 0.987), a highly significant difference (p ≤ 0.004) was found between the two measurements, i.e. with time after unloading of the joint. Over time we found decreasing mean T2* values in patients, in contrast to increasing mean T2* relaxation times in volunteers. The study proved the feasibility of utilizing T2* mapping for assessment of early cartilage degeneration in the hip joint in FAI patients at 3 Tesla to predict the possible success of joint-preserving surgery. However, we suggest that the time point of the T2* measurement, used as an MR biomarker for cartilage, and the changes in T2* over time are of crucial importance when designing an MR protocol for patients with FAI. (orig.)
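The T2* values discussed above come from fitting a mono-exponential decay S(TE) = S0·exp(−TE/T2*) to the six echoes; a minimal per-voxel sketch on synthetic, noise-free data (S0 and the "true" T2* are assumed values) is:

```python
import numpy as np

# Sketch of a per-voxel T2* fit from the six echo times quoted in the
# abstract, using a log-linear least-squares fit of the decay curve.
te = np.array([4.41, 8.49, 12.57, 16.65, 20.73, 24.81])   # echo times (ms)

t2star_true, s0 = 22.0, 1000.0                 # synthetic "voxel" (assumed values)
signal = s0 * np.exp(-te / t2star_true)        # mono-exponential decay

# log S = log S0 - TE / T2*, so a degree-1 polyfit recovers -1/T2* as slope
slope, intercept = np.polyfit(te, np.log(signal), 1)
t2star_fit = -1.0 / slope
print(f"fitted T2* = {t2star_fit:.1f} ms")     # recovers 22.0 ms on noise-free data
```

On real, noisy magnitude data a nonlinear fit (or noise-floor correction) is usually preferred over the log-linear shortcut.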

  16. Plant responses, climate pivot points, and trade-offs in water-limited ecosystems

    Science.gov (United States)

    Munson, S. M.; Bunting, E.

    2017-12-01

    Ecosystem transitions and thresholds are conceptually well-defined and have become a framework to address vegetation response to climate change and land-use intensification, yet there are few approaches to define the environmental conditions which can lead to them. We demonstrate a novel climate pivot point approach using long-term monitoring data from a broad network of permanent plots, satellite imagery, and experimental treatments across the southwestern U.S. The climate pivot point identifies conditions that lead to decreased plant performance and serves as an early warning sign of increased vulnerability of crossing a threshold into an altered ecosystem state. Plant responses and climate pivot points aligned with the lifespan and structural characteristics of species, were modified by soil and landscape attributes of a site, and had non-linear dynamics in some cases. Species with strong increases in abundance when water was available were most susceptible to losses during water shortages, reinforcing plant energetic and physiological tradeoffs. Future research to uncover the heterogeneity of plant responses and climate pivot points at multiple scales can lead to greater understanding of shifts in ecosystem productivity and vulnerability to climate change.

  17. Administrative Lead Time at Navy Inventory Control Points

    National Research Council Canada - National Science Library

    Granetto, Paul

    1994-01-01

    .... We also evaluated the internal controls established for administrative lead time and the adequacy of management's implementation of the DoD Internal Management Control Program for monitoring administrative lead time...

  18. Study of the stochastic point reactor kinetic equation

    International Nuclear Information System (INIS)

    Gotoh, Yorio

    1980-01-01

    A diagrammatic technique is used to solve the stochastic point reactor kinetic equation. The method gives exact results which are derived from Fokker-Planck theory. A Green's function dressed with the clouds of noise is defined, which is a transfer function of a point reactor with fluctuating reactivity. An integral equation for the correlation function of neutron power is derived using the following assumptions: 1) the Green's function should be dressed with noise, and 2) only the ladder-type diagrams contribute to the correlation function. For a white noise and the one-delayed-neutron-group approximation, the norm of the integral equation and the variance to mean-squared ratio are obtained analytically. (author)
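The abstract's diagrammatic treatment cannot be reproduced in a few lines, but the underlying model, point-reactor kinetics with one delayed-neutron group and a fluctuating (white-noise) reactivity, can be sketched with a naive Euler-Maruyama Monte Carlo (all parameters are illustrative, not from the paper):

```python
import numpy as np

# Monte Carlo sketch of point-reactor kinetics with one delayed group:
#   dP = ((rho - beta)/Lam * P + lam * C) dt,  dC = (beta/Lam * P - lam * C) dt,
# with rho a zero-mean white noise entering as (sigma/Lam) * P * dW.
beta, lam, Lam = 0.0065, 0.08, 1e-4   # delayed fraction, decay const, generation time
sigma = 2e-5                          # reactivity noise amplitude (assumed)
dt, n_steps, n_paths = 1e-3, 2000, 500

rng = np.random.default_rng(42)
P = np.ones(n_paths)                        # neutron power, start at equilibrium
C = np.full(n_paths, beta / (lam * Lam))    # precursor equilibrium: lam*C = beta/Lam * P

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    dP = ((-beta / Lam) * P + lam * C) * dt + (sigma / Lam) * P * dW
    dC = ((beta / Lam) * P - lam * C) * dt
    P += dP
    C += dC

vmsr = P.var() / P.mean() ** 2              # variance to mean-squared ratio of power
print(f"mean power = {P.mean():.3f}, var/mean^2 = {vmsr:.3e}")
```

The paper derives this variance-to-mean-squared ratio analytically; the simulation above only estimates it by brute force.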

  19. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies

    Science.gov (United States)

    Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora

    2017-01-01

    Objective Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to (a) catalog feasibility measures/metrics and (b) propose a framework. Methods For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. Findings We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, in contrast, acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Conclusions Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization. PMID:29333105

  20. Point-and-stare operation and high-speed image acquisition in real-time hyperspectral imaging

    Science.gov (United States)

    Driver, Richard D.; Bannon, David P.; Ciccone, Domenic; Hill, Sam L.

    2010-04-01

    The design and optical performance of a small-footprint, low-power, turnkey, Point-And-Stare hyperspectral analyzer, capable of fully automated field deployment in remote and harsh environments, is described. The unit is packaged for outdoor operation in an IP56 protected air-conditioned enclosure and includes a mechanically ruggedized fully reflective, aberration-corrected hyperspectral VNIR (400-1000 nm) spectrometer with a board-level detector optimized for point and stare operation, an on-board computer capable of full system data-acquisition and control, and a fully functioning internal hyperspectral calibration system for in-situ system spectral calibration and verification. Performance data on the unit under extremes of real-time survey operation and high spatial and high spectral resolution will be discussed. Hyperspectral acquisition including full parameter tracking is achieved by the addition of a fiber-optic based downwelling spectral channel for solar illumination tracking during hyperspectral acquisition and the use of other sensors for spatial and directional tracking to pinpoint view location. The system is mounted on a Pan-And-Tilt device, automatically controlled from the analyzer's on-board computer, making the Hyperspec™ particularly adaptable for base security, border protection and remote deployments. A hyperspectral macro library has been developed to control hyperspectral image acquisition, system calibration and scene location control. The software allows the system to be operated in a fully automatic mode or under direct operator control through a GigE interface.

  1. (Re)Defining Salesperson Motivation

    DEFF Research Database (Denmark)

    Khusainova, Rushana; de Jong, Ad; Lee, Nick

    2018-01-01

    The construct of motivation is one of the central themes in selling and sales management research. Yet, to-date no review article exists that surveys the construct (both from an extrinsic and intrinsic motivation context), critically evaluates its current status, examines various key challenges...... apparent from the extant research, and suggests new research opportunities based on a thorough review of past work. The authors explore how motivation is defined, major theories underpinning motivation, how motivation has historically been measured, and key methodologies used over time. In addition......, attention is given to principal drivers and outcomes of salesperson motivation. A summarizing appendix of key articles in salesperson motivation is provided....

  2. Poisson branching point processes

    International Nuclear Information System (INIS)

    Matsuo, K.; Teich, M.C.; Saleh, B.E.A.

    1984-01-01

    We investigate the statistical properties of a special branching point process. The initial process is assumed to be a homogeneous Poisson point process (HPP). The initiating events at each branching stage are carried forward to the following stage. In addition, each initiating event independently contributes a nonstationary Poisson point process (whose rate is a specified function) located at that point. The additional contributions from all points of a given stage constitute a doubly stochastic Poisson point process (DSPP) whose rate is a filtered version of the initiating point process at that stage. The process studied is a generalization of a Poisson branching process in which random time delays are permitted in the generation of events. Particular attention is given to the limit in which the number of branching stages is infinite while the average number of added events per event of the previous stage is infinitesimal. In the special case when the branching is instantaneous this limit of continuous branching corresponds to the well-known Yule--Furry process with an initial Poisson population. The Poisson branching point process provides a useful description for many problems in various scientific disciplines, such as the behavior of electron multipliers, neutron chain reactions, and cosmic ray showers
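The continuous-branching limit mentioned above, a Yule-Furry pure-birth process started from a Poisson-distributed initial population, is easy to simulate directly; the sketch below (rates and times invented) checks the simulated mean against the theoretical E[N(t)] = E[N(0)]·e^{bt}:

```python
import numpy as np

# Gillespie-style simulation of a Yule-Furry process with Poisson initial
# population: each individual independently splits at rate b, so the total
# birth rate of a population of size n is b*n.
rng = np.random.default_rng(7)
mean_n0, b, t_end, n_runs = 20.0, 0.5, 2.0, 2000

finals = np.empty(n_runs)
for i in range(n_runs):
    n = rng.poisson(mean_n0)                  # Poisson initial population
    t = 0.0
    while n > 0:
        t += rng.exponential(1.0 / (b * n))   # time to next birth event
        if t > t_end:
            break
        n += 1
    finals[i] = n

theory = mean_n0 * np.exp(b * t_end)
print(f"simulated mean = {finals.mean():.1f}, theory = {theory:.1f}")
```

The general branching process of the abstract would additionally delay each added event through a nonstationary Poisson rate function rather than branching instantaneously.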

  3. A Data Filter for Identifying Steady-State Operating Points in Engine Flight Data for Condition Monitoring Applications

    Science.gov (United States)

    Simon, Donald L.; Litt, Jonathan S.

    2010-01-01

    This paper presents an algorithm that automatically identifies and extracts steady-state engine operating points from engine flight data. It calculates the mean and standard deviation of select parameters contained in the incoming flight data stream. If the standard deviation of the data falls below defined constraints, the engine is assumed to be at a steady-state operating point, and the mean measurement data at that point are archived for subsequent condition monitoring purposes. The fundamental design of the steady-state data filter is completely generic and applicable for any dynamic system. Additional domain-specific logic constraints are applied to reduce data outliers and variance within the collected steady-state data. The filter is designed for on-line real-time processing of streaming data as opposed to post-processing of the data in batch mode. Results of applying the steady-state data filter to recorded helicopter engine flight data are shown, demonstrating its utility for engine condition monitoring applications.
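A minimal sketch of such a filter (the thresholds, window size, and synthetic "flight data" are invented, and the paper's domain-specific outlier logic is omitted) is:

```python
import numpy as np

# Slide a window over a streamed parameter; whenever the window's standard
# deviation falls below a constraint, declare a steady-state operating point
# and archive the window mean for condition monitoring.
def steady_state_points(samples, window=50, max_std=1.0):
    archived = []
    for start in range(0, len(samples) - window + 1, window):
        w = samples[start:start + window]
        if np.std(w) < max_std:           # steady-state test on this window
            archived.append(np.mean(w))   # archive the mean measurement
    return archived

rng = np.random.default_rng(3)
steady = 500.0 + rng.normal(0, 0.2, 300)   # e.g. a stable shaft-speed segment
transient = np.linspace(500, 700, 300)     # an acceleration transient
stream = np.concatenate([steady, transient])

means = steady_state_points(stream, window=50, max_std=1.0)
print(len(means))   # only the windows in the steady segment pass the test
```

A production filter would run this per-sample (true streaming) over several parameters at once and require all of them to satisfy their constraints simultaneously.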

  4. Time-lapse misorientation maps for the analysis of electron backscatter diffraction data from evolving microstructures

    International Nuclear Information System (INIS)

    Wheeler, J.; Cross, A.; Drury, M.; Hough, R.M.; Mariani, E.; Piazolo, S.; Prior, D.J.

    2011-01-01

    A 'time-lapse misorientation map' is defined here as a map which shows the orientation change at each point in an evolving crystalline microstructure between two different times. Electron backscatter diffraction data from in situ heating experiments can be used to produce such maps, which then highlight areas of microstructural change and also yield statistics indicative of how far different types of boundary (with different misorientations) have moved.
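The per-point quantity such a map encodes can be sketched as follows: given 3x3 orientation matrices g1 and g2 at the two times, the misorientation angle is theta = arccos((tr(g1·g2^T) − 1)/2). Crystal-symmetry operators, which EBSD software would also apply, are omitted here, and the two orientations are invented:

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the z axis (stand-in for an EBSD orientation)."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

g_before = rot_z(10.0)          # orientation at time t1 (illustrative)
g_after = rot_z(25.0)           # orientation at time t2

dg = g_after @ g_before.T       # misorientation matrix between the two times
cos_theta = np.clip((np.trace(dg) - 1.0) / 2.0, -1.0, 1.0)
theta = np.degrees(np.arccos(cos_theta))
print(f"misorientation = {theta:.1f} deg")   # 15.0 deg for this pair
```

A time-lapse misorientation map is simply this angle evaluated at every pixel of the two registered EBSD maps.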

  5. Electrical conduction through surface superstructures measured by microscopic four-point probes

    DEFF Research Database (Denmark)

    Hasegawa, S.; Shiraki, I.; Tanabe, F.

    2003-01-01

    For in-situ measurements of the local electrical conductivity of well-defined crystal surfaces in ultra-high vacuum, we have developed two kinds of microscopic four-point probe methods. One involves a "four-tip STM prober," in which four independently driven tips of a scanning tunneling microscope...... (STM) are used for measurements of four-point probe conductivity. The probe spacing can be changed from 500 nm to 1 mm. The other method involves monolithic micro-four-point probes, fabricated on silicon chips, whose probe spacing is fixed around several mum. These probes are installed in scanning...

  6. INTRODUCTION Dental care utilization can be defined as the ...

    African Journals Online (AJOL)

    INTRODUCTION. Dental care utilization can be defined as the percentage of the population who access dental services over a specified period of time1. Measures of actual dental care utilization describe the percentage of the population who have seen a dentist at different time intervals. Dental disease is a serious public ...

  7. Association between time to disease progression end points and overall survival in patients with neuroendocrine tumors

    Directory of Open Access Journals (Sweden)

    Singh S

    2014-08-01

    Full Text Available Simron Singh,1 Xufang Wang,2 Calvin HL Law1 1Sunnybrook Odette Cancer Center, University of Toronto, Toronto, ON, Canada; 2Novartis Oncology, Florham Park, NJ, USA Abstract: Overall survival can be difficult to determine for slowly progressing malignancies, such as neuroendocrine tumors. We investigated whether time to disease progression is positively associated with overall survival in patients with such tumors. A literature review identified 22 clinical trials in patients with neuroendocrine tumors that reported survival probabilities for both time to disease progression (progression-free survival and time to progression and overall survival. Associations between median time to disease progression and median overall survival and between treatment effects on time to disease progression and treatment effects on overall survival were analyzed using weighted least-squares regression. Median time to disease progression was significantly associated with median overall survival (coefficient 0.595; P=0.022. In the seven randomized studies identified, the risk reduction for time to disease progression was positively associated with the risk reduction for overall survival (coefficient on −ln[HR] 0.151; 95% confidence interval −0.843, 1.145; P=0.713. The significant association between median time to disease progression and median overall survival supports the assertion that time to disease progression is an alternative end point to overall survival in patients with neuroendocrine tumors. An apparent albeit not significant trend correlates treatment effects on time to disease progression and treatment effects on overall survival. Informal surveys of physicians’ perceptions are consistent with these concepts, although additional randomized trials are needed. Keywords: neuroendocrine tumors, progression-free survival, disease progression, mortality
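The weighted least-squares step described above can be sketched with synthetic trial-level data (not the 22 reviewed trials); note that NumPy's `polyfit` applies weights to the unsquared residuals, so the square root of the weight (here, trial size) is passed to weight each trial proportionally:

```python
import numpy as np

# Hedged illustration of regressing median overall survival (OS) on median
# time to progression (TTP) across trials, weighting trials by sample size.
rng = np.random.default_rng(11)
n_trials = 22
ttp = rng.uniform(5, 30, n_trials)                 # median TTP per trial (months)
os_ = 10 + 0.6 * ttp + rng.normal(0, 2, n_trials)  # median OS (months), synthetic
w = rng.integers(50, 400, n_trials)                # trial sizes used as weights

# polyfit weights multiply the unsquared residuals, so pass sqrt(w) for WLS
slope, intercept = np.polyfit(ttp, os_, 1, w=np.sqrt(w))
print(f"OS = {intercept:.1f} + {slope:.2f} * TTP")
```

A positive, significant slope in such a fit is what supports TTP as a surrogate end point for OS.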

  8. Post-Processing in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars Vabbersgaard

    The material-point method (MPM) is a numerical method for dynamic or static analysis of solids using a discretization in time and space. The method has shown to be successful in modelling physical problems involving large deformations, which are difficult to model with traditional numerical tools...... such as the finite element method. In the material-point method, a set of material points is utilized to track the problem in time and space, while a computational background grid is utilized to obtain spatial derivatives relevant to the physical problem. Currently, the research within the material-point method......-point method. The first idea involves associating a volume with each material point and displaying the deformation of this volume. In the discretization process, the physical domain is divided into a number of smaller volumes each represented by a simple shape; here quadrilaterals are chosen for the presented...

  9. TRANSIT TIME THROUGH THE BORDER-CROSSING POINTS: THE CASE STUDY OF THE EU'S ROAD BCP WITH MOLDAVIA

    Directory of Open Access Journals (Sweden)

    Mihaela POPA

    2016-06-01

    Full Text Available This paper provides an overview of the ACROSSEE project (funded by the ERDF under the SEE program): its objectives, general methodology, and main results. Moreover, the main survey results are presented as they relate to the Romanian road and rail border-crossing points (BCPs) with the Eastern neighbouring country, Moldavia. Results include the actual status of the road BCPs, surveys on exiting and entering traffic flows, answers to specific questionnaires of truck and car drivers in relation to the origin-destination of their trips, average waiting time in queue, and time required for procedures and controls. The mathematical modelling is discussed and a provisional simulation model is developed using ARENA software. The early results are used to introduce the need for a future assessment of transit time at a road BCP, with the main purpose of substantiating strategic and operational actions for improving trade and transport crossings at the EU borders, considering the need for more vigilant and less time-consuming checks at the outside borders of the EU.
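As a toy stand-in for the ARENA simulation mentioned above, the waiting time in the queue at a single-channel border-crossing point can be sketched with Lindley's recursion (all rates are invented, not survey values):

```python
import numpy as np

# Single-server queue at a border-crossing point: each truck's wait in queue
# follows Lindley's recursion W[i] = max(0, W[i-1] + S[i-1] - A[i]),
# where S is the service (control) time and A the inter-arrival gap.
rng = np.random.default_rng(5)
n_trucks = 10_000
interarrival = rng.exponential(6.0, n_trucks)   # minutes between arrivals (assumed)
service = rng.exponential(5.0, n_trucks)        # minutes of checks per truck (assumed)

wait = np.zeros(n_trucks)
for i in range(1, n_trucks):
    wait[i] = max(0.0, wait[i - 1] + service[i - 1] - interarrival[i])

print(f"mean wait in queue = {wait.mean():.1f} min")  # M/M/1 theory: Wq = 25 min here
```

A realistic BCP model, like the ARENA one, would chain several control stages (customs, passport, inspection) with empirical, not exponential, time distributions.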

  10. Moving window segmentation framework for point clouds

    NARCIS (Netherlands)

    Sithole, G.; Gorte, B.G.H.

    2012-01-01

    As lidar point clouds become larger streamed processing becomes more attractive. This paper presents a framework for the streamed segmentation of point clouds with the intention of segmenting unstructured point clouds in real-time. The framework is composed of two main components. The first
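A toy version of such streamed segmentation (the thresholds and the scan-ordered point stream are invented; the actual framework's two components are not reproduced) might keep a moving window of recent points and open a new segment whenever the next point is farther than a gap threshold from everything in the window:

```python
import numpy as np

def stream_segments(points, gap=1.0, window=50):
    """Assign segment labels to a streamed, scan-ordered point sequence."""
    labels, recent, current = [], [], 0
    for p in points:
        if recent and min(np.linalg.norm(p - q) for q in recent) > gap:
            current += 1                        # too far from the window: new segment
            recent = []
        labels.append(current)
        recent = (recent + [p])[-window:]       # moving window of recent points
    return labels

rng = np.random.default_rng(2)
a = np.cumsum(rng.uniform(0.0, 0.3, (40, 2)), axis=0)   # dense run of points
b = a[-1] + np.array([5.0, 5.0]) + np.cumsum(rng.uniform(0.0, 0.3, (40, 2)), axis=0)
labels = stream_segments(np.vstack([a, b]))
print(max(labels) + 1)   # the large jump between runs opens a second segment
```

Because only the window is retained, memory stays bounded no matter how large the streamed cloud grows, which is the point of streamed processing.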

  11. 38 CFR 17.31 - Duty periods defined.

    Science.gov (United States)

    2010-07-01

    ... Definitions and Active Duty § 17.31 Duty periods defined. Full-time duty as a member of the Women's Army Auxiliary Corps, Women's Reserve of the Navy and Marine Corps and Women's Reserve of the Coast Guard. [34 FR..., 1996, § 17.31(b)(5) was redesignated as § 17.31. Protection of Patient Rights ...

  12. Defining a procedure for predicting the duration of the approximately isothermal segments within the proposed drying regime as a function of the drying air parameters

    Science.gov (United States)

    Vasić, M.; Radojević, Z.

    2017-08-01

    One of the main disadvantages of the recently reported method for setting up the drying regime, based on the theory of moisture migration during drying, lies in the fact that it requires a large number of isothermal experiments, each run with different drying air parameters. The main goal of this paper was to find a way to reduce the number of isothermal experiments without affecting the quality of the previously proposed calculation method. The first task was to define the lower and upper input levels, as well as the output, of the “black box” used in Box-Wilkinson's orthogonal multi-factorial experimental design. Three inputs (drying air temperature, humidity and velocity) were used within the experimental design. The output parameter of the model is the time interval between any two chosen characteristic points presented on the Deff - t curve. The second task was to calculate the output parameter for each planned experiment. The final output of the model is an equation which can predict the time interval between any two chosen characteristic points as a function of the drying air parameters. This equation is valid for any value of the drying air parameters within the area bounded by the lower and upper limiting values.
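The abstract's final regression equation is not given, but the fitting step can be sketched with an invented linear response surface in the three drying-air factors (a real Box-Wilkinson design would use structured design points, not random ones):

```python
import numpy as np

# Fit the time interval between two characteristic drying points as a
# linear function of air temperature T, humidity H and velocity V.
# Design points and responses are synthetic, for illustration only.
rng = np.random.default_rng(9)
T = rng.uniform(40, 80, 15)      # drying air temperature (deg C)
H = rng.uniform(10, 60, 15)      # relative humidity (%)
V = rng.uniform(1, 4, 15)        # air velocity (m/s)
t_int = 120 - 0.8 * T + 0.5 * H - 6.0 * V + rng.normal(0, 1, 15)  # minutes

X = np.column_stack([np.ones_like(T), T, H, V])      # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, t_int, rcond=None)
b0, bT, bH, bV = coef
print(f"t = {b0:.1f} + {bT:.2f}*T + {bH:.2f}*H + {bV:.1f}*V")
```

As in the paper, such an equation is only trusted inside the factor ranges spanned by the design's lower and upper levels.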

  13. Common fixed points of single-valued and multivalued maps

    Directory of Open Access Journals (Sweden)

    Yicheng Liu

    2005-01-01

    Full Text Available We define a new property which contains the property (EA) for a hybrid pair of single- and multivalued maps and give some new common fixed point theorems under hybrid contractive conditions. Our results extend previous ones. As an application, we give a partial answer to the problem raised by Singh and Mishra.

  14. Eclipse-Free-Time Assessment Tool for IRIS

    Science.gov (United States)

    Eagle, David

    2012-01-01

    IRIS_EFT is a scientific simulation that can be used to perform an Eclipse-Free-Time (EFT) assessment of IRIS (Infrared Imaging Surveyor) mission orbits. EFT is defined as those time intervals longer than one day during which the IRIS spacecraft is not in the Earth's shadow. Program IRIS_EFT implements a special perturbation of orbital motion to numerically integrate Cowell's form of the system of differential equations. Shadow conditions are predicted by embedding this integrator within Brent's method for finding the root of a nonlinear equation. The IRIS_EFT software models the effects of the following types of orbit perturbations on the long-term evolution and shadow characteristics of IRIS mission orbits: (1) non-spherical Earth gravity, (2) atmospheric drag, (3) point-mass gravity of the Sun, and (4) point-mass gravity of the Moon. The objective of this effort was to create an in-house computer program that would perform eclipse-free-time analysis of candidate IRIS spacecraft mission orbits in an accurate and timely fashion. The software is a suite of Fortran subroutines and data files organized as a "computational" engine that is used to accurately predict the long-term orbit evolution of IRIS mission orbits while searching for Earth shadow conditions.
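A drastically simplified version of the shadow search described above (unperturbed circular orbit, cylindrical Earth shadow, fixed Sun direction, and plain bisection standing in for Brent's method) is:

```python
import numpy as np

# Find shadow entry/exit on one circular orbit by root-finding on a
# "shadow function" that changes sign at the umbra boundary.
R_E = 6378.0                     # Earth radius, km
r_orb = 7000.0                   # circular orbit radius, km (illustrative)
period = 2 * np.pi * np.sqrt(r_orb**3 / 398600.4418)   # orbital period, s
sun_dir = np.array([1.0, 0.0, 0.0])                    # Sun direction (fixed)

def shadow_func(t):
    """Positive in sunlight, negative inside the cylindrical umbra."""
    th = 2 * np.pi * t / period
    r = r_orb * np.array([np.cos(th), np.sin(th), 0.0])
    along = r @ sun_dir
    perp = np.linalg.norm(r - along * sun_dir)
    return perp - R_E if along < 0 else 1.0   # only the anti-Sun side can be shaded

def bisect(f, a, b, tol=1e-6):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# bracket the sign changes by coarse sampling, then refine each by bisection
ts = np.linspace(0, period, 2000)
vals = np.array([shadow_func(t) for t in ts])
crossings = [bisect(shadow_func, ts[i], ts[i + 1])
             for i in range(len(ts) - 1) if vals[i] * vals[i + 1] < 0]
frac = (crossings[1] - crossings[0]) / period
print(f"shadow fraction of orbit = {frac:.3f}")
```

IRIS_EFT does the same bracketing-and-refining, but on a numerically integrated, perturbed trajectory over months of mission time, collecting the intervals with no shadow at all.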

  15. Estimation of main diversification time-points of hantaviruses using phylogenetic analyses of complete genomes.

    Science.gov (United States)

    Castel, Guillaume; Tordo, Noël; Plyusnin, Alexander

    2017-04-02

    Because of the great variability of their reservoir hosts, hantaviruses are excellent models to evaluate the dynamics of virus-host co-evolution. Intriguing questions remain about the timescale of the diversification events that influenced this evolution. In this paper we attempted to estimate the first ever timing of hantavirus diversification based on thirty five available complete genomes representing five major groups of hantaviruses and the assumption of co-speciation of hantaviruses with their respective mammal hosts. Phylogenetic analyses were used to estimate the main diversification points during hantavirus evolution in mammals while host diversification was mostly estimated from independent calibrators taken from fossil records. Our results support an earlier developed hypothesis of co-speciation of known hantaviruses with their respective mammal hosts and hence a common ancestor for all hantaviruses carried by placental mammals. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Improving multi-GNSS ultra-rapid orbit determination for real-time precise point positioning

    Science.gov (United States)

    Li, Xingxing; Chen, Xinghan; Ge, Maorong; Schuh, Harald

    2018-03-01

    Currently, with the rapid development of multi-constellation Global Navigation Satellite Systems (GNSS), real-time positioning and navigation are undergoing dramatic changes, with the potential for better performance. Providing more precise and reliable ultra-rapid orbits is critical for multi-GNSS real-time positioning, especially for the three emerging constellations Beidou, Galileo and QZSS, which are still under construction. In this contribution, we present a five-system precise orbit determination (POD) strategy to fully exploit the GPS + GLONASS + BDS + Galileo + QZSS observations from the CDDIS + IGN + BKG archives for the realization of an hourly five-constellation ultra-rapid orbit update. After adopting the optimized 2-day POD solution (updated every hour), the predicted orbit accuracy is noticeably improved for all five satellite systems in comparison to the conventional 1-day POD solution (updated every 3 h). The orbit accuracy for the BDS IGSO satellites can be improved by about 80, 45 and 50% in the radial, cross and along directions, respectively, while the corresponding improvement for the BDS MEO satellites reaches about 50, 20 and 50% in the three directions. Furthermore, multi-GNSS real-time precise point positioning (PPP) ambiguity resolution has been performed using the improved precise satellite orbits. Numerous results indicate that combined GPS + BDS + GLONASS + Galileo (GCRE) kinematic PPP ambiguity resolution (AR) solutions achieve the shortest time to first fix (TTFF) and the highest positioning accuracy in all coordinate components. With the addition of BDS, GLONASS and Galileo observations to GPS-only processing, the GCRE PPP AR solution achieves the shortest average TTFF of 11 min at a 7° cutoff elevation, while the TTFF of the GPS-only, GR, GE and GC PPP AR solutions is 28, 15, 20 and 17 min, respectively. 
As the cutoff elevation increases, the reliability and accuracy of GPS-only PPP AR solutions

  17. REAL TIME SPEED ESTIMATION FROM MONOCULAR VIDEO

    Directory of Open Access Journals (Sweden)

    M. S. Temiz

    2012-07-01

    Full Text Available In this paper, detailed studies performed in developing a real-time system for surveillance of traffic flow, which uses monocular video cameras to estimate vehicle speeds for safe travel, are presented. We assume that the studied road segment is planar and straight, that the camera is tilted downward from a bridge, and that the length of one line segment in the image is known. In order to estimate the speed of a moving vehicle from a video camera, rectification of the video images is performed to eliminate perspective effects, and then the region of interest (ROI) is determined for tracking the vehicles. Velocity vectors of a sufficient number of reference points are identified on the image of the vehicle in each video frame. For this purpose a sufficient number of points on the vehicle are selected, and these points must be accurately tracked in at least two successive video frames. In the second step, using the displacement vectors of the tracked points and the elapsed time, the velocity vectors of those points are computed. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space are then transformed to object space to find their absolute values. The accuracy of the estimated speed is approximately ±1-2 km/h. To solve the real-time speed estimation problem, the authors have written a software system in the C++ programming language. This software system has been used for all of the computations and test applications.
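The core computation described above (pixel displacements of tracked points, scaled to object space and divided by the frame interval) can be sketched in a few lines. This is an illustrative Python sketch, not the paper's C++ system; the function name, the uniform metres-per-pixel scale, and the sample point coordinates are assumptions.

```python
# Hypothetical sketch: vehicle speed from feature points tracked across two
# successive rectified video frames.
import numpy as np

def estimate_speed_kmh(pts_prev, pts_curr, frame_dt, metres_per_pixel):
    """Average speed of a vehicle from tracked feature points.

    pts_prev, pts_curr : (N, 2) pixel coordinates of the same points in two
                         successive rectified frames.
    frame_dt           : time between the frames in seconds.
    metres_per_pixel   : scale derived from a line segment of known length
                         in the rectified image.
    """
    disp_px = np.linalg.norm(np.asarray(pts_curr) - np.asarray(pts_prev), axis=1)
    speed_ms = disp_px.mean() * metres_per_pixel / frame_dt  # pixels -> m/s
    return speed_ms * 3.6                                    # m/s -> km/h

# Example: two points each moving 12 px between frames at 25 fps, 0.05 m/px
prev = [[100.0, 200.0], [110.0, 205.0]]
curr = [[112.0, 200.0], [122.0, 205.0]]
print(estimate_speed_kmh(prev, curr, frame_dt=1 / 25, metres_per_pixel=0.05))  # about 54 km/h
```

In the real system the scale varies across the rectified image and the points are tracked over many frames, but the displacement-over-time structure is the same.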

  18. A New Class of Analytic Functions Defined by Using Salagean Operator

    Directory of Open Access Journals (Sweden)

    R. M. El-Ashwah

    2013-01-01

    Full Text Available We derive some results for a new class of analytic functions defined by using the Salagean operator. We give some properties of functions in this class and obtain numerous sharp results including, for example, coefficient estimates, a distortion theorem, radii of starlikeness, convexity and close-to-convexity, extreme points, integral means inequalities, and partial sums of functions belonging to this class. Finally, we give an application involving certain fractional calculus operators that are also considered.

  19. The Clinical Role of Dual-Time-Point {sup 18}F-FDG PET/CT in Differential Diagnosis of the Thyroid Incidentaloma

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sinae; Park, Taegyu; Park, Soyeon; Pahk, Kisoo; Rhee, Seunghong; Cho, Jaehyuk; Jeong, Eugene; Kim, Sungeun; Choe, Jae Gol [Korea Univ., Seoul (Korea, Republic of)

    2014-06-15

    Thyroid incidentalomas are common findings during imaging studies including {sup 18}F-fluorodeoxyglucose ({sup 18}F-FDG) positron emission tomography/computed tomography (PET/CT) for cancer evaluation. Although the overall incidence of incidental thyroid uptake detected on PET imaging is low, clinical attention is warranted owing to the high incidence of harboring primary thyroid malignancy. We retrospectively reviewed 2,368 dual-time-point {sup 18}F-FDG PET/CT cases undertaken for cancer evaluation from November 2007 to February 2009 to determine the clinical impact of dual-time-point imaging in the differential diagnosis of thyroid incidentalomas. Focal thyroid uptake was identified in 64 PET cases, and the final diagnosis was clarified with cytology/histology in a total of 27 patients with {sup 18}F-FDG-avid incidental thyroid lesions. The maximum standardized uptake value (SUVmax) of the initial image (SUV1) and SUVmax of the delayed image (SUV2) were determined, and the retention index (RI) was calculated by dividing the difference between SUV2 and SUV1 by SUV1 (i.e., RI = [SUV2 − SUV1]/SUV1 × 100). These indices were compared between patient groups proven to have pathologically benign or malignant thyroid lesions. There was no statistically significant difference in SUV1 between benign and malignant lesions. SUV2 and RI of the malignant lesions were significantly higher than those of the benign lesions. The areas under the ROC curves showed that SUV2 and RI have the ability to discriminate between benign and malignant thyroid lesions. The predictability of dual-time-point PET parameters for thyroid malignancy was assessed by ROC curve analyses. When SUV2 of 3.9 was used as the cut-off threshold, malignancy on pathology could be predicted with a sensitivity of 87.5 % and specificity of 75 %. A thyroid lesion that shows RI greater than 12.5 % could be expected to be malignant (sensitivity 88.9 %, specificity 66.3 %). All malignant lesions showed an
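The dual-time-point indices used in the study reduce to two lines of arithmetic. The sketch below applies the retention-index formula RI = (SUV2 − SUV1)/SUV1 × 100 from the abstract together with the reported cut-offs (SUV2 > 3.9, RI > 12.5 %); the function names are illustrative and the combined rule is for demonstration only, not a clinical decision criterion.

```python
# Minimal sketch of the dual-time-point PET indices described above.
def retention_index(suv1, suv2):
    """Percent change of SUVmax between initial and delayed imaging."""
    return (suv2 - suv1) / suv1 * 100.0

def suggests_malignancy(suv1, suv2, suv2_cutoff=3.9, ri_cutoff=12.5):
    """True if either reported threshold is exceeded (illustrative only)."""
    return suv2 > suv2_cutoff or retention_index(suv1, suv2) > ri_cutoff

print(round(retention_index(3.0, 3.6), 1))  # -> 20.0 (above the 12.5 % cut-off)
print(suggests_malignancy(3.0, 3.6))        # -> True
```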

  20. Recommendations for monitoring avian populations with point counts: a case study in southeastern Brazil

    Directory of Open Access Journals (Sweden)

    Vagner Cavarzere

    2013-01-01

    Full Text Available In the northern hemisphere, bird counts have been fundamental in gathering data to understand population trends. Due to the seasonality of the northern hemisphere, counts take place during two clearly defined moments in time: the breeding season (resident birds) and winter (after migration). Depending on location, Neotropical birds may breed at any time of year, may or may not migrate, and those patterns are not necessarily synchronous among species. Also in contrast to the northern hemisphere, population trends and the impact of rapid urbanization and deforestation are unknown and unmonitored. Throughout one year, we used point counts to better understand temporal patterns of bird species richness and relative abundance in the state of São Paulo, southeastern Brazil, and to examine how to implement similar bird counts in tropical America. We counted birds twice each day on 10 point transects (20 points day-1), separated by 200 m, with a 100 m limited detection radius, in a semideciduous tropical forest. Both species richness and bird abundance were greater in the morning, but accumulation curves suggest that longer-duration afternoon counts would reach the same total species as morning counts. Species richness and bird abundance did not vary seasonally, and unique species were counted every month; relatively few species (20%) were present in all months. Most (84%) known forest species in the area were encountered. We suggest that point counts can work here as they do in the northern hemisphere. We recommend that transects include at least 20 points and that the simplest timing of bird counts would also be seasonal, using the timing of migration of austral migrants (and six months later) to coordinate counts. We propose that bird counts in Brazil, and elsewhere in Latin America, would provide data to help understand population trends, but would require greater effort than in temperate latitudes due to greater species richness and different dynamics of

  1. From Contrapuntal Music to Polyphonic Novel: Aldous Huxley’s Point Counter Point

    Directory of Open Access Journals (Sweden)

    Mevlüde ZENGİN

    2015-06-01

    Full Text Available Taken at face value, Point Counter Point (1928), written by Aldous Huxley, seems to be a novel comprising many stories of various and sundry people, reflecting their points of view about the world in which they live and the life they have been leading. However, it is this very quality of the novel that provides grounds for studying it as a polyphonic one. The novel presents to its reader an aggregate of strikingly different characters and thus a broad spectrum of contemporary society. The characters in the novel are all characterized by, and individualized with, easily recognizable physical, intellectual, emotional, psychological and moral qualities. Each of them is well-contrived through differences in social status, political views, wealth, etc. Thus, many different viewpoints, conflicting voices, and contrasting insights and ideas are heard and seen synchronically in Point Counter Point, which makes it polyphonic. Polyphony is a musical term referring to different notes and chords played at the same time to create a rhythm. It was first adopted by M. M. Bakhtin to analyze F. M. Dostoyevsky's fiction. The aim of this study is first to elucidate, in Bakhtinian thought, polyphony and then dialogism and heteroglossia, which are closely related to his concept of polyphony; and then to set forth the polyphonic qualities of Point Counter Point by studying the novel's dialogism and heteroglot qualities.

  2. Software Defined Networking Demands on Software Technologies

    DEFF Research Database (Denmark)

    Galinac Grbac, T.; Caba, Cosmin Marius; Soler, José

    2015-01-01

    Software Defined Networking (SDN) is a networking approach based on a centralized control plane architecture with standardised interfaces between control and data planes. SDN enables fast configuration and reconfiguration of the network to enhance resource utilization and service performances....... This new approach enables a more dynamic and flexible network, which may adapt to user needs and application requirements. To this end, systemized solutions must be implemented in network software, aiming to provide secure network services that meet the required service performance levels. In this paper......, we review this new approach to networking from an architectural point of view, and identify and discuss some critical quality issues that require new developments in software technologies. These issues we discuss along with use case scenarios. Here in this paper we aim to identify challenges...

  3. Real-time digital simulation of power electronics systems with Neutral Point Piloted multilevel inverter using FPGA

    Energy Technology Data Exchange (ETDEWEB)

    Rakotozafy, Mamianja [Groupe de Recherches en Electrotechnique et Electronique de Nancy (GREEN), Faculte des Sciences et Techniques, BP 70239, 54506 Vandoeuvre Cedex (France); CONVERTEAM SAS, Parc d' activites Techn' hom, 24 avenue du Marechal Juin, BP 40437, 90008 Belfort Cedex (France); Poure, Philippe [Laboratoire d' Instrumentation Electronique de Nancy (LIEN), Faculte des Sciences et Techniques, BP 70239, 54506 Vandoeuvre Cedex (France); Saadate, Shahrokh [Groupe de Recherches en Electrotechnique et Electronique de Nancy (GREEN), Faculte des Sciences et Techniques, BP 70239, 54506 Vandoeuvre Cedex (France); Bordas, Cedric; Leclere, Loic [CONVERTEAM SAS, Parc d' activites Techn' hom, 24 avenue du Marechal Juin, BP 40437, 90008 Belfort Cedex (France)

    2011-02-15

    Most current real-time simulation platforms have a minimum calculation time step of about ten microseconds, mainly due to computational limits such as processing speed, architecture adequacy and modeling complexity. Simulation of the instantaneous models of fast-switching converters therefore requires a smaller computing time step. The approach presented in this paper proposes an answer to the limited modeling accuracy and computational bandwidth of the currently available digital simulators. As an example, the authors present a low-cost, flexible and high-performance FPGA-based real-time digital simulator for a complete, complex power system with a Neutral Point Piloted (NPP) three-level inverter. The proposed real-time simulator can model the complete power system accurately and efficiently, reducing cost and physical space and avoiding any damage to the actual equipment in the case of a dysfunction of the digital controller prototype. The converter model is computed at a small fixed time step as low as 100 ns. Such a computation time step allows high-precision accounting of the gating signals and thus avoids averaging methods and event compensation. Moreover, a novel high-performance model of the NPP three-level inverter has also been proposed for FPGA implementation. The proposed FPGA-based simulator models the environment of the NPP converter: the dc link, the RLE load, and the digital controller and gating signals. FPGA-based real-time simulation results are presented and compared with offline results obtained using PLECS software. They validate the efficiency and accuracy of the modeling for the proposed high-performance FPGA-based real-time simulation approach. This paper also introduces potential new FPGA-based applications, such as a low-cost real-time simulator for power systems, by developing a library of flexible and portable models for power converters, electrical machines and drives. (author)

  4. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals

    Directory of Open Access Journals (Sweden)

    Nathan Gold

    2018-01-01

    Full Text Available Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection in noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements from a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.
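For contrast with the doubly stochastic method above, here is a sketch of the kind of conventional point-wise detector the authors improve upon: a one-sided CUSUM test that decides at each single time point whether the cumulative deviation from a reference mean has crossed a threshold. This is a textbook baseline, not the paper's algorithm; the parameter values are illustrative.

```python
# Conventional point-wise CUSUM change-point detector (baseline, not the
# doubly stochastic method of the paper). Flags a change at the first index
# where the one-sided cumulative deviation exceeds `threshold`.
def cusum_change_point(series, ref_mean, drift=0.0, threshold=5.0):
    """Return the first index where the CUSUM statistic exceeds `threshold`,
    or None if no change is flagged."""
    s = 0.0
    for i, x in enumerate(series):
        s = max(0.0, s + (x - ref_mean - drift))  # accumulate positive deviations
        if s > threshold:
            return i
    return None

data = [0.1, -0.2, 0.0, 0.3, 2.1, 2.4, 1.9, 2.2]  # mean shift begins at index 4
print(cusum_change_point(data, ref_mean=0.0, threshold=4.0))  # -> 5
```

On short, noisy series such a detector is sensitive to the choice of threshold and drift, which is precisely the weakness the joint-distribution approach of the paper addresses.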

  5. Common fixed point theorems in intuitionistic fuzzy metric spaces and L-fuzzy metric spaces with nonlinear contractive condition

    International Nuclear Information System (INIS)

    Jesic, Sinisa N.; Babacev, Natasa A.

    2008-01-01

    The purpose of this paper is to prove some common fixed point theorems for a pair of R-weakly commuting mappings defined on intuitionistic fuzzy metric spaces [Park JH. Intuitionistic fuzzy metric spaces. Chaos, Solitons and Fractals 2004;22:1039-46] and L-fuzzy metric spaces [Saadati R, Razani A, Adibi H. A common fixed point theorem in L-fuzzy metric spaces. Chaos, Solitons and Fractals, doi:10.1016/j.chaos.2006.01.023], with a nonlinear contractive condition defined by a function first introduced by Boyd and Wong [Boyd DW, Wong JSW. On nonlinear contractions. Proc Am Math Soc 1969;20:458-64]. Following Pant [Pant RP. Common fixed points of noncommuting mappings. J Math Anal Appl 1994;188:436-40] we define R-weak commutativity for a pair of mappings and then prove the main results. These results generalize some known results due to Saadati et al. and Jungck [Jungck G. Commuting maps and fixed points. Am Math Mon 1976;83:261-3]. Some examples and comments related to the preceding results are given

  6. Fixed Points on the Real numbers without the Equality Test

    DEFF Research Database (Denmark)

    Korovina, Margarita

    2002-01-01

    In this paper we present a study of definability properties of fixed points of effective operators on the real numbers without the equality test. In particular we prove that Gandy theorem holds for the reals without the equality test. This provides a useful tool for dealing with recursive...

  7. Time diary and questionnaire assessment of factors associated with academic and personal success among university undergraduates.

    Science.gov (United States)

    George, Darren; Dixon, Sinikka; Stansal, Emory; Gelb, Shannon Lund; Pheri, Tabitha

    2008-01-01

    A sample of 231 students attending a private liberal arts university in central Alberta, Canada, completed a 5-day time diary and a 71-item questionnaire assessing the influence of personal, cognitive, and attitudinal factors on success. The authors used 3 success measures: cumulative grade point average (GPA), Personal Success--each participant's rating of congruence between stated goals and progress toward those goals--and Total Success--a measure that weighted GPA and Personal Success equally. The greatest predictors of GPA were time-management skills, intelligence, time spent studying, computer ownership, less time spent in passive leisure, and a healthy diet. Predictors of Personal Success scores were clearly defined goals, overall health, personal spirituality, and time-management skills. Predictors of Total Success scores were clearly defined goals, time-management skills, less time spent in passive leisure, healthy diet, waking up early, computer ownership, and less time spent sleeping. Results suggest alternatives to traditional predictors of academic success.

  8. Repeated diffusion MRI reveals earliest time point for stratification of radiotherapy response in brain metastases

    DEFF Research Database (Denmark)

    Mahmood, Faisal; Johannesen, Helle H; Geertsen, Poul

    2017-01-01

    An imaging biomarker for early prediction of treatment response potentially provides a non-invasive tool for better prognostics and individualized management of the disease. Radiotherapy (RT) response is generally related to changes in gross tumor volume manifesting months later. In this prospective study we investigated the apparent diffusion coefficient (ADC), perfusion fraction and pseudo diffusion coefficient derived from diffusion weighted MRI as potential early biomarkers for radiotherapy response of brain metastases. It was a particular aim to assess the optimal time point

  9. TeleHealth networks: Instant messaging and point-to-point communication over the internet

    Energy Technology Data Exchange (ETDEWEB)

    Sachpazidis, Ilias [Fraunhofer Institute for Computer Graphics, Fraunhoferstr. 5, D-64283, Darmstadt (Germany)]. E-mail: Ilias.Sachpazidis@igd.fraunhofer.de; Ohl, Roland [MedCom Gesellschaft fuer medizinische Bildverarbeitung mbH, Runderturmstr. 12, D-64283, Darmstadt (Germany); Kontaxakis, George [Universidad Politecnica de Madrid, ETSI Telecomunicacion, Madrid 28040 (Spain); Sakas, Georgios [Fraunhofer Institute for Computer Graphics, Fraunhoferstr. 5, D-64283, Darmstadt (Germany)

    2006-12-20

    This paper explores the advantages and disadvantages of a medical network based on point-to-point communication and a medical network based on the Jabber instant messaging protocol. Instant messaging might be, for many people, a convenient way of chatting over the Internet. We will attempt to illustrate how an instant messaging protocol could best serve medical services and provide great flexibility to the involved parties. Additionally, the directory services and presence status offered by the Jabber protocol make it very attractive to medical applications that need to have real-time as well as store-and-forward communication. Furthermore, doctors connected to the Internet via high-speed networks could benefit by saving time due to the data transmission acceleration over Jabber.

  12. Derivation of Pal-Bell equations for two-point reactors, and its application to correlation measurements at KUCA

    International Nuclear Information System (INIS)

    Murata, Naoyuki; Yamane, Yoshihiro; Nishina, Kojiro; Shiroya, Seiji; Kanda, Keiji.

    1980-01-01

    A probability is defined for an event in which m neutrons exist at time t sub(f) in core I of a coupled-core system, originating from a neutron injected into core I at an earlier time t; we call it P sub(I,I,m)(t sub(f)/t). Similarly, P sub(I,II,m)(t sub(f)/t) is defined as the probability for m neutrons to exist in core II of the system at time t sub(f), originating from a neutron injected into core I at time t. Then a system of coupled equations is derived for the generating functions G sub(I,j)(z, t sub(f)/t) = Σ sub(m) P sub(I,j,m)(t sub(f)/t) z sup(m), where j = I, II. By similar procedures, equations are derived for the generating functions associated with the joint probability of the following events: a given combination of numbers of neutrons are detected during a given series of detection time intervals by a detector inserted in one of the cores. The above two kinds of systems of equations can be regarded as a two-point version of Pal-Bell's equations. As an application of these formulations, analysis formulas for correlation measurements, namely (1) the Feynman-alpha experiment and (2) the Rossi-alpha experiment of Orndoff type, are derived, and their feasibility is verified by experiments carried out at KUCA. (author)
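In standard mathematical notation (expanding the INIS sub/sup markup of the abstract), the generating functions read:

```latex
G_{I,j}\!\left(z,\, t_f \mid t\right)
  \;=\; \sum_{m=0}^{\infty} P_{I,j,m}\!\left(t_f \mid t\right)\, z^{m},
\qquad j = I,\ II,
```

where \(P_{I,j,m}(t_f \mid t)\) is the probability that \(m\) neutrons exist in core \(j\) at time \(t_f\), given a single neutron injected into core I at time \(t\).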

  13. Data-Driven Method for Wind Turbine Yaw Angle Sensor Zero-Point Shifting Fault Detection

    Directory of Open Access Journals (Sweden)

    Yan Pei

    2018-03-01

    Full Text Available Wind turbine yaw control plays an important role in increasing wind turbine production and also in protecting the wind turbine. Accurate measurement of the yaw angle is the basis of an effective wind turbine yaw controller. The accuracy of yaw angle measurement is affected significantly by the problem of zero-point shifting. Hence, it is essential to evaluate the zero-point shifting error on wind turbines on-line in order to improve the reliability of yaw angle measurement in real time. In particular, qualitative evaluation of the zero-point shifting error could be useful for wind farm operators to realize prompt and cost-effective maintenance of yaw angle sensors. With the aim of qualitatively evaluating the zero-point shifting error, the yaw angle sensor zero-point shifting fault is first defined in this paper. A data-driven method is then proposed to detect the zero-point shifting fault based on Supervisory Control and Data Acquisition (SCADA) data. The zero-point shifting fault is detected in the proposed method by analyzing the power performance under different yaw angles. The SCADA data are partitioned into different bins according to both wind speed and yaw angle in order to evaluate the power performance in depth. An indicator is proposed in this method for power performance evaluation under each yaw angle. The yaw angle with the largest indicator is considered to be the yaw angle measurement error in our work. A zero-point shifting fault triggers an alarm if the error is larger than a predefined threshold. Case studies from several actual wind farms proved the effectiveness of the proposed method in detecting the zero-point shifting fault and also in improving wind turbine performance. Results of the proposed method could be useful for wind farm operators to realize prompt adjustment if a large yaw angle measurement error exists.
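The bin-and-compare idea described above can be sketched as follows. This is an illustrative simplification, not the paper's exact method: instead of the paper's wind-speed binning and indicator definition, power is normalized by the cube of wind speed as a stand-in, and the function name, bin widths and alarm threshold are assumptions.

```python
# Sketch: estimate a yaw-angle zero-point shift from SCADA-like samples by
# finding the yaw bin with the best (normalized) power performance.
import numpy as np

def estimate_yaw_shift(yaw_err, wind_speed, power, yaw_bins, alarm_deg=4.0):
    """Return (estimated zero-point shift [deg], alarm flag).

    yaw_err    : measured yaw misalignment samples [deg]
    wind_speed : wind speed samples [m/s]
    power      : output power samples [kW]
    yaw_bins   : bin edges for the yaw angle [deg]
    """
    # Normalize power by wind speed cubed so samples from different wind
    # conditions are comparable (simplified stand-in for wind-speed binning).
    cp_like = power / np.maximum(wind_speed, 1e-6) ** 3
    centres = 0.5 * (yaw_bins[:-1] + yaw_bins[1:])
    idx = np.digitize(yaw_err, yaw_bins) - 1
    indicator = np.array([
        cp_like[idx == k].mean() if (idx == k).any() else -np.inf
        for k in range(len(centres))
    ])
    shift = centres[np.argmax(indicator)]  # yaw bin with best performance
    return shift, abs(shift) > alarm_deg

# Synthetic example: true optimum shifted to +5 deg, so the sensor zero point
# is off by 5 deg and should trigger the alarm.
rng = np.random.default_rng(0)
yaw = rng.uniform(-10, 10, 2000)
ws = rng.uniform(5, 12, 2000)
pw = ws ** 3 * np.cos(np.radians(yaw - 5.0)) ** 3
shift, alarm = estimate_yaw_shift(yaw, ws, pw, np.arange(-10.0, 11.0, 2.0))
```

The paper's per-wind-speed-bin indicator plays the role of `cp_like` here; the argmax-over-yaw-bins step and the threshold alarm are the same in structure.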

  14. Legibility Evaluation Using Point-of-regard Measurement

    Science.gov (United States)

    Saito, Daisuke; Saito, Keiichi; Saito, Masao

    Web site visibility has become important because of the rapid dissemination of the World Wide Web, and combinations of foreground and background colors are crucial in providing high visibility. In our previous studies, the visibilities of several web-safe color combinations were examined using a psychological method. In those studies, simple stimuli were used because of experimental restrictions. In this paper, the legibility of sentences on web sites was examined using a psychophysiological method, point-of-regard measurement, to obtain additional practical data. Ten people with normal color vision, ranging in age from 21 to 29, were recruited. The number of characters per line on each page was kept the same, and the four representative achromatic web-safe colors, that is, #000000, #666666, #999999 and #CCCCCC, were examined. The reading time per character and the gaze time per line were obtained from point-of-regard measurement, and the normalized reading times and gaze times of the three colors were calculated and compared. The results showed that reading time and gaze time lengthen at the same rate as contrast decreases. Therefore, it was indicated that the legibility of color combinations can be estimated by point-of-regard measurement.

  15. Defining biochemical recurrence after radical prostatectomy and timing of early salvage radiotherapy. Informing the debate

    International Nuclear Information System (INIS)

    Budaeus, Lars; Schiffmann, Jonas; Graefen, Markus; Huland, Hartwig; Tennstedt, Pierre; Siegmann, Alessandra; Boehmer, Dirk; Budach, Volker; Bartkowiak, Detlef; Wiegel, Thomas

    2017-01-01

    The optimal prostate-specific antigen (PSA) level after radical prostatectomy (RP) for defining biochemical recurrence and initiating salvage radiation therapy (SRT) is still debatable. Whereas adjuvant or extremely early SRT irrespective of PSA progression might be overtreatment for some patients, SRT at PSA >0.2 ng/ml might be undertreatment for others. The current study addresses the optimal timing of radiation therapy after RP. Cohort 1 comprised 293 men with PSA 0.1-0.19 ng/ml after RP. Cohort 2 comprised 198 men with SRT. PSA progression and metastases were assessed in cohort 1. In cohort 2, we compared freedom from progression according to pre-SRT PSA (0.03-0.19 vs. 0.2-0.499 ng/ml). Multivariable Cox regression analyses predicted progression after SRT. In cohort 1, 281 (95.9%) men had further PSA progression ≥0.2 ng/ml and 27 (9.2%) men developed metastases within a median follow-up of 74.3 months. In cohort 2, we recorded improved freedom from progression according to lower pre-SRT PSA (0.03-0.19 vs. 0.2-0.499 ng/ml: 69 vs. 53%; log-rank p = 0.051). Patients with higher pre-SRT PSA ≥0.2 ng/ml were at a higher risk of progression after SRT (hazard ratio: 1.8; p

  16. A Real-Time Capable Software-Defined Receiver Using GPU for Adaptive Anti-Jam GPS Sensors

    Science.gov (United States)

    Seo, Jiwon; Chen, Yu-Hsuan; De Lorenzo, David S.; Lo, Sherman; Enge, Per; Akos, Dennis; Lee, Jiyun

    2011-01-01

    Due to their weak received signal power, Global Positioning System (GPS) signals are vulnerable to radio frequency interference. Adaptive beam and null steering of the gain pattern of a GPS antenna array can significantly increase the resistance of GPS sensors to signal interference and jamming. Since adaptive array processing requires intensive computational power, beamsteering GPS receivers were usually implemented using hardware such as field-programmable gate arrays (FPGAs). However, a software implementation using general-purpose processors is much more desirable because of its flexibility and cost effectiveness. This paper presents a GPS software-defined radio (SDR) with adaptive beamsteering capability for anti-jam applications. The GPS SDR design is based on an optimized desktop parallel processing architecture using a quad-core Central Processing Unit (CPU) coupled with a new generation Graphics Processing Unit (GPU) having massively parallel processors. This GPS SDR demonstrates sufficient computational capability to support a four-element antenna array and future GPS L5 signal processing in real time. After providing the details of our design and optimization schemes for future GPU-based GPS SDR developments, the jamming resistance of our GPS SDR under synthetic wideband jamming is presented. Since the GPS SDR uses commercial-off-the-shelf hardware and processors, it can be easily adopted in civil GPS applications requiring anti-jam capabilities. PMID:22164116

  17. A Real-Time Capable Software-Defined Receiver Using GPU for Adaptive Anti-Jam GPS Sensors

    Directory of Open Access Journals (Sweden)

    Dennis Akos

    2011-09-01

    Full Text Available Due to their weak received signal power, Global Positioning System (GPS) signals are vulnerable to radio frequency interference. Adaptive beam and null steering of the gain pattern of a GPS antenna array can significantly increase the resistance of GPS sensors to signal interference and jamming. Since adaptive array processing requires intensive computational power, beamsteering GPS receivers were usually implemented using hardware such as field-programmable gate arrays (FPGAs). However, a software implementation using general-purpose processors is much more desirable because of its flexibility and cost effectiveness. This paper presents a GPS software-defined radio (SDR) with adaptive beamsteering capability for anti-jam applications. The GPS SDR design is based on an optimized desktop parallel processing architecture using a quad-core Central Processing Unit (CPU) coupled with a new generation Graphics Processing Unit (GPU) having massively parallel processors. This GPS SDR demonstrates sufficient computational capability to support a four-element antenna array and future GPS L5 signal processing in real time. After providing the details of our design and optimization schemes for future GPU-based GPS SDR developments, the jamming resistance of our GPS SDR under synthetic wideband jamming is presented. Since the GPS SDR uses commercial-off-the-shelf hardware and processors, it can be easily adopted in civil GPS applications requiring anti-jam capabilities.
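
    The adaptive beam and null steering summarized above can be illustrated with a minimal MVDR (minimum variance distortionless response) beamformer. This is a generic sketch under assumed parameters (a 2-element half-wavelength array, a single jammer modeled through its covariance, unit noise power); the function names and numbers are hypothetical, not the authors' GPU implementation.

```python
import cmath
import math

def steering(theta_deg):
    """Steering vector of a 2-element array with half-wavelength spacing
    (illustrative geometry, not the paper's 4-element array)."""
    phi = math.pi * math.sin(math.radians(theta_deg))
    return [1 + 0j, cmath.exp(-1j * phi)]

def covariance(a_jam, p_jam=100.0, noise=1.0):
    """Interference-plus-noise covariance R = noise*I + p_jam * a a^H."""
    R = [[noise + 0j if i == j else 0j for j in range(2)] for i in range(2)]
    for i in range(2):
        for j in range(2):
            R[i][j] += p_jam * a_jam[i] * a_jam[j].conjugate()
    return R

def mat_inv2(R):
    """Inverse of a 2x2 complex matrix."""
    (a, b), (c, d) = R
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def herm_dot(x, y):
    """Hermitian inner product x^H y."""
    return sum(xi.conjugate() * yi for xi, yi in zip(x, y))

def mvdr_weights(R, s):
    """MVDR weights w = R^-1 s / (s^H R^-1 s): unit gain toward the desired
    direction s, interference power minimized (null steered onto the jammer)."""
    Ri = mat_inv2(R)
    Ri_s = [Ri[0][0] * s[0] + Ri[0][1] * s[1],
            Ri[1][0] * s[0] + Ri[1][1] * s[1]]
    gain = herm_dot(s, Ri_s)
    return [x / gain for x in Ri_s]
```

    With these illustrative numbers (desired signal at broadside, jammer at 30°), the beamformer keeps unit gain toward the GPS direction while the response toward the jammer drops below 0.01.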

  18. Real-Time Tropospheric Delay Estimation using IGS Products

    Science.gov (United States)

    Stürze, Andrea; Liu, Sha; Söhne, Wolfgang

    2014-05-01

    The Federal Agency for Cartography and Geodesy (BKG) has routinely provided zenith tropospheric delay (ZTD) parameters for assimilation into numerical weather models for more than 10 years. Up to now, the results flowing into the EUREF Permanent Network (EPN) or E-GVAP (EUMETNET EIG GNSS water vapour programme) analyses have been based on batch processing of GPS+GLONASS observations in differential network mode. For the recently started COST Action ES1206 on "Advanced Global Navigation Satellite Systems tropospheric products for monitoring severe weather events and climate" (GNSS4SWEC), however, rapid updates in the analysis of the atmospheric state for nowcasting applications require changing the processing strategy towards real-time. In the RTCM SC104 (Radio Technical Commission for Maritime Services, Special Committee 104), a format combining the advantages of Precise Point Positioning (PPP) and Real-Time Kinematic (RTK) is under development. This so-called State Space Representation approach defines corrections that are transferred to the user in real time, e.g. via NTRIP (Network Transport of RTCM via Internet Protocol). Messages for precise orbits, satellite clocks and code biases compatible with the basic PPP mode using IGS products are already defined. Consequently, the IGS Real-Time Service (RTS) was launched in 2013 in order to extend the well-known precise orbit and clock products by a real-time component. Further messages, e.g. with respect to ionosphere or phase biases, are foreseen. Depending on the level of refinement, different accuracies up to the RTK level should be reachable. In co-operation between BKG and the Technical University of Darmstadt, the real-time software GEMon (GREF EUREF Monitoring) is under development. GEMon is able to process GPS and GLONASS observation and RTS product data streams in PPP mode. Furthermore, several state-of-the-art troposphere models, for example based on numerical weather prediction data, are implemented.

  19. Point Measurements of Fermi Velocities by a Time-of-Flight Method

    DEFF Research Database (Denmark)

    Falk, David S.; Henningsen, J. O.; Skriver, Hans Lomholt

    1972-01-01

    The present paper describes in detail a new method of obtaining information about the Fermi velocity of electrons in metals, point by point, along certain contours on the Fermi surface. It is based on transmission of microwaves through thin metal slabs in the presence of a static magnetic field...... applied parallel to the surface. The electrons carry the signal across the slab and arrive at the second surface with a phase delay which is measured relative to a reference signal; the velocities are derived by analyzing the magnetic field dependence of the phase delay. For silver we have in this way...... obtained one component of the velocity along half the circumference of the centrally symmetric orbit for B ∥ [100]. The results are in agreement with current models for the Fermi surface. For B ∥ [011], the electrons involved are not moving in a symmetry plane of the Fermi surface. In such cases one cannot...

  20. Near Real Time Change-Point detection in Optical and Thermal Infrared Time Series Using Bayesian Inference over the Dry Chaco Forest

    Science.gov (United States)

    Barraza Bernadas, V.; Grings, F.; Roitberg, E.; Perna, P.; Karszenbaum, H.

    2017-12-01

    The Dry Chaco region (DCF) has the highest absolute deforestation rates of all Argentinian forests. The most recent report indicates a current deforestation rate of 200,000 ha year-1. In order to better monitor this process, DCF was chosen to implement an early warning program for illegal deforestation. Although the area is intensively studied using medium resolution imagery (Landsat), the products obtained are updated at a yearly pace and are therefore unsuited for an early warning program. In this paper, we evaluated the performance of an online Bayesian change-point detection algorithm for MODIS Enhanced Vegetation Index (EVI) and Land Surface Temperature (LST) datasets. The goal was to monitor the abrupt changes in vegetation dynamics associated with deforestation events. We tested this model by simulating 16-day EVI and 8-day LST time series with varying amounts of seasonality, noise and length of the time series, and by adding abrupt changes with different magnitudes. The model was then tested on real satellite time series available through the Google Earth Engine, over a pilot area in DCF, where deforestation was common in the 2004-2016 period. A comparison with yearly benchmark products based on Landsat images is also presented (REDAF dataset). The results show the advantages of using an automatic model to detect change points in the time series rather than relying on visual inspection techniques alone. Simulating time series with varying amounts of seasonality and noise, and adding abrupt changes at different times and magnitudes, revealed that this model is robust against noise and is not influenced by changes in amplitude of the seasonal component. Furthermore, the results compared favorably with the REDAF dataset (near 65% of agreement). These results show the potential of combining LST and EVI to identify deforestation events. This work is being developed within the frame of the national Forest Law for the protection and sustainable development of Native Forest in Argentina.
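
    The online Bayesian change-point idea can be sketched with a minimal run-length filter in the spirit of Adams-MacKay-style detection. The constant hazard rate, known observation noise and the synthetic step series used below are assumptions for illustration, not the authors' actual EVI/LST model.

```python
import math

def bocpd_map_run_length(data, hazard=0.02, mu0=0.0, kappa0=1.0, sigma=1.0):
    """Online Bayesian change-point detection via run-length filtering.
    Returns the MAP run length after each observation; a drop back to a
    small value signals a detected change."""
    R = [1.0]                       # posterior over run lengths
    mus, kappas = [mu0], [kappa0]   # conjugate stats for the Gaussian mean
    map_run = []
    for x in data:
        # predictive probability of x under each run-length hypothesis
        pred = []
        for mu, k in zip(mus, kappas):
            var = sigma ** 2 * (1.0 + 1.0 / k)
            pred.append(math.exp(-(x - mu) ** 2 / (2 * var))
                        / math.sqrt(2 * math.pi * var))
        # grow each run length with prob (1 - hazard), reset with prob hazard
        growth = [r * p * (1.0 - hazard) for r, p in zip(R, pred)]
        cp = sum(r * p * hazard for r, p in zip(R, pred))
        R = [cp] + growth
        z = sum(R)
        R = [r / z for r in R]
        # update the mean estimate carried by each hypothesis
        mus = [mu0] + [(k * mu + x) / (k + 1.0) for mu, k in zip(mus, kappas)]
        kappas = [kappa0] + [k + 1.0 for k in kappas]
        map_run.append(max(range(len(R)), key=R.__getitem__))
    return map_run
```

    On a synthetic step series (a stable level followed by an abrupt jump, loosely analogous to an EVI drop at a deforestation event), the MAP run length grows steadily and then collapses right after the break.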

  1. A Real-Time GPP Software-Defined Radio Testbed for the Physical Layer of Wireless Standards

    NARCIS (Netherlands)

    Schiphorst, Roelof; Hoeksema, F.W.; Slump, Cornelis H.

    2005-01-01

    We present our contribution to the general-purpose-processor-(GPP)-based radio. We describe a baseband software-defined radio testbed for the physical layer of wireless LAN standards. All physical layer functions have been successfully mapped on a Pentium 4 processor that performs these functions in

  2. Big Rock Point severe accident management strategies

    International Nuclear Information System (INIS)

    Brogan, B.A.; Gabor, J.R.

    1996-01-01

    In December 1994, the Nuclear Energy Institute (NEI) issued guidance relative to the formal industry position on Severe Accident Management (SAM) approved by the NEI Strategic Issues Advisory Committee on November 4, 1994. This paper summarizes how Big Rock Point (BRP) has addressed, and continues to address, SAM strategies. The historical portion of this presentation describes how the following projects identified and defined the current Big Rock Point SAM strategies: the 1981 Level 3 Probabilistic Risk Assessment; the development of the Plant Specific Technical Guidelines, from which the symptom-oriented Emergency Operating Procedures (EOPs) were developed; the Control Room Design Review; and the recent completion of the Individual Plant Evaluation (IPE). In addition to this historical account, the paper describes the present activities that continue to stress SAM strategies

  3. Duan's fixed point theorem: proof and generalization

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available Let X be an H-space of the homotopy type of a connected, finite CW-complex, f: X → X any map and p_k: X → X the k-th power map. Duan proved that p_k f: X → X has a fixed point if k ≥ 2. We give a new, short and elementary proof of this. We then use rational homotopy to generalize to spaces X whose rational cohomology is the tensor product of an exterior algebra on odd-dimensional generators with the tensor product of truncated polynomial algebras on even-dimensional generators. The role of the power map is played by a θ-structure μ_θ: X → X as defined by Hemmi-Morisugi-Ooshima. The conclusion is that μ_θ f and f μ_θ each has a fixed point.

  4. Determinantal point process models on the sphere

    DEFF Research Database (Denmark)

    Møller, Jesper; Nielsen, Morten; Porcu, Emilio

    We consider determinantal point processes on the d-dimensional unit sphere Sd. These are finite point processes exhibiting repulsiveness and with moment properties determined by a certain determinant whose entries are specified by a so-called kernel which we assume is a complex covariance function defined on Sd × Sd. We review the appealing properties of such processes, including their specific moment properties, density expressions and simulation procedures. Particularly, we characterize and construct isotropic DPP models on Sd, where it becomes essential to specify the eigenvalues and eigenfunctions in a spectral representation for the kernel, and we figure out how repulsive isotropic DPPs can be. Moreover, we discuss the shortcomings of adapting existing models for isotropic covariance functions and consider strategies for developing new models, including a useful spectral approach.

  5. Inequalities for trace anomalies, length of the RG flow, distance between the fixed points and irreversibility

    International Nuclear Information System (INIS)

    Anselmi, Damiano

    2004-01-01

    I discuss several issues about the irreversibility of the RG flow and the trace anomalies c, a and a'. First I argue that in quantum field theory: (i) the scheme-invariant area Δ a' of the graph of the effective beta function between the fixed points defines the length of the RG flow; (ii) the minimum of Δ a' in the space of flows connecting the same UV and IR fixed points defines the (oriented) distance between the fixed points and (iii) in even dimensions, the distance between the fixed points is equal to Δ a = a UV - a IR . In even dimensions, these statements imply the inequalities 0 ≤ Δ a ≤ Δ a' and therefore the irreversibility of the RG flow. Another consequence is the inequality a ≤ c for free scalars and fermions (but not vectors), which can be checked explicitly. Secondly, I elaborate a more general axiomatic set-up where irreversibility is defined as the statement that there exist no pairs of non-trivial flows connecting interchanged UV and IR fixed points. The axioms, based on the notions of length of the flow, oriented distance between the fixed points and certain 'oriented-triangle inequalities', imply the irreversibility of the RG flow without a global a function. I conjecture that the RG flow is also irreversible in odd dimensions (without a global a function). In support of this, I check the axioms of irreversibility in a class of d = 3 theories where the RG flow is integrable at each order of the large N expansion

  6. Identifying populations at risk from environmental contamination from point sources

    OpenAIRE

    Williams, F; Ogston, S

    2002-01-01

    Objectives: To compare methods for defining the population at risk from a point source of air pollution. A major challenge for environmental epidemiology lies in correctly identifying populations at risk from exposure to environmental pollutants. The complexity of today's environment makes it essential that the methods chosen are accurate and sensitive.

  7. A Gestalt Point of View on Facilitating Growth in Counseling

    Science.gov (United States)

    Harman, Robert L.

    1975-01-01

    If counselors are to be facilitators of client growth, it would seem essential that they become familiar with the concept of growth and ways to facilitate it. The author defines growth from a gestalt therapy point of view and provides techniques and examples of ways to facilitate client growth. (Author)

  8. Clinically Relevant Cut-off Points for the Diagnosis of Sarcopenia in Older Korean People.

    Science.gov (United States)

    Choe, Yu-Ri; Joh, Ju-Youn; Kim, Yeon-Pyo

    2017-11-09

    The optimal sarcopenia criteria for older Korean people have not been defined. We aimed to define clinically relevant cut-off points for older Korean people and to compare their predictive validity with other definitions of sarcopenia. Nine hundred and sixteen older Koreans (≥65 years) were included in this cross-sectional observational study. We used conditional inference tree analysis to determine cut-off points for height-adjusted grip strength (GS) and appendicular skeletal muscle mass (ASM), for use in the diagnosis of sarcopenia. We then compared the Korean sarcopenia criteria with the Foundation for the National Institutes of Health and Asian Working Group for Sarcopenia criteria, using frailty, assessed with the Korean Frailty Index, as an outcome variable. For men, a residual GS (GSre) of ≤ 0.25 was defined as weak, and a residual ASM (ASMre) of ≤ 1.29 was defined as low. Corresponding cut-off points for women were a GSre of ≤ 0.17 and an ASMre of ≤ 0.69. GSre and ASMre values were adjusted for height. In logistic regression analysis with the new cut-off points, the adjusted odds ratios for pre-frail or frail status in the sarcopenia group were 3.23 (95% confidence interval [CI] 1.33-7.83) for the men and 1.74 (95% CI 0.91-3.35) for the women. In receiver operating characteristic curve analysis, the unadjusted areas under the curve for the Korean sarcopenia criteria in men and women were 0.653 and 0.608, respectively, supporting the use of these cut-off points for diagnosing sarcopenia in older Korean people. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Interactive-cut: Real-time feedback segmentation for translational research.

    Science.gov (United States)

    Egger, Jan; Lüddemann, Tobias; Schwarzenberg, Robert; Freisleben, Bernd; Nimsky, Christopher

    2014-06-01

    In this contribution, a scale-invariant image segmentation algorithm is introduced that "wraps" the algorithm's parameters for the user through its interactive behavior, avoiding the definition of "arbitrary" numbers that the user cannot really understand. Therefore, we designed a specific graph-based segmentation method that requires only a single seed point inside the target structure from the user and is thus particularly suitable for immediate processing and interactive, real-time adjustments by the user. In addition, the color or gray value information needed for the approach can be automatically extracted around the user-defined seed point. Furthermore, the graph is constructed in such a way that a polynomial-time mincut computation can provide the segmentation result within a second on an up-to-date computer. The algorithm presented here has been evaluated with fixed seed points on 2D and 3D medical image data, such as brain tumors, cerebral aneurysms and vertebral bodies. Direct comparison of the obtained automatic segmentation results with costlier, manual slice-by-slice segmentations performed by trained physicians suggests a strong medical relevance of this interactive approach. Copyright © 2014 Elsevier Ltd. All rights reserved.
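
    The seed-based graph-cut idea can be sketched in miniature: a 1-D "image", Gaussian similarity to the seed intensity as the region term, a smoothness term between neighbours, and a hand-rolled Edmonds-Karp max-flow whose residual source side is the segment. All weights and parameters here are illustrative assumptions, not the authors' construction.

```python
import math
from collections import defaultdict, deque

def build_graph(pixels, seed, sigma=0.1, lam=0.3, eps=1e-6):
    """s-t graph for seed-based segmentation of a 1-D intensity array:
    t-links encode similarity to the seed intensity, n-links smoothness."""
    seed_val = pixels[seed]
    cap = defaultdict(lambda: defaultdict(float))
    def add(u, v, c):
        if c > eps:                      # drop negligible capacities
            cap[u][v] += c
    for i, val in enumerate(pixels):
        sim = math.exp(-((val - seed_val) ** 2) / (2 * sigma ** 2))
        add('S', i, sim)                 # region term: looks like the seed
        add(i, 'T', 1.0 - sim)           # region term: looks like background
        if i + 1 < len(pixels):
            w = lam * math.exp(-((val - pixels[i + 1]) ** 2) / (2 * sigma ** 2))
            add(i, i + 1, w)             # boundary (smoothness) term
            add(i + 1, i, w)
    cap['S'][seed] = math.inf            # hard constraint: seed is foreground
    return cap

def min_cut_segment(cap):
    """Edmonds-Karp max-flow; the source side of the min cut is the segment."""
    flow = defaultdict(lambda: defaultdict(float))
    def bfs():
        parent = {'S': None}
        q = deque(['S'])
        while q and 'T' not in parent:
            u = q.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        return parent if 'T' in parent else None
    while True:
        parent = bfs()
        if parent is None:
            break
        path, v = [], 'T'
        while parent[v] is not None:     # walk the augmenting path back to S
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][w] - flow[u][w] for u, w in path)
        for u, w in path:                # push bottleneck, allow cancellation
            flow[u][w] += b
            flow[w][u] -= b
    seg, q = {'S'}, deque(['S'])
    while q:                             # residual reachability from the source
        u = q.popleft()
        for v in cap[u]:
            if v not in seg and cap[u][v] - flow[u][v] > 1e-12:
                seg.add(v)
                q.append(v)
    return {v for v in seg if v != 'S'}
```

    On a toy array with a bright blob around the seed, the cut separates the blob from the dark background.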

  10. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm for this problem has O(n^2) running time. We show how to obtain O(n^{3/2}*(log n)^3) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n*(log n)^3) expected time....
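
    For contrast with the subquadratic algorithm, the quantity itself can be sketched with a naive computation restricted to vertex pairs (the true maximum detour ranges over all points of G, including edge interiors, so this is a simplification for illustration):

```python
import heapq
import math

def dijkstra(n, adj, src):
    """Shortest Euclidean path lengths in G from src to every vertex."""
    dist = [math.inf] * n
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def max_vertex_detour(points, edges):
    """Brute-force detour over vertex pairs: max over p != q of
    d_G(p, q) / |pq|. Roughly O(n^2 log n), far from the paper's bound."""
    n = len(points)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        w = math.dist(points[u], points[v])
        adj[u].append((v, w))
        adj[v].append((u, w))
    best = 0.0
    for s in range(n):
        dist = dijkstra(n, adj, s)
        for t in range(s + 1, n):
            best = max(best, dist[t] / math.dist(points[s], points[t]))
    return best
```

    For the unit square traversed along its boundary, opposite corners give graph distance 2 versus Euclidean distance √2, so the vertex detour is √2.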

  11. Dual time point 2-deoxy-2-[18F]fluoro-D-glucose PET/CT: nodal staging in locally advanced breast cancer.

    Science.gov (United States)

    García Vicente, A M; Soriano Castrejón, A; Cruz Mora, M Á; Ortega Ruiperez, C; Espinosa Aunión, R; León Martín, A; González Ageitos, A; Van Gómez López, O

    2014-01-01

    To assess the accuracy of dual time point 2-deoxy-2-[(18)F]fluoro-D-glucose ((18)F-FDG) PET/CT in nodal staging and in the detection of extra-axillary involvement. Dual time point (18)F-FDG PET/CT scans were performed in 75 patients. Visual and semiquantitative assessment of lymph nodes was performed. Semiquantitative measurement of SUV and ROC analysis were carried out to calculate the SUV(max) cut-off value with the best diagnostic performance. Axillary and extra-axillary lymph node chains were evaluated. Sensitivity and specificity of visual assessment were 87.3% and 75%, respectively. SUV(max) values with the best sensitivity were 0.90 and 0.95 for early and delayed PET, respectively. SUV(max) values with the best specificity were 1.95 and 2.75, respectively. Extra-axillary lymph node involvement was detected in 26.7%. FDG PET/CT detected extra-axillary lymph node involvement in one-fourth of the patients. Semiquantitative lymph node analysis did not show any advantage over the visual evaluation. Copyright © 2013 Elsevier España, S.L. and SEMNIM. All rights reserved.

  12. Point Pollution Sources Dimensioning

    Directory of Open Access Journals (Sweden)

    Georgeta CUCULEANU

    2011-06-01

    Full Text Available In this paper a method for determining the main physical characteristics of point pollution sources is presented. The main physical characteristics of these sources are the top inside source diameter and the physical height. The top inside source diameter is calculated from the gas flow rate. For calculating the physical height of the source, one takes into account the relation given by the proportionality factor, defined as the ratio between the plume rise and the physical height of the source. The plume rise depends on the gas exit velocity and the gas temperature. That relation is necessary for diminishing environmental pollution when the production capacity of the plant varies in comparison with the nominal one.
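
    The two sizing relations described above (diameter from the volumetric gas flow rate via Q = v·πd²/4, and physical height from the proportionality factor k = plume rise / height) can be sketched as follows; the numerical inputs in the test are made up for illustration:

```python
import math

def top_inside_diameter(gas_flow_rate, exit_velocity):
    """Top inside source diameter d from the volumetric gas flow rate,
    using Q = v * (pi * d**2 / 4)."""
    return math.sqrt(4.0 * gas_flow_rate / (math.pi * exit_velocity))

def physical_height(plume_rise, proportionality_factor):
    """Physical source height H from the factor k = plume_rise / H."""
    return plume_rise / proportionality_factor
```

    For an assumed flow rate of 15 m³/s and exit velocity of 12 m/s, the top inside diameter comes out near 1.26 m.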

  13. Fabrications, Time-Consuming Bureaucracy and Moral Dilemmas--Finnish University Employees' Experiences on the Governance of University Work

    Science.gov (United States)

    Jauhiainen, Arto; Jauhiainen, Annukka; Laiho, Anne; Lehto, Reeta

    2015-01-01

    This article explores how the university workers of two Finnish universities experienced the range of neoliberal policymaking and governance reforms implemented in the 2000s. These reforms include quality assurance, system of defined annual working hours, outcome-based salary system and work time allocation system. Our point of view regarding…

  14. Ordinary differential equation for local accumulation time.

    Science.gov (United States)

    Berezhkovskii, Alexander M

    2011-08-21

    Cell differentiation in a developing tissue is controlled by the concentration fields of signaling molecules called morphogens. Formation of these concentration fields can be described by the reaction-diffusion mechanism in which locally produced molecules diffuse through the patterned tissue and are degraded. The formation kinetics at a given point of the patterned tissue can be characterized by the local accumulation time, defined in terms of the local relaxation function. Here, we show that this time satisfies an ordinary differential equation. Using this equation one can straightforwardly determine the local accumulation time, i.e., without preliminary calculation of the relaxation function by solving the partial differential equation, as was done in previous studies. We derive this ordinary differential equation together with the accompanying boundary conditions and demonstrate that the earlier obtained results for the local accumulation time can be recovered by solving this equation. © 2011 American Institute of Physics
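
    The quantity described above is conventionally defined via the local relaxation function, with c_s(x) the steady-state concentration profile (notation assumed here from the standard morphogen-gradient literature; the paper's ordinary differential equation itself is not reproduced):

```latex
% local relaxation function: distance from the steady-state profile c_s(x)
R(x,t) \;=\; 1 - \frac{c(x,t)}{c_s(x)}, \qquad
% local accumulation time at point x
\tau(x) \;=\; \int_0^{\infty} R(x,t)\, dt .
```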

  15. MOVING WINDOW SEGMENTATION FRAMEWORK FOR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2012-07-01

    Full Text Available As lidar point clouds become larger, streamed processing becomes more attractive. This paper presents a framework for the streamed segmentation of point clouds with the intention of segmenting unstructured point clouds in real-time. The framework is composed of two main components. The first component segments points within a window shifting over the point cloud. The second component stitches the segments within the windows together. In this fashion a point cloud can be streamed through these two components in sequence, thus producing a segmentation. The algorithm has been tested on an airborne lidar point cloud and some results of the performance of the framework are presented.
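
    A toy 1-D version of the two components (window-wise segmentation by a distance threshold, then stitching across window boundaries) can be sketched as follows; the coordinates and thresholds are illustrative, not the paper's algorithm:

```python
def _segment(pts, gap):
    """Group consecutive points whose spacing is at most `gap`."""
    segs, cur = [], [pts[0]]
    for p in pts[1:]:
        if p - cur[-1] <= gap:
            cur.append(p)
        else:
            segs.append(cur)
            cur = [p]
    segs.append(cur)
    return segs

def _stitch(segments, seg, gap):
    """Merge a new segment into the previous one if they touch across windows."""
    if segments and seg[0] - segments[-1][-1] <= gap:
        segments[-1].extend(seg)
    else:
        segments.append(seg)

def streamed_segmentation(points, window=2.0, gap=0.6):
    """Segment a sorted 1-D point stream window by window: segment inside
    each shifting window, then stitch the window results together."""
    segments, buf, start = [], [], points[0]
    for p in points:
        if p - start >= window and buf:
            for seg in _segment(buf, gap):
                _stitch(segments, seg, gap)
            buf, start = [], p
        buf.append(p)
    if buf:                              # flush the final window
        for seg in _segment(buf, gap):
            _stitch(segments, seg, gap)
    return segments
```

    A segment spanning a window boundary is reassembled by the stitching step, so the result is independent of where the windows happen to fall.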

  16. The Dynamics of Smoking-Related Disturbed Methylation: A Two Time-Point Study of Methylation Change in Smokers, Non-Smokers and Former Smokers

    Science.gov (United States)

    BACKGROUND: The evidence for epigenome-wide associations between smoking and DNA methylation continues to grow through cross-sectional studies. However, few large-scale investigations have explored the associations using observations for individuals at multiple time-points. ...

  17. Process algebra with timing : real time and discrete time

    NARCIS (Netherlands)

    Baeten, J.C.M.; Middelburg, C.A.; Bergstra, J.A.; Ponse, A.J.; Smolka, S.A.

    2001-01-01

    We present real time and discrete time versions of ACP with absolute timing and relative timing. The starting-point is a new real time version with absolute timing, called ACPsat, featuring urgent actions and a delay operator. The discrete time versions are conservative extensions of the discrete

  18. A General Schema for Constructing One-Point Bases in the Lambda Calculus

    DEFF Research Database (Denmark)

    Goldberg, Mayer

    2001-01-01

    In this paper, we present a schema for constructing one-point bases for recursively enumerable sets of lambda terms. The novelty of the approach is that we make no assumptions about the terms for which the one-point basis is constructed: They need not be combinators and they may contain constants...... and free variables. The significance of the construction is twofold: In the context of the lambda calculus, it characterises one-point bases as ways of ``packaging'' sets of terms into a single term; And in the context of realistic programming languages, it implies that we can define a single procedure...

  19. Reduction of Averaging Time for Evaluation of Human Exposure to Radiofrequency Electromagnetic Fields from Cellular Base Stations

    Science.gov (United States)

    Kim, Byung Chan; Park, Seong-Ook

    In order to determine exposure compliance with the electromagnetic fields from a base station's antenna in the far-field region, we should calculate the spatially averaged field value in a defined space. This value is calculated based on the measured value obtained at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels are averaged over any 6 min (from 100 kHz to 10 GHz) for the general public. Therefore, the more points we use, the longer the measurement time becomes. For practical application, it is very advantageous to spend less time on measurement. In this paper, we analyzed the difference of average values between 6 min and shorter periods and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6 min averaging value, the proposed minimum averaging time is 1 min.
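
    The comparison between a full 6 min time average and a shorter window can be illustrated on synthetic data; the field level and noise magnitude below are invented for illustration and carry no relation to the study's measurements:

```python
import random

def time_average(samples):
    """Plain arithmetic mean of per-second field samples."""
    return sum(samples) / len(samples)

# Synthetic field-strength samples (V/m), one per second: a steady level
# plus zero-mean measurement noise. Purely illustrative, not measured data.
random.seed(42)
samples = [3.0 + random.gauss(0.0, 0.1) for _ in range(360)]

avg_6min = time_average(samples)        # full 6-minute average
avg_1min = time_average(samples[:60])   # first 1-minute average
difference = abs(avg_6min - avg_1min)
```

    When the field is stationary, the 1 min and 6 min averages differ only by noise on the order of the sampling uncertainty, which is the premise behind shortening the averaging time.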

  20. Preparing Attitude Scale to Define Students' Attitudes about Environment, Recycling, Plastic and Plastic Waste

    Science.gov (United States)

    Avan, Cagri; Aydinli, Bahattin; Bakar, Fatma; Alboga, Yunus

    2011-01-01

    The aim of this study is to introduce an attitude scale in order to define students' attitudes about environment, recycling, plastics, and plastic waste. In this study, 80 attitude sentences according to a 5-point Likert-type scale were prepared and applied to 492 students of 6th grade in the Kastamonu city center of Turkey. The scale consists of…

  1. Accelerometer Cut Points for Physical Activity Assessment of Older Adults with Parkinson's Disease.

    Directory of Open Access Journals (Sweden)

    Håkan Nero

    Full Text Available To define accelerometer cut points for different walking speeds in older adults with mild to moderate Parkinson's disease. A volunteer sample of 30 older adults (mean age 73; SD 5.4 years) with mild to moderate Parkinson's disease walked at self-defined brisk, normal, and slow speeds for three minutes in a circular indoor hallway, each wearing an accelerometer around the waist. Walking speed was calculated and used as a reference measure. Through ROC analysis, accelerometer cut points for different levels of walking speed in counts per 15 seconds were generated, and a leave-one-out cross-validation was performed followed by a quadratic weighted Cohen's Kappa, to test the level of agreement between true and cut point-predicted walking speeds. Optimal cut points for walking speeds ≤ 1.0 m/s were ≤ 328 and ≤ 470 counts/15 sec; for speeds > 1.3 m/s, they were ≥ 730 and ≥ 851 counts/15 sec for the vertical axis and vector magnitude, respectively. Sensitivity and specificity were 61%-100% for the developed cut points. The quadratic weighted Kappa showed substantial agreement: κ = 0.79 (95% CI 0.70-0.89) and κ = 0.69 (95% CI 0.56-0.82) for the vertical axis and the vector magnitude, respectively. This study provides accelerometer cut points based on walking speed for physical-activity measurement in older adults with Parkinson's disease, for evaluation of interventions and for investigating links between physical activity and health.
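
    A generic ROC-style threshold search maximizing Youden's J illustrates how such cut points can be derived from labeled data. The counts and speed labels below are fabricated toy data; the study's actual cut points came from its own ROC analysis on real measurements.

```python
def youden_cutoff(counts, is_slow):
    """Pick the counts/15 s threshold t that maximises Youden's J
    (sensitivity + specificity - 1) for predicting slow walking
    (label 1) whenever counts <= t."""
    pos = sum(is_slow)
    neg = len(is_slow) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(counts)):        # candidate thresholds: observed values
        tp = sum(1 for c, y in zip(counts, is_slow) if c <= t and y == 1)
        fp = sum(1 for c, y in zip(counts, is_slow) if c <= t and y == 0)
        j = tp / pos - fp / neg          # sensitivity - (1 - specificity)
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

    With a perfectly separable toy sample, the search lands on the largest count observed during slow walking and J reaches 1.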

  2. The Geological Grading Scale: Every million Points Counts!

    Science.gov (United States)

    Stegman, D. R.; Cooper, C. M.

    2006-12-01

    The concept of geological time, ranging from thousands to billions of years, is naturally quite difficult for students to grasp initially, as it is much longer than the timescales over which they experience everyday life. Moreover, universities operate on a few key timescales (hourly lectures, weekly assignments, mid-term examinations) to which students' maximum attention is focused, largely driven by graded assessment. The geological grading scale exploits the overwhelming interest students have in grades as an opportunity to instill familiarity with geological time. With the geological grading scale, the number of possible points/marks/grades available in the course is scaled to 4.5 billion points --- collapsing the entirety of Earth history into one semester. Alternatively, geological time can be compressed into each assignment, with scores for weekly homeworks not worth 100 points each, but 4.5 billion! Homeworks left incomplete with questions unanswered lose 100's of millions of points - equivalent to missing the Paleozoic era. The expected quality of presentation for problem sets can be established with great impact in the first week by docking assignments an insignificant amount of points for handing in messy work; though likely more points than they've lost in their entire schooling history combined. Use this grading scale and your students will gradually begin to appreciate exactly how much time represents a geological blink of the eye.

  3. Dynamic magnetic x-points

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Tajima, T.; Dawson, J.M.

    1981-03-01

    Two-and-one-half dimensional magnetostatic and electromagnetic particle simulations of time-varying magnetic x-points and the associated plasma response are reported. The stability and topology depend on the crossing angle of the field lines at the x-point, irrespective of the plasma β. The electrostatic field and finite Larmor radius effects play an important role in current penetration and shaping of the plasma flow. The snapping of the field lines, and dragging of the plasma into, and confinement of the plasma at, an o-point (magnetic island) is observed. Magnetic island coalescence with explosive growth of the coalescence mode occurs and is accompanied by a large increase of kinetic energy and temperature as well as the formation of hot tails on the distribution functions

  4. Can play be defined?

    DEFF Research Database (Denmark)

    Eichberg, Henning

    2015-01-01

    Can play be defined? There is reason to raise critical questions about the established academic demand that a phenomenon – also in humanist studies – should first of all be defined, i.e. de-lineated and by neat lines limited to a “little box” that can be handled. The following chapter develops....... Human beings can very well understand play – or whatever phenomenon in human life – without defining it.

  5. On stability of fixed points and chaos in fractional systems

    Science.gov (United States)

    Edelman, Mark

    2018-02-01

    In this paper, we propose a method to calculate asymptotically period two sinks and define the range of stability of fixed points for a variety of discrete fractional systems of the order 0 logistic maps. Based on our analysis, we make a conjecture that chaos is impossible in the corresponding continuous fractional systems.

  6. Real-time services in IP network architectures

    Science.gov (United States)

    Gilardi, Antonella

    1996-12-01

    The worldwide internet system seems to be the key to success for the provision of real-time multimedia services to both residential and business users, and some say that in this way broadband networks will have a reason to exist. This new class of applications that use multiple media (voice, video and data) imposes constraints on the global network, which nowadays consists of subnets with various data links. The attention will be focused on the interconnection of non-ATM IP networks and ATM networks. The IETF and the ATM Forum are currently developing specifications suited to adapt the connectionless IP protocol to the connection-oriented ATM protocol. First of all, the link between the ATM and IP service models has to be set in order to match the QoS and traffic requirements defined in each environment. A further significant topic is represented by the mapping of the IP resource reservation model onto ATM signalling, and in the end it is necessary to define how routing works when QoS parameters are associated. This paper, considering only unicast applications, will examine the above issues, taking as a starting point the situation where a host launches a call set-up request with the relevant QoS and traffic descriptor, and at some point a router at the edge of the ATM network has to decide how to forward the request in order to establish an end-to-end link with the right capabilities. The aim is to compare the proposals emerging from different standard bodies to point out convergence or incompatibility.

  7. Evaluation of articular cartilage in patients with femoroacetabular impingement (FAI) using T2* mapping at different time points at 3.0 Tesla MRI: a feasibility study.

    Science.gov (United States)

    Apprich, S; Mamisch, T C; Welsch, G H; Bonel, H; Siebenrock, K A; Kim, Y-J; Trattnig, S; Dudda, M

    2012-08-01

    To assess the feasibility of using T2* mapping to detect early cartilage degeneration prior to surgery in patients with symptomatic femoroacetabular impingement (FAI), we compared hip joint cartilage in patients with FAI and in healthy volunteers using T2* mapping at 3.0 Tesla over time. Twenty-two patients (13 female, 9 male; mean age 28.1 years) with clinical signs of FAI and Tönnis grade ≤ 1 on anterior-posterior x-ray, and 35 healthy age-matched volunteers, were examined on a 3 T MRI scanner using a flexible body coil. T2* maps were calculated from sagittal- and coronal-oriented gradient multi-echo sequences with six echoes (TR 125, TE 4.41/8.49/12.57/16.65/20.73/24.81, scan time 4.02 min), each measured at the beginning and at the end of the examination (45 min between measurements). Region-of-interest analysis was performed manually on four consecutive slices for superior and anterior cartilage. Mean T2* values were compared between patients and volunteers, and over time, using analysis of variance and Student's t-test. Whereas quantitative T2* values at the first measurement did not differ significantly between patients and volunteers for either sagittal (p = 0.644) or coronal images (p = 0.987), a highly significant difference (p ≤ 0.004) was found between the two measurements, i.e. with time after unloading of the joint. Over time, mean T2* values decreased in patients but increased in volunteers. The study demonstrated the feasibility of T2* mapping for assessment of early cartilage degeneration in the hip joint of FAI patients at 3 Tesla, with a view to predicting the possible success of joint-preserving surgery. However, we suggest that the time point at which T2* is measured as an MR biomarker for cartilage, and the change in T2* over time, are of crucial importance when designing an MR protocol for patients with FAI.
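Per-voxel T2* values from such a multi-echo sequence are commonly obtained by a mono-exponential fit S(TE) = S0·exp(−TE/T2*); a minimal log-linear least-squares sketch, assuming noise-free magnitude signals (function names are illustrative, not from the study):

```python
import math

def fit_t2star(echo_times_ms, signals):
    """Log-linear least-squares fit of S(TE) = S0 * exp(-TE / T2*):
    taking logs gives ln S = ln S0 - TE/T2*, a straight line whose
    slope is -1/T2*. Returns (s0, t2star_ms)."""
    n = len(echo_times_ms)
    ys = [math.log(s) for s in signals]
    mx = sum(echo_times_ms) / n
    my = sum(ys) / n
    sxx = sum((t - mx) ** 2 for t in echo_times_ms)
    sxy = sum((t - mx) * (y - my) for t, y in zip(echo_times_ms, ys))
    slope = sxy / sxx
    return math.exp(my - slope * mx), -1.0 / slope

# the six echo times quoted in the abstract (ms)
tes = [4.41, 8.49, 12.57, 16.65, 20.73, 24.81]
```

In practice, noisy or Rician-biased magnitude data would call for a weighted or nonlinear fit; the log-linear version is the simplest usable estimator.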

  8. Real-time UAV trajectory generation using feature points matching between video image sequences

    Science.gov (United States)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectories using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into inliers and outliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
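The inlier/outlier partition that RANSAC performs can be illustrated with a minimal, self-contained sketch. The example below fits a 2D line rather than the epipolar geometry used in the paper, and all names and thresholds are illustrative:

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=1):
    """Classic RANSAC for a 2D line y = a*x + b: repeatedly fit a model
    to a minimal random sample, then keep the model with the most
    inliers (points within `tol` vertical distance of the line).
    Returns (a, b, inliers)."""
    rng = random.Random(seed)
    best = (0.0, 0.0, [])
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:  # degenerate sample, cannot define a slope
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best[2]):
            best = (a, b, inliers)
    return best
```

The preemptive variant used in the paper scores many candidate models in parallel on subsets of the data and discards the worst ones early, but the core inlier/outlier split is the same.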

  9. Performance of Coarse Timing Synchronization in Orthogonal Frequency Division Multiplexing System

    Science.gov (United States)

    Bruno; Xuping, Zhai

    2018-01-01

    A novel symbol timing synchronization method based on training symbols for OFDM systems is designed and presented in this paper. The performance of the proposed method is tested, via simulation, in terms of the timing metric and the mean square error (MSE) in indoor no-noise and Rayleigh fading channels. Over-the-air transmission of the proposed timing metric is evaluated by implementing a software defined radio based on the GNU Radio and Universal Software Radio Peripheral communication platforms. The simulation results show that the proposed timing metric peaks at the correct time point and has a smaller MSE than other methods, particularly at high signal-to-noise ratio. The proposed method therefore works well, even for over-the-air transmission in an indoor laboratory environment.
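As a rough illustration of such a training-symbol timing metric, the sketch below implements a Schmidl and Cox style autocorrelation, which may differ from the metric actually proposed in the paper. The training symbol is assumed to consist of two identical halves, and the normalisation by half the total window energy is an assumed robust variant that bounds the metric by 1:

```python
import cmath
import random

def timing_metric(r, half_len):
    """Schmidl & Cox style coarse timing metric for a training symbol
    with two identical halves of length L = half_len:
        P(d) = sum_m conj(r[d+m]) * r[d+m+L]
        M(d) = |P(d)|^2 / R(d)^2,
    where R(d) is half the total window energy, so that M(d) <= 1 with
    equality exactly where the second half repeats the first."""
    L = half_len
    metric = []
    for d in range(len(r) - 2 * L):
        p = sum(r[d + m].conjugate() * r[d + m + L] for m in range(L))
        energy = 0.5 * sum(abs(r[d + m]) ** 2 for m in range(2 * L))
        metric.append(abs(p) ** 2 / energy ** 2 if energy else 0.0)
    return metric

# synthetic burst (illustrative): noise, repeated-half training symbol, noise
rng = random.Random(0)

def noise(n):
    return [complex(rng.gauss(0, 0.1), rng.gauss(0, 0.1)) for _ in range(n)]

half = [cmath.exp(2j * cmath.pi * rng.random()) for _ in range(32)]
r = noise(40) + half + half + noise(40)
m = timing_metric(r, 32)
start = max(range(len(m)), key=m.__getitem__)  # metric peak = symbol start
```

The index of the metric peak gives the coarse timing point; fine timing and frequency-offset estimation would follow in a real receiver.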

  10. Defining Documentary Film

    DEFF Research Database (Denmark)

    Juel, Henrik

    2006-01-01

    A discussion of various attempts at defining documentary film with regard to form, content, truth, style, genre and reception, and a proposal of a positive list of essential, but non-exclusive, characteristics of documentary film.

  11. DALI: Defining Antibiotic Levels in Intensive care unit patients: a multi-centre point of prevalence study to determine whether contemporary antibiotic dosing for critically ill patients is therapeutic.

    Science.gov (United States)

    Roberts, Jason A; De Waele, Jan J; Dimopoulos, George; Koulenti, Despoina; Martin, Claude; Montravers, Philippe; Rello, Jordi; Rhodes, Andrew; Starr, Therese; Wallis, Steven C; Lipman, Jeffrey

    2012-07-06

    The clinical effects of varying pharmacokinetic exposures of antibiotics (antibacterials and antifungals) on outcome in infected critically ill patients are poorly described. A large-scale multi-centre study (DALI Study) is currently underway describing the clinical outcomes of patients achieving pre-defined antibiotic exposures. This report describes the protocol. DALI will recruit over 500 patients administered a wide range of either beta-lactam or glycopeptide antibiotics or triazole or echinocandin antifungals in a pharmacokinetic point-prevalence study. It is anticipated that over 60 European intensive care units (ICUs) will participate. The primary aim will be to determine whether contemporary antibiotic dosing for critically ill patients achieves plasma concentrations associated with maximal activity. Secondary aims will compare antibiotic pharmacokinetic exposures with patient outcome and will describe the population pharmacokinetics of the antibiotics included. Various subgroup analyses will be conducted to determine patient groups that may be at risk of very low or very high concentrations of antibiotics. The DALI study should inform clinicians of the potential clinical advantages of achieving certain antibiotic pharmacokinetic exposures in infected critically ill patients.

  12. Fingerprinting Software Defined Networks and Controllers

    Science.gov (United States)

    2015-03-01

    rps (requests per second); RTT (Round-Trip Time); SDN (Software Defined Networking); SOM (Self-Organizing Map); STP (Spanning Tree Protocol); TRW-CB (Threshold Random ...). Frames carrying Spanning Tree Protocol (STP) updates are "punted" from the forwarding lookup process and processed by the route processor [9]. The act of ... environment to accomplish the needs of B4. In addition to Google, the SDN market is expected to grow beyond $35 billion by April 2018 [31].

  13. 50 CFR 660.392 - Latitude/longitude coordinates defining the 50 fm (91 m) through 75 fm (137 m) depth contours.

    Science.gov (United States)

    2010-10-01

    ... contour between the U.S. border with Canada and the U.S. border with Mexico is defined by straight lines...′ W. long. (b) The 50-fm (91-m) depth contour between the U.S. border with Canada and the Swiftsure Bank is defined by straight lines connecting all of the following points in the order stated: (1) 48°30...

  14. Algorithm for determining two-periodic steady-states in AC machines directly in time domain

    Directory of Open Access Journals (Sweden)

    Sobczyk Tadeusz J.

    2016-09-01

    Full Text Available This paper describes an algorithm for finding steady states in AC machines in cases where they are two-periodic in nature. The algorithm identifies the steady-state solution directly in the time domain, despite the fact that two-periodic waveforms do not repeat over any finite time interval. The basis of the algorithm is a discrete differential operator that determines the values of the derivative of a two-periodic function on a selected set of points from the values of that function on the same set of points. This allows algebraic equations to be developed that define the steady-state solution on the chosen point set for the nonlinear differential equations describing AC machines, in which the electrical and mechanical equations must be solved together. That set of values then determines the steady-state solution at any time instant up to infinity. The algorithm described in this paper is competitive with the approach known from the literature, based on the harmonic balance method, which operates in the frequency domain.
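The wrap-around idea behind such a discrete differential operator, derivatives at a set of grid points obtained purely from function values at the same points, can be sketched with a simple second-order periodic finite-difference analogue. The paper's operator is exact on the chosen point set; this sketch is only illustrative:

```python
import math

def periodic_derivative(samples, period):
    """Second-order central differences with wrap-around indexing: the
    derivative at every grid point is computed from function values at
    the same set of points, mirroring the discrete differential
    operator described above (but only to O(h^2) accuracy)."""
    n = len(samples)
    h = period / n
    return [(samples[(i + 1) % n] - samples[(i - 1) % n]) / (2 * h)
            for i in range(n)]

# demo: derivative of sin on a 64-point periodic grid approximates cos
n = 64
xs = [2 * math.pi * i / n for i in range(n)]
d = periodic_derivative([math.sin(x) for x in xs], 2 * math.pi)
```

Substituting such an operator for the time derivative turns the periodic steady-state ODE problem into a closed algebraic system on the grid, which is the essence of the method.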

  15. A time domain inverse dynamic method for the end point tracking control of a flexible manipulator

    Science.gov (United States)

    Kwon, Dong-Soo; Book, Wayne J.

    1991-01-01

    The inverse dynamic equation of a flexible manipulator was solved in the time domain. By dividing the inverse system equation into its causal and anticausal parts, we calculated the torque and the trajectories of all state variables for a given end-point trajectory. The interpretation of this method in the frequency domain is explained in detail using the two-sided Laplace transform and the convolution integral. Open-loop control using the inverse dynamic method shows excellent results in simulation. For real applications, a practical control strategy is proposed by adding a feedback tracking control loop to the inverse dynamic feedforward control, and its good experimental performance is presented.

  16. [Hypothesis on the equilibrium point and variability of amplitude, speed and time of single-joint movement].

    Science.gov (United States)

    Latash, M; Gottleib, G

    1990-01-01

    Problems of single-joint movement variability are analysed in the framework of the equilibrium-point hypothesis (the lambda model). Control of the movements is described with three parameters related to movement amplitude, speed, and time. Three strategies emerge from this description. Only one of them is likely to lead to a Fitts'-type speed-accuracy trade-off. Experiments were performed to test one of the predictions of the model. Subjects performed identical sets of single-joint fast movements with open or closed eyes and somewhat different instructions. Movements performed with closed eyes were characterized by higher peak speeds and unchanged variability, in seeming violation of Fitts' law and in good correspondence with the model.

  17. Defining Quantum Control Flow

    OpenAIRE

    Ying, Mingsheng; Yu, Nengkun; Feng, Yuan

    2012-01-01

    A remarkable difference between quantum and classical programs is that the control flow of the former can be either classical or quantum. One of the key issues in the theory of quantum programming languages is defining and understanding quantum control flow. A functional language with quantum control flow was defined by Altenkirch and Grattage [Proc. LICS'05, pp. 249-258]. This paper extends their work, and we introduce a general quantum control structure by defining three new quantum ...

  18. Metal-oxide-junction, triple point cathodes in a relativistic magnetron

    International Nuclear Information System (INIS)

    Jordan, N. M.; Gilgenbach, R. M.; Hoff, B. W.; Lau, Y. Y.

    2008-01-01

    Triple point, defined as the junction of metal, dielectric, and vacuum, is the location where electron emission is favored in the presence of a sufficiently strong electric field. To exploit triple point emission, metal-oxide-junction (MOJ) cathodes consisting of dielectric "islands" over stainless steel substrates have been fabricated. The two dielectrics used are hafnium oxide (HfOx) for its high dielectric constant and magnesium oxide (MgO) for its high secondary electron emission coefficient. The coatings are deposited by ablation-plasma-ion lithography using a KrF laser (0-600 mJ at 248 nm) with fluence ranging from 3 to 40 J/cm². Composition and morphology of the deposited films are analyzed by scanning electron microscopy coupled with x-ray energy dispersive spectroscopy, as well as x-ray diffraction. Cathodes are tested on the Michigan Electron Long-Beam Accelerator with a relativistic magnetron, at parameters V = -300 kV, I = 1-15 kA, and pulse lengths of 0.3-0.5 μs. Six variations of the MOJ cathode are tested and compared against five baseline cases. It is found that particulate formed during the ablation process improves the electron emission properties of the cathodes by forming additional triple points. Due to extensive electron back bombardment during magnetron operation, secondary electron emission may also play a significant role. Cathodes exhibit increases in current densities of up to 80 A/cm², and up to 15% improvement in current start-up time, as compared to polished stainless steel cathodes.

  19. Hydraulic failure defines the recovery and point of death in water-stressed conifers.

    Science.gov (United States)

    Brodribb, Tim J; Cochard, Hervé

    2009-01-01

    This study combines existing hydraulic principles with recently developed methods for probing leaf hydraulic function to determine whether xylem physiology can explain the dynamic response of gas exchange both during drought and in the recovery phase after rewatering. Four conifer species from wet and dry forests were exposed to a range of water stresses by withholding water and then rewatering to observe the recovery process. During both phases, midday transpiration and leaf water potential (Ψleaf) were monitored. Stomatal responses to Ψleaf were established for each species, and these relationships were used to evaluate whether the recovery of gas exchange after drought was limited by post-embolism hydraulic repair in leaves. Furthermore, the timing of gas-exchange recovery was used to determine the maximum survivable water stress for each species, and this index was compared with data for both leaf and stem vulnerability to water-stress-induced dysfunction measured for each species. Recovery of gas exchange after water stress took between 1 and >100 d, and during this period all species showed strong 1:1 conformity to a combined hydraulic-stomatal limitation model (r² = 0.70 across all plants). Gas-exchange recovery time showed two distinct phases: a rapid overnight recovery in plants stressed to <50% loss of Kleaf, and a much slower recovery in plants stressed beyond this point. Maximum recoverable water stress (Ψmin) corresponded to a 95% loss of Kleaf. Thus, we conclude that xylem hydraulics represents a direct limit to the drought tolerance of these conifer species.

  20. In quest of a systematic framework for unifying and defining nanoscience

    International Nuclear Information System (INIS)

    Tomalia, Donald A.

    2009-01-01

    This article proposes a systematic framework for unifying and defining nanoscience based on historic first principles and step logic that led to a 'central paradigm' (i.e., unifying framework) for traditional elemental/small-molecule chemistry. As such, a Nanomaterials classification roadmap is proposed, which divides all nanomatter into Category I: discrete, well-defined and Category II: statistical, undefined nanoparticles. We consider only Category I, well-defined nanoparticles which are >90% monodisperse as a function of Critical Nanoscale Design Parameters (CNDPs) defined according to: (a) size, (b) shape, (c) surface chemistry, (d) flexibility, and (e) elemental composition. Classified as either hard (H) (i.e., inorganic-based) or soft (S) (i.e., organic-based) categories, these nanoparticles were found to manifest pervasive atom mimicry features that included: (1) a dominance of zero-dimensional (0D) core-shell nanoarchitectures, (2) the ability to self-assemble or chemically bond as discrete, quantized nanounits, and (3) exhibited well-defined nanoscale valencies and stoichiometries reminiscent of atom-based elements. These discrete nanoparticle categories are referred to as hard or soft particle nanoelements. Many examples describing chemical bonding/assembly of these nanoelements have been reported in the literature. We refer to these hard:hard (H-n:H-n), soft:soft (S-n:S-n), or hard:soft (H-n:S-n) nanoelement combinations as nanocompounds. Due to their quantized features, many nanoelement and nanocompound categories are reported to exhibit well-defined nanoperiodic property patterns. These periodic property patterns are dependent on their quantized nanofeatures (CNDPs) and dramatically influence intrinsic physicochemical properties (i.e., melting points, reactivity/self-assembly, sterics, and nanoencapsulation), as well as important functional/performance properties (i.e., magnetic, photonic, electronic, and toxicologic properties). We propose this

  1. Profiling Occupant Behaviour in Danish Dwellings using Time Use Survey Data - Part II: Time-related Factors and Occupancy

    DEFF Research Database (Denmark)

    Barthelmes, V.M.; Li, R.; Andersen, R.K.

    2018-01-01

    Occupant behaviour has been shown to be one of the key driving factors of uncertainty in the prediction of energy consumption in buildings. Building occupants affect building energy use directly and indirectly by interacting with building energy systems, for example by adjusting temperature set-points, switching lights on/off, using electrical devices and opening/closing windows. Furthermore, building inhabitants' daily activity profiles clearly shape the timing of energy demand in households. Modelling energy-related human activities throughout the day is therefore crucial to defining more realistic occupant profiles for the prediction of energy use, and to reducing the gap between predicted and real building energy consumption. In this study, we exploit diary-based Danish Time Use Surveys to understand and model occupant behaviour in the residential sector in Denmark. This paper is a continuation ...

  2. Constructing Episodes of Inpatient Care: How to Define Hospital Transfer in Hospital Administrative Health Data?

    Science.gov (United States)

    Peng, Mingkai; Li, Bing; Southern, Danielle A; Eastwood, Cathy A; Quan, Hude

    2017-01-01

    Hospital administrative health data create separate records for each hospital stay of a patient. Treating a hospital transfer as a readmission could lead to biased results in health services research. This is a cross-sectional study. We used the hospital discharge abstract database in 2013 from Alberta, Canada. Transfer cases were defined by the transfer institution code and were used as the reference standard. Four time gaps between 2 hospitalizations (6, 9, 12, and 24 h) and 2 day gaps between hospitalizations [same day (up to 24 h), ≤1 d (up to 48 h)] were used to identify transfer cases. We compared the sensitivity and positive predictive value (PPV) of the 6 definitions across different categories of sex, age, and location of residence. Readmission rates within 30 days were compared after episodes of care were defined at the different time gaps. Among the 6 definitions, sensitivity ranged from 93.3% to 98.7% and PPV ranged from 86.4% to 96%. The time gap of 9 hours had the optimal balance of sensitivity and PPV. The time gaps of same day (up to 24 h) and 9 hours had 30-day readmission rates comparable to those obtained with the transfer indicator after defining episodes of care. We recommend the use of a time gap of 9 hours between 2 hospitalizations to define hospital transfer in inpatient databases. When admission or discharge time is not available in the database, a time gap of same day (up to 24 h) can be used to define hospital transfer.
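The recommended 9-hour rule can be sketched as a simple episode-linkage pass over admission/discharge timestamps; function and field names are illustrative, not from the study's code:

```python
from datetime import datetime, timedelta

def link_episodes(stays, gap_hours=9):
    """Group hospital stays into episodes of care: a stay whose admission
    falls within `gap_hours` of the previous discharge is treated as a
    transfer and joined to the same episode (9 h was optimal in this
    study). `stays` is a list of (admit, discharge) datetime tuples."""
    stays = sorted(stays, key=lambda s: s[0])
    episodes = []
    for admit, discharge in stays:
        if episodes and admit - episodes[-1][-1][1] <= timedelta(hours=gap_hours):
            episodes[-1].append((admit, discharge))  # transfer: same episode
        else:
            episodes.append([(admit, discharge)])    # new episode
    return episodes
```

Readmission rates would then be computed between episodes rather than between raw records, avoiding the bias of counting transfers as readmissions.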

  3. [A landscape ecological approach for urban non-point source pollution control].

    Science.gov (United States)

    Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing

    2005-05-01

    Urban non-point source pollution is a new problem that has appeared with the accelerating development of urbanization. The particularities of urban land use and the increase in impervious surface area make urban non-point source pollution differ from agricultural non-point source pollution and more difficult to control. Best Management Practices (BMPs) are effective practices commonly applied to control urban non-point source pollution, mainly adopting local remediation practices to control pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it is rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources and pollutant removal processes, in order to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted areas; and second, adjusting existing landscape structures or introducing new landscape factors to form a new landscape pattern, combining landscape planning and management by incorporating BMPs into planning, so as to improve urban landscape heterogeneity and control urban non-point source pollution.

  4. Black holes and trapped points

    International Nuclear Information System (INIS)

    Krolak, A.

    1981-01-01

    Black holes are defined and their properties investigated without use of any global causality restriction. Also the boundary at infinity of space-time is not needed. When the causal conditions are brought in, the equivalence with the usual approach is established. (author)

  5. Managing distance and covariate information with point-based clustering

    Directory of Open Access Journals (Sweden)

    Peter A. Whigham

    2016-09-01

    Full Text Available Abstract Background Geographic perspectives of disease and the human condition often involve point-based observations and questions of clustering or dispersion within a spatial context. These problems involve a finite set of point observations and are constrained by a larger, but finite, set of locations where the observations could occur. Developing a rigorous method for pattern analysis in this context requires handling spatial covariates, a method for constrained finite spatial clustering, and addressing bias in geographic distance measures. An approach based on Ripley's K, applied to the problem of clustering of deliberate self-harm (DSH), is presented. Methods Point-based Monte-Carlo simulation of Ripley's K, accounting for socio-economic deprivation and sources of distance measurement bias, was developed to estimate clustering of DSH at a range of spatial scales. A rotated Minkowski L1 distance metric allowed variation in physical distance and clustering to be assessed. Self-harm data were derived from an audit of 2 years' emergency hospital presentations (n = 136) in a New Zealand town (population ~50,000). The study area was defined by residential (housing) land parcels representing a finite set of possible point addresses. Results Area-based deprivation was spatially correlated. Accounting for deprivation and distance bias showed evidence for clustering of DSH at spatial scales up to 500 m with a one-sided 95 % CI, suggesting that social contagion may be present for this urban cohort. Conclusions Many problems involve finite locations in geographic space that require estimates of distance-based clustering at many scales. A Monte-Carlo approach to Ripley's K, incorporating covariates and models for distance bias, is crucial when assessing health-related clustering. The case study showed that social network structure defined at the neighbourhood level may account for aspects of neighbourhood clustering of DSH. Accounting for
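The basic Ripley's K estimate underlying such a Monte-Carlo test can be sketched as follows; edge correction, covariates and the constrained address set used in the paper are omitted from this minimal version:

```python
import math

def ripley_k(points, r, area):
    """Naive Ripley's K estimate (no edge correction):
        K(r) = area / n^2 * (number of ordered pairs within distance r).
    Under complete spatial randomness, K(r) is approximately pi*r^2;
    values above that indicate clustering at scale r."""
    n = len(points)
    pairs = sum(
        1
        for i, (xi, yi) in enumerate(points)
        for j, (xj, yj) in enumerate(points)
        if i != j and math.hypot(xi - xj, yi - yj) <= r
    )
    return area * pairs / (n * n)
```

A Monte-Carlo test of the kind described above would recompute K(r) for many simulated patterns drawn from the candidate address set and compare the observed curve against the resulting envelope.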

  6. Performance evaluation of FSO system using wavelength and time diversity over malaga turbulence channel with pointing errors

    Science.gov (United States)

    Balaji, K. A.; Prabu, K.

    2018-03-01

    There is an immense demand for high-bandwidth, high-data-rate systems, which is met by wireless optical communication, or free space optics (FSO). FSO has therefore gained a pivotal role in research, with the added advantages of cost-effectiveness and licence-free, huge bandwidth. Unfortunately, the optical signal in free space suffers from irradiance and phase fluctuations due to atmospheric turbulence and pointing errors, which deteriorate the signal and degrade the performance of the communication system over longer distances, which is undesirable. In this paper, we consider a polarization shift keying (POLSK) system combined with wavelength and time diversity over the Malaga (M) distribution to mitigate turbulence-induced fading. We derive closed-form mathematical expressions for estimating the system's outage probability and average bit error rate (BER). From the results we infer that wavelength and time diversity schemes enhance the performance of these systems.

  7. Effects of Point Count Duration, Time-of-Day, and Aural Stimuli on Detectability of Migratory and Resident Bird Species in Quintana Roo, Mexico

    Science.gov (United States)

    James F. Lynch

    1995-01-01

    Effects of count duration, time-of-day, and aural stimuli were studied in a series of unlimited-radius point counts conducted during winter in Quintana Roo, Mexico. The rate at which new species were detected was approximately three times higher during the first 5 minutes of each 15-minute count than in the final 5 minutes. The number of individuals and species...

  8. Diagnostic value of ST-segment deviations during cardiac exercise stress testing: Systematic comparison of different ECG leads and time-points.

    Science.gov (United States)

    Puelacher, Christian; Wagener, Max; Abächerli, Roger; Honegger, Ursina; Lhasam, Nundsin; Schaerli, Nicolas; Prêtre, Gil; Strebel, Ivo; Twerenbold, Raphael; Boeddinghaus, Jasper; Nestelberger, Thomas; Rubini Giménez, Maria; Hillinger, Petra; Wildi, Karin; Sabti, Zaid; Badertscher, Patrick; Cupa, Janosch; Kozhuharov, Nikola; du Fay de Lavallaz, Jeanne; Freese, Michael; Roux, Isabelle; Lohrmann, Jens; Leber, Remo; Osswald, Stefan; Wild, Damian; Zellweger, Michael J; Mueller, Christian; Reichlin, Tobias

    2017-07-01

    Exercise ECG stress testing is the most widely available method for evaluation of patients with suspected myocardial ischemia. Its major limitation is the relatively poor accuracy of ST-segment changes for ischemia detection. Little is known about the optimal method to assess ST-deviations. A total of 1558 consecutive patients undergoing bicycle exercise stress myocardial perfusion imaging (MPI) were enrolled. Presence of inducible myocardial ischemia was adjudicated using MPI results. The diagnostic value of ST-deviations for detection of exercise-induced myocardial ischemia was systematically analyzed 1) for each individual lead, 2) at three different intervals after the J-point (J+40 ms, J+60 ms, J+80 ms), and 3) at different time points during the test (baseline, maximal workload, 2 min into recovery). Exercise-induced ischemia was detected in 481 (31%) patients. The diagnostic accuracy of ST-deviations was highest at +80 ms after the J-point, and at 2 min into recovery. At this point, ST-amplitude showed an AUC of 0.63 (95% CI 0.59-0.66) for the best-performing lead I. The combination of ST-amplitude and ST-slope in lead I did not increase the AUC. Lead I reached a sensitivity of 37% and a specificity of 83%, with similar sensitivity to manual ECG analysis (34%, p=0.31) but lower specificity (90%). In conclusion, the diagnostic value of ST-deviations is highest when evaluated at +80 ms after the J-point, and at 2 min into recovery. Copyright © 2017 Elsevier B.V. All rights reserved.
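For a sampled ECG, measuring an ST-deviation at a fixed interval after the J-point reduces to reading the amplitude a fixed number of samples later; a minimal sketch with illustrative names (J-point detection and isoelectric-baseline estimation are assumed to be done elsewhere):

```python
def st_amplitude(ecg_mv, j_index, fs_hz, offset_ms=80, baseline_mv=0.0):
    """Amplitude of the ST segment measured `offset_ms` after the J-point
    (default 80 ms, the best-performing interval in the study above),
    relative to an assumed isoelectric baseline. `ecg_mv` is one lead's
    samples in mV, `j_index` the J-point sample, `fs_hz` the sample rate."""
    sample = j_index + round(fs_hz * offset_ms / 1000)
    return ecg_mv[sample] - baseline_mv
```

At 500 Hz, for example, J+80 ms is simply 40 samples after the J-point; the study's three intervals correspond to offsets of 40, 60 and 80 ms.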

  9. Recursive evaluation of space-time lattice Green's functions

    International Nuclear Information System (INIS)

    De Hon, Bastiaan P; Arnold, John M

    2012-01-01

    Up to a multiplicative constant, the lattice Green's function (LGF) as defined in condensed matter physics and lattice statistical mechanics is equivalent to the Z-domain counterpart of the finite-difference time-domain Green's function (GF) on a lattice. Expansion of a well-known integral representation for the LGF on a ν-dimensional hyper-cubic lattice in powers of Z⁻¹ and application of the Chu-Vandermonde identity results in ν − 1 nested finite-sum representations for discrete space-time GFs. Due to severe numerical cancellations, these nested finite sums are of little practical use. For ν = 2, the finite sum may be evaluated in closed form in terms of a generalized hypergeometric function. For special lattice points, that representation simplifies considerably, while on the other hand the finite-difference stencil may be used to derive single-lattice-point second-order recurrence schemes for generating 2D discrete space-time GF time sequences on the fly. For arbitrary symbolic lattice points, Zeilberger's algorithm produces a third-order recurrence operator with polynomial coefficients of the sixth degree. The corresponding recurrence scheme constitutes the most efficient numerical method for the majority of lattice points, in spite of the fact that for explicit numeric lattice points the associated third-order recurrence operator is not the minimum recurrence operator. As regards the asymptotic bounds for the possible solutions to the recurrence scheme, Perron's theorem precludes factorial or exponential growth. Along horizontal lattice directions, rapid initial growth does occur, but poses no problems in augmented dynamic-range fixed precision arithmetic. By analysing long-distance wave propagation along a horizontal lattice direction, we have concluded that the chirp-up oscillations of the discrete space-time GF are the root cause of grid dispersion anisotropy. With each factor of ten increase in the lattice distance, one would have to roughly

  10. Towards Automatic Testing of Reference Point Based Interactive Methods

    OpenAIRE

    Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa

    2016-01-01

    In order to understand strengths and weaknesses of optimization algorithms, it is important to have access to different types of test problems, well defined performance indicators and analysis tools. Such tools are widely available for testing evolutionary multiobjective optimization algorithms. To our knowledge, there do not exist tools for analyzing the performance of interactive multiobjective optimization methods based on the reference point approach to communicating ...

  11. Digital Investigations of AN Archaeological Smart Point Cloud: a Real Time Web-Based Platform to Manage the Visualisation of Semantical Queries

    Science.gov (United States)

    Poux, F.; Neuville, R.; Hallot, P.; Van Wersch, L.; Luczfalvy Jancsó, A.; Billen, R.

    2017-05-01

    While virtual copies of the real world are created faster than ever through point clouds and their derivatives, their proficient use by all professionals demands adapted tools to facilitate knowledge dissemination. Digital investigations are changing the way cultural heritage researchers, archaeologists, and curators work and collaborate to progressively aggregate expertise through one common platform. In this paper, we present a web application in a WebGL framework accessible in any HTML5-compatible browser. It allows real-time point cloud exploration of the mosaics in the Oratory of Germigny-des-Prés, with emphasis on ease of use as well as performance. Our reasoning engine is built over a semantically rich point cloud data structure into which metadata has been injected a priori. We developed a tool that allows direct semantic extraction and visualisation of information pertinent to the end users. This leads to efficient communication between actors by proposing optimal 3D viewpoints as a basis on which interactions can grow.

  12. AUTOMATIC RECOGNITION OF INDOOR NAVIGATION ELEMENTS FROM KINECT POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    L. Zeng

    2017-09-01

Full Text Available This paper automatically recognizes the navigation elements defined by the IndoorGML data standard – door, stairway and wall. The data used are indoor 3D point clouds collected by a Kinect v2 through ORB-SLAM. Compared with lidar, this approach is cheaper and more convenient, but the point clouds also suffer from noise, registration error and large data volume. Hence, we adopt a shape descriptor – the histogram of distances between two randomly chosen points, proposed by Osada – merged with other descriptors and used in conjunction with a random forest classifier, to recognize the navigation elements (door, stairway and wall) from Kinect point clouds. This research acquires navigation elements and their 3D location information from each single data frame through segmentation of point clouds, boundary extraction, feature calculation and classification. Finally, this paper utilizes the acquired navigation elements and their information to generate the state data of the indoor navigation module automatically. The experimental results demonstrate a high recognition accuracy of the proposed method.

  13. Automatic Recognition of Indoor Navigation Elements from Kinect Point Clouds

    Science.gov (United States)

    Zeng, L.; Kang, Z.

    2017-09-01

This paper automatically recognizes the navigation elements defined by the IndoorGML data standard – door, stairway and wall. The data used are indoor 3D point clouds collected by a Kinect v2 through ORB-SLAM. Compared with lidar, this approach is cheaper and more convenient, but the point clouds also suffer from noise, registration error and large data volume. Hence, we adopt a shape descriptor – the histogram of distances between two randomly chosen points, proposed by Osada – merged with other descriptors and used in conjunction with a random forest classifier, to recognize the navigation elements (door, stairway and wall) from Kinect point clouds. This research acquires navigation elements and their 3D location information from each single data frame through segmentation of point clouds, boundary extraction, feature calculation and classification. Finally, this paper utilizes the acquired navigation elements and their information to generate the state data of the indoor navigation module automatically. The experimental results demonstrate a high recognition accuracy of the proposed method.
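The distance-histogram shape descriptor named in the abstract (after Osada's D2 shape distribution) can be sketched in a few lines. This is a minimal illustration on synthetic data, not the authors' implementation; the pair count, bin count and test shapes are arbitrary choices:

```python
import numpy as np

def d2_descriptor(points, n_pairs=10000, n_bins=32, rng=None):
    """Histogram of Euclidean distances between randomly chosen point pairs,
    normalised to sum to 1 so segments of different size are comparable.
    The resulting fixed-length vector can feed a classifier such as a
    random forest."""
    rng = np.random.default_rng(rng)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, d.max()))
    return hist / hist.sum()

# Synthetic segments: a thin planar patch (wall-like) vs. a noisy blob
rng = np.random.default_rng(0)
wall = np.c_[rng.uniform(0, 2, 5000), rng.uniform(0, 3, 5000),
             rng.normal(0, 0.01, 5000)]
blob = rng.normal(0, 1, (5000, 3))
f_wall = d2_descriptor(wall, rng=1)
f_blob = d2_descriptor(blob, rng=1)
```

The two descriptors differ markedly in shape, which is what lets a classifier separate planar elements (walls, doors) from other geometry.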

  14. Common Fixed Points via λ-Sequences in G-Metric Spaces

    Directory of Open Access Journals (Sweden)

    Yaé Ulrich Gaba

    2017-01-01

    Full Text Available We use λ-sequences in this article to derive common fixed points for a family of self-mappings defined on a complete G-metric space. We imitate some existing techniques in our proofs and show that the tools employed can be used at a larger scale. These results generalize well known results in the literature.

  15. The evolving block universe and the meshing together of times.

    Science.gov (United States)

    Ellis, George F R

    2014-10-01

    It has been proposed that spacetime should be regarded as an evolving block universe, bounded to the future by the present time, which continually extends to the future. This future boundary is defined at each time by measuring proper time along Ricci eigenlines from the start of the universe. A key point, then, is that physical reality can be represented at many different scales: hence, the passage of time may be seen as different at different scales, with quantum gravity determining the evolution of spacetime itself at the Planck scale, but quantum field theory and classical physics determining the evolution of events within spacetime at larger scales. The fundamental issue then arises as to how the effective times at different scales mesh together, leading to the concepts of global and local times. © 2014 New York Academy of Sciences.

  16. Undergraduate Consent Form Reading in Relation to Conscientiousness, Procrastination, and the Point-of-Time Effect.

    Science.gov (United States)

    Theiss, Justin D; Hobbs, William B; Giordano, Peter J; Brunson, Olivia M

    2014-07-01

    Informed consent is central to conducting ethical research with human participants. The present study investigated differences in consent form reading in relation to conscientiousness, procrastination, and the point-of-time (PT) effect among undergraduate participants at a U.S. university. As hypothesized, conscientious participants and those who signed up to participate in a research study more days in advance and for earlier sessions (PT effect) read the consent form more thoroughly. However, procrastination was not related to consent form reading. Most importantly, consent form reading in general was poor, with 80% of participants demonstrating that they had not read the consent form. Conscientious participants were more likely to self-report reading the consent form, irrespective of their measured consent form reading. The article closes with suggestions to improve the process of obtaining informed consent with undergraduate participants. © The Author(s) 2014.

  17. A new diagnostic accuracy measure and cut-point selection criterion.

    Science.gov (United States)

    Dong, Tuochuan; Attwood, Kristopher; Hutson, Alan; Liu, Song; Tian, Lili

    2017-12-01

Most diagnostic accuracy measures and criteria for selecting optimal cut-points are only applicable to diseases with binary or three stages. Currently, there exist two diagnostic measures for diseases with general k stages: the hypervolume under the manifold and the generalized Youden index. While the hypervolume under the manifold cannot be used for cut-point selection, the generalized Youden index is only defined upon correct classification rates. This paper proposes a new measure named maximum absolute determinant for diseases with k stages ([Formula: see text]). This comprehensive new measure utilizes all the available classification information and serves as a cut-point selection criterion as well. Both the geometric and probabilistic interpretations of the new measure are examined. Power and simulation studies are carried out to investigate its performance as a measure of diagnostic accuracy as well as a cut-point selection criterion. A real data set from the Alzheimer's Disease Neuroimaging Initiative is analyzed using the proposed maximum absolute determinant.
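A minimal sketch of how a maximum-absolute-determinant criterion could be applied, assuming the measure is the largest |det| of the k-by-k classification-rate matrix over candidate cut-points (the paper's exact formulation may differ); the data and candidate grid are illustrative:

```python
import numpy as np
from itertools import combinations

def classification_matrix(samples, cuts):
    """Row i gives the fraction of stage-i subjects falling into each of
    the k intervals defined by the k-1 ordered cut-points."""
    k = len(samples)
    cuts = np.asarray(cuts, float)
    m = np.empty((k, k))
    for i, x in enumerate(samples):
        idx = np.searchsorted(cuts, x)          # interval index 0..k-1
        m[i] = np.bincount(idx, minlength=k) / len(x)
    return m

def max_abs_determinant(samples, candidates):
    """Grid search over ordered (k-1)-tuples of candidate cut-points,
    returning the largest |det| of the classification matrix together
    with the cut-points achieving it."""
    k = len(samples)
    best, best_cuts = -1.0, None
    for cuts in combinations(sorted(candidates), k - 1):
        val = abs(np.linalg.det(classification_matrix(samples, cuts)))
        if val > best:
            best, best_cuts = val, cuts
    return best, best_cuts

# Toy 3-stage disease with well-separated biomarker distributions
rng = np.random.default_rng(0)
stages = [rng.normal(mu, 1.0, 400) for mu in (0.0, 2.0, 4.0)]
score, cuts = max_abs_determinant(stages, np.linspace(-1.0, 5.0, 25))
```

Because each row of the matrix is a probability distribution, |det| lies in [0, 1]; perfect classification gives the identity matrix and |det| = 1.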

  18. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies: The Evidence and the Framework.

    Science.gov (United States)

    Pant Pai, Nitika; Chiavegatti, Tiago; Vijh, Rohit; Karatzas, Nicolaos; Daher, Jana; Smallwood, Megan; Wong, Tom; Engel, Nora

    2017-12-01

Pilot (feasibility) studies form a vast majority of diagnostic studies with point-of-care technologies but often lack use of clear measures/metrics and a consistent framework for reporting and evaluation. To fill this gap, we systematically reviewed data to (a) catalog feasibility measures/metrics and (b) propose a framework. For the period January 2000 to March 2014, 2 reviewers searched 4 databases (MEDLINE, EMBASE, CINAHL, Scopus), retrieved 1441 citations, and abstracted data from 81 studies. We observed 2 major categories of measures, that is, implementation centered and patient centered, and 4 subcategories of measures, that is, feasibility, acceptability, preference, and patient experience. We defined and delineated metrics and measures for a feasibility framework. We documented impact measures for a comparison. We observed heterogeneity in reporting of metrics as well as misclassification and misuse of metrics within measures. Although we observed poorly defined measures and metrics for feasibility, preference, and patient experience, in contrast, acceptability measure was the best defined. For example, within feasibility, metrics such as consent, completion, new infection, linkage rates, and turnaround times were misclassified and reported. Similarly, patient experience was variously reported as test convenience, comfort, pain, and/or satisfaction. In contrast, within impact measures, all the metrics were well documented, thus serving as a good baseline comparator. With our framework, we classified, delineated, and defined quantitative measures and metrics for feasibility. Our framework, with its defined measures/metrics, could reduce misclassification and improve the overall quality of reporting for monitoring and evaluation of rapid point-of-care technology strategies and their context-driven optimization.

19. Defining Nitrogen Kinetics for Air Break in Prebreathe

    Science.gov (United States)

    Conkin, Johnny

    2010-01-01

Actual tissue nitrogen (N2) kinetics are complex; the uptake and elimination is often approximated with a single half-time compartment in statistical descriptions of denitrogenation [prebreathe (PB)] protocols. Air breaks during PB complicate N2 kinetics. A comparison of symmetrical versus asymmetrical N2 kinetics was performed using the time to onset of hypobaric decompression sickness (DCS) as a surrogate for actual venous N2 tension. METHODS: Published results of 12 tests involving 179 hypobaric exposures in altitude chambers after PB, with and without air breaks, provide the complex protocols from which to model N2 kinetics. DCS survival time for combined control and air-break exposures was described with an accelerated log logistic model where N2 uptake and elimination before, during, and after the air break was computed with a simple exponential function or a function that changed half-time depending on ambient N2 partial pressure. P1N2 - P2 = ΔP defined decompression dose for each altitude exposure, where P2 was the test altitude and P1N2 was the computed N2 pressure at the beginning of the altitude exposure. RESULTS: The log likelihood (LL) without decompression dose (null model) was -155.6, and improved (best-fit) to -97.2 when dose was defined with a 240 min half-time for both N2 elimination and uptake during the PB. The description of DCS survival time was less precise with asymmetrical N2 kinetics; for example, LL was -98.9 with 240 min half-time elimination and 120 min half-time uptake. CONCLUSION: The statistical regression described survival time mechanistically linked to symmetrical N2 kinetics during PBs that also included air breaks. The results are data-specific, and additional data may change the conclusion. The regression is useful to compute additional PB time to compensate for an air break in PB within the narrow range of tested conditions.

  20. Defining Nitrogen Kinetics for Air Break in Prebreathe

    Science.gov (United States)

    Conkin, Johnny

    2009-01-01

Actual tissue nitrogen (N2) kinetics are complex; the uptake and elimination is often approximated with a single half-time compartment in statistical descriptions of denitrogenation [prebreathe (PB)] protocols. Air breaks during PB complicate N2 kinetics. A comparison of symmetrical versus asymmetrical N2 kinetics was performed using the time to onset of hypobaric decompression sickness (DCS) as a surrogate for actual venous N2 tension. Published results of 12 tests involving 179 hypobaric exposures in altitude chambers after PB, with and without air breaks, provide the complex protocols from which to model N2 kinetics. DCS survival time for combined control and air-break exposures was described with an accelerated log logistic model where N2 uptake and elimination before, during, and after the air break was computed with a simple exponential function or a function that changed half-time depending on ambient N2 partial pressure. P1N2 - P2 = ΔP defined DCS dose for each altitude exposure, where P2 was the test altitude and P1N2 was the computed N2 pressure at the beginning of the altitude exposure. The log likelihood (LL) without DCS dose (null model) was -155.6, and improved (best-fit) to -97.2 when dose was defined with a 240 min half-time for both N2 elimination and uptake during the PB. The description of DCS survival time was less precise with asymmetrical N2 kinetics; for example, LL was -98.9 with 240 min half-time elimination and 120 min half-time uptake. The statistical regression described survival time mechanistically linked to symmetrical N2 kinetics during PBs that also included air breaks. The results are data-specific, and additional data may change the conclusion. The regression is useful to compute additional PB time to compensate for an air break in PB within the narrow range of tested conditions.
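The single half-time compartment model described above can be sketched as follows. The protocol durations, the assumption of zero ambient N2 during oxygen prebreathe, and the 11.6 psia sea-level starting tension are illustrative values, not the study's data; only the 240 min (symmetric) and 120 min (asymmetric uptake) half-times come from the abstract:

```python
import math

def n2_tension(p0, pa, minutes, half_time):
    """Single-compartment exponential N2 kinetics: tissue tension relaxes
    from p0 toward the ambient N2 tension pa with rate ln2 / half_time."""
    k = math.log(2.0) / half_time
    return pa + (p0 - pa) * math.exp(-k * minutes)

def prebreathe_with_air_break(p_start, pb1, brk, pb2,
                              t_half_elim=240.0, t_half_uptake=240.0):
    """Tissue N2 through prebreathe / air break / resumed prebreathe.
    Equal half-times give the symmetric kinetics the abstract found
    best-fit (240 min); a shorter uptake half-time (e.g. 120 min) gives
    the asymmetric variant."""
    p = n2_tension(p_start, 0.0, pb1, t_half_elim)   # O2 PB: washout, ambient N2 ~ 0
    p = n2_tension(p, p_start, brk, t_half_uptake)   # air break: uptake toward ambient
    return n2_tension(p, 0.0, pb2, t_half_elim)      # resume prebreathe

# Hypothetical protocol: 60-min PB, 10-min air break, 30-min PB
p_sym = prebreathe_with_air_break(11.6, 60.0, 10.0, 30.0)
p_asym = prebreathe_with_air_break(11.6, 60.0, 10.0, 30.0, t_half_uptake=120.0)
```

As expected, faster uptake during the air break leaves a higher tissue N2 tension (larger decompression dose) at the end of the resumed prebreathe.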

  1. Multiple Time-Point 68Ga-PSMA I&T PET/CT for Characterization of Primary Prostate Cancer: Value of Early Dynamic and Delayed Imaging.

    Science.gov (United States)

    Schmuck, Sebastian; Mamach, Martin; Wilke, Florian; von Klot, Christoph A; Henkenberens, Christoph; Thackeray, James T; Sohns, Jan M; Geworski, Lilli; Ross, Tobias L; Wester, Hans-Juergen; Christiansen, Hans; Bengel, Frank M; Derlin, Thorsten

    2017-06-01

The aims of this study were to gain mechanistic insights into prostate cancer biology using dynamic imaging and to evaluate the usefulness of multiple time-point 68Ga-prostate-specific membrane antigen (PSMA) I&T PET/CT for the assessment of primary prostate cancer before prostatectomy. Twenty patients with prostate cancer underwent 68Ga-PSMA I&T PET/CT before prostatectomy. The PET protocol consisted of early dynamic pelvic imaging, followed by static scans at 60 and 180 minutes postinjection (p.i.). SUVs, time-activity curves, quantitative analysis based on a 2-tissue compartment model, Patlak analysis, histopathology, and Gleason grading were compared between prostate cancer and benign prostate gland. Primary tumors were identified on both early dynamic and delayed imaging in 95% of patients. Tracer uptake was significantly higher in prostate cancer compared with benign prostate tissue at any time point (P ≤ 0.0003) and increased over time. Consequently, the tumor-to-nontumor ratio within the prostate gland improved over time (2.8 at 10 minutes vs 17.1 at 180 minutes p.i.). Tracer uptake at both 60 and 180 minutes p.i. was significantly higher in patients with higher Gleason scores. Primary prostate cancer can thus be identified on both early dynamic and static delayed 68Ga-PSMA ligand PET images. The tumor-to-nontumor ratio in the prostate gland improves over time, supporting a role of delayed imaging for optimal visualization of prostate cancer.
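Multiple time-point uptake is commonly summarized by a retention index and a tumor-to-nontumor ratio. A minimal sketch; the SUV values below are hypothetical, chosen only to echo the ratios reported in the abstract, not the study's raw measurements:

```python
def retention_index(suv_early, suv_delayed):
    """Percent change in SUV between two time points; uptake that keeps
    rising (positive RI) is the pattern reported here for tumor."""
    return 100.0 * (suv_delayed - suv_early) / suv_early

def tumor_to_nontumor(suv_tumor, suv_background):
    """Contrast between lesion and benign gland at one time point."""
    return suv_tumor / suv_background

# Hypothetical SUVs reproducing the reported ratios (2.8 at 10 min,
# 17.1 at 180 min): tumor uptake rises while background clears.
ri_tumor = retention_index(8.0, 14.0)
ratio_10min = tumor_to_nontumor(5.6, 2.0)
ratio_180min = tumor_to_nontumor(13.68, 0.8)
```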

  2. Discrete-Time receivers for software-defined radio: challenges and solutions

    NARCIS (Netherlands)

    Ru, Z.; Klumperink, Eric A.M.; Nauta, Bram

    2007-01-01

CMOS radio receiver architectures, based on radio frequency (RF) sampling followed by discrete-time (DT) signal processing via switched-capacitor circuits, have recently been proposed for dedicated radio standards. This paper explores the suitability of such DT receivers for highly flexible

  3. 45 CFR 506.11 - “Prisoner of war” defined.

    Science.gov (United States)

    2010-10-01

    ... OF THE WAR CLAIMS ACT OF 1948, AS AMENDED ELIGIBILITY REQUIREMENTS FOR COMPENSATION Prisoners of War § 506.11 “Prisoner of war” defined. Prisoner of war means any regularly appointed, enrolled, enlisted or... States for any period of time during the Vietnam conflict. ...

  4. Defining nodes in complex brain networks

    Directory of Open Access Journals (Sweden)

    Matthew Lawrence Stanley

    2013-11-01

Full Text Available Network science holds great promise for expanding our understanding of the human brain in health, disease, development, and aging. Network analyses are quickly becoming the method of choice for analyzing functional MRI data. However, many technical issues have yet to be confronted in order to optimize results. One particular issue that remains controversial in functional brain network analyses is the definition of a network node. In functional brain networks a node represents some predefined collection of brain tissue, and an edge measures the functional connectivity between pairs of nodes. The characteristics of a node, chosen by the researcher, vary considerably in the literature. This manuscript reviews the current state of the art based on published manuscripts and highlights the strengths and weaknesses of three main methods for defining nodes. Voxel-wise networks are constructed by assigning a node to each equally sized brain area (voxel). The fMRI time-series recorded from each voxel is then used to create the functional network. Anatomical methods utilize atlases to define the nodes based on brain structure. The fMRI time-series from all voxels within the anatomical area are averaged and subsequently used to generate the network. Functional activation methods rely on data from traditional fMRI activation studies, often from databases, to identify network nodes. Such methods identify the peaks or centers of mass from activation maps to determine the location of the nodes. Small (~10-20 millimeter diameter) spheres located at the coordinates of the activation foci are then applied to the data being used in the network analysis. The fMRI time-series from all voxels in the sphere are then averaged, and the resultant time series is used to generate the network. We attempt to clarify the discussion and move the study of complex brain networks forward. While the correct method to be used remains an open, possibly unsolvable question that
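The sphere-based node definition and the construction of edges from node time series can be sketched as follows. The data here are random placeholders, and the Pearson-correlation edge definition is one common choice, not necessarily that of any study reviewed:

```python
import numpy as np

def sphere_node_timeseries(data, coords, center, radius):
    """Average the time series of all voxels whose coordinates fall inside
    a sphere around an activation focus (the 'functional activation' node
    definition); anatomical nodes would average over an atlas label instead.
    data: (n_voxels, n_timepoints), coords: (n_voxels, 3)."""
    inside = np.linalg.norm(coords - np.asarray(center, float), axis=1) <= radius
    return data[inside].mean(axis=0)

def functional_network(node_series):
    """Edges as pairwise Pearson correlations between node time series."""
    return np.corrcoef(np.asarray(node_series))

# Toy example: 3 spherical nodes in a random voxel grid (hypothetical data)
rng = np.random.default_rng(0)
coords = rng.uniform(0, 50, (2000, 3))
data = rng.normal(0, 1, (2000, 120))
centers = [(10, 10, 10), (25, 25, 25), (40, 40, 40)]
nodes = [sphere_node_timeseries(data, coords, c, radius=10.0) for c in centers]
adj = functional_network(nodes)
```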

  5. Stability analysis of switched linear systems defined by graphs

    OpenAIRE

    Athanasopoulos, Nikolaos; Lazar, Mircea

    2015-01-01

We present necessary and sufficient conditions for global exponential stability of switched discrete-time linear systems, under arbitrary switching, which is constrained within a set of admissible transitions. The class of systems studied includes the family of systems under arbitrary switching, periodic systems, and systems with minimum and maximum dwell time specifications. To reach the result, we describe the set of rules that define the admissible transitions with a weighted directed graph.

  6. Longer wait times affect future use of VHA primary care.

    Science.gov (United States)

    Wong, Edwin S; Liu, Chuan-Fen; Hernandez, Susan E; Augustine, Matthew R; Nelson, Karin; Fihn, Stephan D; Hebert, Paul L

    2017-07-29

Improving access to the Veterans Health Administration (VHA) is a high priority, particularly given statutory mandates of the Veterans Access, Choice and Accountability Act. This study examined whether patient-reported wait times for VHA appointments were associated with future reliance on VHA primary care services. This observational study examined 13,595 VHA patients dually enrolled in fee-for-service Medicare. Data sources included VHA administrative data, Medicare claims and the Survey of Healthcare Experiences of Patients (SHEP). Primary care use was defined as the number of face-to-face visits from VHA and Medicare in the 12 months following SHEP completion. VHA reliance was defined as the number of VHA visits divided by total visits (VHA + Medicare). Wait times were derived from SHEP responses measuring the usual number of days to a VHA appointment with patients' primary care provider for those seeking immediate care. We defined appointment wait times categorically: 0 days, 1 day, 2-3 days, 4-7 days and >7 days. We used fractional logistic regression to examine the relationship between wait times and reliance. Mean VHA reliance was 88.1% (95% CI = 86.7% to 89.5%) for patients reporting 0-day waits. Compared with these patients, reliance over the subsequent year was 1.4 (p = 0.041), 2.8 (p = 0.001) and 1.6 (p = 0.014) percentage points lower for patients waiting 2-3 days, 4-7 days and >7 days, respectively. Patients reporting longer usual wait times for immediate VHA care exhibited lower future reliance on VHA primary care. Longer wait times may reduce care continuity and impact cost shifting across two federal health programs. Copyright © 2017. Published by Elsevier Inc.
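The reliance outcome defined above is a simple fraction. A minimal sketch of computing it and averaging by wait-time category; the cohort shown is made up, and the fractional logistic regression itself would be fit with a statistics package rather than by hand:

```python
def vha_reliance(vha_visits, medicare_visits):
    """Reliance = VHA face-to-face visits over all visits (VHA + Medicare);
    undefined (None) for patients with no visits at all."""
    total = vha_visits + medicare_visits
    return vha_visits / total if total else None

WAIT_CATEGORIES = ["0 days", "1 day", "2-3 days", "4-7 days", ">7 days"]

def mean_reliance_by_wait(patients):
    """patients: iterable of (wait_category, vha_visits, medicare_visits).
    Returns mean reliance per category, or None where no patients qualify."""
    sums = {c: [0.0, 0] for c in WAIT_CATEGORIES}
    for cat, v, m in patients:
        r = vha_reliance(v, m)
        if r is not None:
            sums[cat][0] += r
            sums[cat][1] += 1
    return {c: (s / n if n else None) for c, (s, n) in sums.items()}

# Hypothetical cohort, not the study's data
cohort = [("0 days", 9, 1), ("0 days", 8, 2),
          ("4-7 days", 6, 4), ("4-7 days", 7, 3)]
means = mean_reliance_by_wait(cohort)
```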

  7. Defining and systematic analyses of aggregation indices to evaluate degree of calcium oxalate crystal aggregation

    Science.gov (United States)

    Chaiyarit, Sakdithep; Thongboonkerd, Visith

    2017-12-01

Crystal aggregation is one of the most crucial steps in kidney stone pathogenesis. However, previous studies of crystal aggregation are rare, and quantitative analysis of aggregation degree has been handicapped by the lack of a standard measurement. We thus performed an in vitro assay to generate aggregation of calcium oxalate monohydrate (COM) crystals at various concentrations (25-800 µg/ml) in saturated aggregation buffer. The crystal aggregates were analyzed by microscopic examination, UV-visible spectrophotometry, and GraphPad Prism6 software to define a total of 12 aggregation indices (including number of aggregates, aggregated mass index, optical density, aggregation coefficient, span, number of aggregates at plateau time-point, aggregated area index, aggregated diameter index, aggregated symmetry index, time constant, half-life, and rate constant). The data showed a linear correlation between crystal concentration and almost all of these indices, except only for rate constant. Among these, number of aggregates provided the greatest regression coefficient (r = 0.997; p < 0.001), followed by indices with r = 0.993, -0.993 and 0.991 (p < 0.001 for all). These indices are thus recommended as the most appropriate for quantitative analysis of COM crystal aggregation in vitro.
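Kinetic indices such as rate constant, time constant and half-life come from fitting a plateau-type exponential to the measured time course. A minimal numpy sketch, assuming the form y(t) = plateau * (1 - exp(-k*t)); the actual model fit in the GraphPad software named above may differ, and the synthetic data here are illustrative:

```python
import numpy as np

def fit_aggregation_kinetics(t, y, plateau=None):
    """Estimate rate constant, time constant and half-life from a time
    course assumed to follow y(t) = plateau * (1 - exp(-k*t)), by
    linearising ln(1 - y/plateau) = -k*t.  If the plateau is not supplied
    it is approximated from the largest observation (an assumption;
    curve-fitting software estimates it jointly)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    if plateau is None:
        plateau = 1.02 * y.max()
    mask = y < 0.9 * plateau   # stay where the linearisation is well-conditioned
    slope = np.polyfit(t[mask], np.log(1.0 - y[mask] / plateau), 1)[0]
    k = -slope
    return {"rate_constant": k,
            "time_constant": 1.0 / k,
            "half_life": np.log(2.0) / k}

# Synthetic optical-density time course with known k = 0.05 /min
t = np.arange(0.0, 121.0, 5.0)
y = 1.2 * (1.0 - np.exp(-0.05 * t))
est = fit_aggregation_kinetics(t, y, plateau=1.2)
```

With the plateau supplied, the linearisation is exact and the known rate constant is recovered; with an estimated plateau the fit is approximate.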

  8. Two-point functions in (loop) quantum cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Calcagni, Gianluca; Oriti, Daniele [Max-Planck-Institute for Gravitational Physics (Albert Einstein Institute), Am Muehlenberg 1, D-14476 Golm (Germany); Gielen, Steffen [Max-Planck-Institute for Gravitational Physics (Albert Einstein Institute), Am Muehlenberg 1, D-14476 Golm (Germany); DAMTP, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA (United Kingdom)

    2011-07-01

    We discuss the path-integral formulation of quantum cosmology with a massless scalar field as a sum-over-histories of volume transitions, with particular but non-exclusive reference to loop quantum cosmology (LQC). Exploiting the analogy with the relativistic particle, we give a complete overview of the possible two-point functions, pointing out the choices involved in their definitions, deriving their vertex expansions and the composition laws they satisfy. We clarify the origin and relations of different quantities previously defined in the literature, in particular the tie between definitions using a group averaging procedure and those in a deparametrized framework. Finally, we draw some conclusions about the physics of a single quantum universe (where there exist superselection rules on positive- and negative-frequency sectors and different choices of inner product are physically equivalent) and multiverse field theories where the role of these sectors and the inner product are reinterpreted.

  9. Two-point functions in (loop) quantum cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Calcagni, Gianluca; Gielen, Steffen; Oriti, Daniele, E-mail: calcagni@aei.mpg.de, E-mail: gielen@aei.mpg.de, E-mail: doriti@aei.mpg.de [Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Muehlenberg 1, D-14476 Golm (Germany)

    2011-06-21

    The path-integral formulation of quantum cosmology with a massless scalar field as a sum-over-histories of volume transitions is discussed, with particular but non-exclusive reference to loop quantum cosmology. Exploiting the analogy with the relativistic particle, we give a complete overview of the possible two-point functions, pointing out the choices involved in their definitions, deriving their vertex expansions and the composition laws they satisfy. We clarify the origin and relations of different quantities previously defined in the literature, in particular the tie between definitions using a group averaging procedure and those in a deparametrized framework. Finally, we draw some conclusions about the physics of a single quantum universe (where there exist superselection rules on positive- and negative-frequency sectors and different choices of inner product are physically equivalent) and multiverse field theories where the role of these sectors and the inner product are reinterpreted.

  10. Two-point functions in (loop) quantum cosmology

    International Nuclear Information System (INIS)

    Calcagni, Gianluca; Gielen, Steffen; Oriti, Daniele

    2011-01-01

    The path-integral formulation of quantum cosmology with a massless scalar field as a sum-over-histories of volume transitions is discussed, with particular but non-exclusive reference to loop quantum cosmology. Exploiting the analogy with the relativistic particle, we give a complete overview of the possible two-point functions, pointing out the choices involved in their definitions, deriving their vertex expansions and the composition laws they satisfy. We clarify the origin and relations of different quantities previously defined in the literature, in particular the tie between definitions using a group averaging procedure and those in a deparametrized framework. Finally, we draw some conclusions about the physics of a single quantum universe (where there exist superselection rules on positive- and negative-frequency sectors and different choices of inner product are physically equivalent) and multiverse field theories where the role of these sectors and the inner product are reinterpreted.

  11. The Markov chain method for solving dead time problems in the space dependent model of reactor noise

    International Nuclear Information System (INIS)

    Degweker, S.B.

    1997-01-01

    The discrete time Markov chain approach for deriving the statistics of time-correlated pulses, in the presence of a non-extending dead time, is extended to include the effect of space energy distribution of the neutron field. Equations for the singlet and doublet densities of follower neutrons are derived by neglecting correlations beyond the second order. These equations are solved by the modal method. It is shown that in the unimodal approximation, the equations reduce to the point model equations with suitably defined parameters. (author)

  12. The Time 'Onewayness' Shared by Quantum Mechanics and Relativity

    International Nuclear Information System (INIS)

    Guzzetta, Giuseppe

    2006-01-01

The measure of the mutation, or change, any material elementary particle unceasingly undergoes, is defined as that of the displacement of a point moving in a three-dimensional Euclidean space, at the velocity of light, on a trajectory decomposable in a rotation and a translation. The rotation accounts for the spin angular momentum of the particle, the translation for its change of location. Then, an elementary mutation is proportional to an elementary interval of universal time. The connection between space and time is such that the operation of universal time conjugation, that is, the change of sign of t, involves space inversion, so coinciding with the operation currently defined as TCP. It implies that to a given physical process, another equally possible one corresponds in which the sequence of events (that still follow the same time course) is reversed, and actors are the enantiomorphic counterparts (anti-particles instead of particles, and vice versa) of those playing in the first physical process. Since no alternative is left to any elementary particle, that exists in that it undergoes an everlasting mutation, the unidirectionality of time must not be understood as a choice between two alternative directions. Many formalisms of Special Relativity can be derived from the above definition of the mutation of a material elementary particle. Anyhow, some discrepancies seem to crop up whose discussion is beyond the purpose of the present paper.

  13. On zero-point energy, stability and Hagedorn behavior of Type IIB strings on pp-waves

    International Nuclear Information System (INIS)

    Bigazzi, F.; Cotrone, A.L.

    2003-06-01

Type IIB strings on many pp-wave backgrounds, supported either by 5-form or 3-form fluxes, have negative light-cone zero-point energy. This raises the question of their stability and poses possible problems in the definition of their thermodynamic properties. After having pointed out the correct way of calculating the zero-point energy, an issue not fully discussed in the literature, we show that these Type IIB strings are classically stable and have well-defined thermal properties, exhibiting a Hagedorn behavior. (author)

  14. Software-defined reconfigurable microwave photonics processor.

    Science.gov (United States)

    Pérez, Daniel; Gasulla, Ivana; Capmany, José

    2015-06-01

    We propose, for the first time to our knowledge, a software-defined reconfigurable microwave photonics signal processor architecture that can be integrated on a chip and is capable of performing all the main functionalities by suitable programming of its control signals. The basic configuration is presented and a thorough end-to-end design model derived that accounts for the performance of the overall processor taking into consideration the impact and interdependencies of both its photonic and RF parts. We demonstrate the model versatility by applying it to several relevant application examples.

  15. The power of PowerPoint.

    Science.gov (United States)

Niamtu, J

    2001-08-01

    Carousel slide presentations have been used for academic and clinical presentations since the late 1950s. However, advances in computer technology have caused a paradigm shift, and digital presentations are quickly becoming standard for clinical presentations. The advantages of digital presentations include cost savings; portability; easy updating capability; Internet access; multimedia functions, such as animation, pictures, video, and sound; and customization to augment audience interest and attention. Microsoft PowerPoint has emerged as the most popular digital presentation software and is currently used by many practitioners with and without significant computer expertise. The user-friendly platform of PowerPoint enables even the novice presenter to incorporate digital presentations into his or her profession. PowerPoint offers many advanced options that, with a minimal investment of time, can be used to create more interactive and professional presentations for lectures, patient education, and marketing. Examples of advanced PowerPoint applications are presented in a stepwise manner to unveil the full power of PowerPoint. By incorporating these techniques, medical practitioners can easily personalize, customize, and enhance their PowerPoint presentations. Complications, pitfalls, and caveats are discussed to detour and prevent misadventures in digital presentations. Relevant Web sites are listed to further update, customize, and communicate PowerPoint techniques.

  16. Predictors of exclusive breastfeeding across three time points in Bangladesh: an examination of the 2007, 2011 and 2014 Demographic and Health Survey.

    Science.gov (United States)

    Blackstone, Sarah R; Sanghvi, Tina

    2018-05-01

    The objective of this study was to explore predictors of exclusive breastfeeding (EBF) in Bangladesh using data from 2007, 2011 and 2014, specifically focusing on potential reasons why rates of EBF changed over those time periods. Data on mother/infant pairs with infants <6 months of age were examined at the three time points using the Bangladesh Demographic and Health Survey. The EBF prevalence, changes in EBF since the previous survey and determinants of EBF at each time period were examined using t-tests, χ2 and multilevel logistic regression. The prevalence of EBF was 42.5, 65 and 59.4% in 2007, 2011 and 2014, respectively. The age of the child was significantly associated with EBF across all time points. The largest changes in EBF occurred in the 3- to 5-month age group. Predictors of EBF in this specific age group were similar to overall predictors (e.g. age of the child and region). Participation of the mother in household decisions was a significant predictor in 2014. EBF prevalence in Bangladesh increased between 2007 and 2011 and then decreased between 2011 and 2014. The increase in 2011 may have been the result of widespread initiatives to promote EBF in that time frame. Due to the unexplained decrease in EBF between 2011 and 2014, there is still a need for interventions such as peer counselling, antenatal education and community awareness to promote EBF.

  17. Fossils, molecules, divergence times, and the origin of lissamphibians.

    Science.gov (United States)

    Marjanović, David; Laurin, Michel

    2007-06-01

    A review of the paleontological literature shows that the early dates of appearance of Lissamphibia recently inferred from molecular data do not favor an origin of extant amphibians from temnospondyls, contrary to recent claims. A supertree is assembled using new Mesquite modules that allow extinct taxa to be incorporated into a time-calibrated phylogeny with a user-defined geological time scale. The supertree incorporates 223 extinct species of lissamphibians and has a highly significant stratigraphic fit. Some divergences can even be dated with sufficient precision to serve as calibration points in molecular divergence date analyses. Fourteen combinations of minimal branch length settings and 10 random resolutions for each polytomy give much more recent minimal origination times of lissamphibian taxa than recent studies based on phylogenetic analyses of molecular sequences. Attempts to replicate recent molecular date estimates show that these estimates depend strongly on the choice of calibration points, on the dating method, and on the chosen model of evolution; for instance, the estimate for the date of the origin of Lissamphibia can lie between 351 and 266 Mya. This range of values is generally compatible with our time-calibrated supertree and indicates that there is no unbridgeable gap between dates obtained using the fossil record and those using molecular evidence, contrary to previous suggestions.

  18. Cardinal and anti-cardinal points, equalities and chromatic dependence.

    Science.gov (United States)

    Evans, Tanya; Harris, William F

    2017-05-01

    when defining concepts that rely on cardinal points that depend on frequency.

  19. Quad channel software defined receiver for passive radar application

    Directory of Open Access Journals (Sweden)

    Pető Tamás

    2017-03-01

    In recent times the growing utilization of the electromagnetic environment has brought passive radar research more and more to the fore. Exploiting the wide range of illuminators of opportunity requires the application of wideband radio receivers. At the same time, a multichannel receiver structure is of critical importance for target direction finding and interference suppression. This paper presents the development of a multichannel software defined receiver specifically for passive radar applications. One of the relevant features of the developed receiver platform is its up-to-date SoC (System on Chip) based structure, which greatly enhances the integration and signal processing capacity of the system, all while keeping the costs low. The software defined operation of the discussed receiver system is demonstrated using a DVB-T (Digital Video Broadcast – Terrestrial) signal as illuminator of opportunity. During this demonstration the multichannel capabilities of the realized system are also tested with real data using direction finding and beamforming algorithms.

  20. Three dimensional Dirac point at k=0 in photonic and phononic systems

    OpenAIRE

    Huang, Xueqin; Liu, Fengming; Chan, C. T.

    2012-01-01

    While "Dirac cone" dispersions can only be meaningfully defined in two dimensional (2D) systems, the notion of a Dirac point can be extended to three dimensional (3D) classical wave systems. We show that a simple cubic photonic crystal composed of core-shell spheres exhibits a 3D Dirac point at the center of the Brillouin zone at a finite frequency. Using effective medium theory, we can map our structure to a zero refractive index material in which the effective permittivity and permeability...

  1. [Defining AIDS terminology. A practical approach].

    Science.gov (United States)

    Locutura, Jaime; Almirante, Benito; Berenguer, Juan; Muñoz, Agustín; Peña, José María

    2003-01-01

    Since the appearance of AIDS, the study of this disease has generated a large amount of information and an extensive related vocabulary comprised of new terms or terms borrowed from other scientific fields. The urgent need to provide names for newly described phenomena and concepts in this field has resulted in the application of terms that are not always appropriate from the linguistic and scientific points of view. We discuss the difficulties in attempting to create adequate AIDS terminology in the Spanish language, considering both the general problems involved in building any scientific vocabulary and the specific problems inherent to this activity in a field whose defining illness has important social connotations. The pressure exerted by the predominance of the English language in reporting scientific knowledge is considered, and the inappropriate words most often found in a review of current literature are examined. Finally, attending to the two most important criteria for the creation of new scientific terms, accuracy and linguistic correction, we propose some well thought-out alternatives that conform to the essence of the Spanish language.

  2. Gluon 2- and 3-Point Correlation Functions on the Lattice

    OpenAIRE

    Parrinello, Claudio

    1993-01-01

    I present some preliminary results, obtained in collaboration with C. Bernard and A. Soni, for the lattice evaluation of 2- and 3-point gluon correlation functions in momentum space, with emphasis on the amputated 3-gluon vertex function. The final goal of this approach is the study of the running QCD coupling constant as defined from the amputated 3-gluon vertex.

  3. 2- and 3-point gluon correlation functions on the lattice

    Energy Technology Data Exchange (ETDEWEB)

    Parrinello, C. (Dept. of Physics, Univ. of Edinburgh (United Kingdom))

    1994-04-01

    I present some preliminary results, obtained in collaboration with C. Bernard and A. Soni, for the lattice evaluation of 2- and 3-point gluon correlation functions in momentum space, with emphasis on the amputated 3-gluon vertex function. The final goal of this approach is the study of the running QCD coupling constant as defined from the amputated 3-gluon vertex. (orig.)

  4. Motor synergies and the equilibrium-point hypothesis.

    Science.gov (United States)

    Latash, Mark L

    2010-07-01

    The article offers a way to unite three recent developments in the field of motor control and coordination: (1) The notion of synergies is introduced based on the principle of motor abundance; (2) The uncontrolled manifold hypothesis is described as offering a computational framework to identify and quantify synergies; and (3) The equilibrium-point hypothesis is described for a single muscle, single joint, and multijoint systems. Merging these concepts into a single coherent scheme requires focusing on control variables rather than performance variables. The principle of minimal final action is formulated as the guiding principle within the referent configuration hypothesis. Motor actions are associated with setting two types of variables by a controller, those that ultimately define average performance patterns and those that define associated synergies. Predictions of the suggested scheme are reviewed, such as the phenomenon of anticipatory synergy adjustments, quick actions without changes in synergies, atypical synergies, and changes in synergies with practice. A few models are briefly reviewed.

  5. A systematic review of near real-time and point-of-care clinical decision support in anesthesia information management systems.

    Science.gov (United States)

    Simpao, Allan F; Tan, Jonathan M; Lingappan, Arul M; Gálvez, Jorge A; Morgan, Sherry E; Krall, Michael A

    2017-10-01

    Anesthesia information management systems (AIMS) are sophisticated hardware and software technology solutions that can provide electronic feedback to anesthesia providers. This feedback can be tailored to provide clinical decision support (CDS) to aid clinicians with patient care processes, documentation compliance, and resource utilization. We conducted a systematic review of peer-reviewed articles on near real-time and point-of-care CDS within AIMS using the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols. Studies were identified by searches of the electronic databases Medline and EMBASE. Two reviewers screened studies based on title, abstract, and full text. Studies that were similar in intervention and desired outcome were grouped into CDS categories. Three reviewers graded the evidence within each category. The final analysis included 25 articles on CDS as implemented within AIMS. CDS categories included perioperative antibiotic prophylaxis, post-operative nausea and vomiting prophylaxis, vital sign monitors and alarms, glucose management, blood pressure management, ventilator management, clinical documentation, and resource utilization. Of these categories, the reviewers graded perioperative antibiotic prophylaxis and clinical documentation as having strong evidence per the peer reviewed literature. There is strong evidence for the inclusion of near real-time and point-of-care CDS in AIMS to enhance compliance with perioperative antibiotic prophylaxis and clinical documentation. Additional research is needed in many other areas of AIMS-based CDS.

  6. Sleep Health: Can We Define It? Does It Matter?

    Science.gov (United States)

    Buysse, Daniel J.

    2014-01-01

    Good sleep is essential to good health. Yet for most of its history, sleep medicine has focused on the definition, identification, and treatment of sleep problems. Sleep health is a term that is infrequently used and even less frequently defined. It is time for us to change this. Indeed, pressures in the research, clinical, and regulatory environments require that we do so. The health of populations is increasingly defined by positive attributes such as wellness, performance, and adaptation, and not merely by the absence of disease. Sleep health can be defined in such terms. Empirical data demonstrate several dimensions of sleep that are related to health outcomes, and that can be measured with self-report and objective methods. One suggested definition of sleep health and a description of self-report items for measuring it are provided as examples. The concept of sleep health synergizes with other health care agendas, such as empowering individuals and communities, improving population health, and reducing health care costs. Promoting sleep health also offers the field of sleep medicine new research and clinical opportunities. In this sense, defining sleep health is vital not only to the health of populations and individuals, but also to the health of sleep medicine itself. Citation: Buysse DJ. Sleep health: can we define it? Does it matter? SLEEP 2014;37(1):9-17. PMID:24470692

  7. 47 CFR 51.331 - Notice of network changes: Timing of notice.

    Science.gov (United States)

    2010-10-01

    ... make/buy point is the point at which the incumbent LEC makes a definite decision to implement a network... changes at the make/buy point, as defined in paragraph (b) of this section, but at least 12 months before.../buy point, public notice must be given at the make/buy point, but at least six months before...

  8. [Management of intractable epistaxis and bleeding points localization].

    Science.gov (United States)

    Yang, Da-Zhang; Cheng, Jing-Ning; Han, Jun; Shu, Ping; Zhang, Hua

    2005-05-01

    To investigate the common nasal bleeding points and the management of intractable epistaxis. The bleeding points, their correlation with age distribution, the surgical techniques and their effects were studied retrospectively in 92 patients in whom the bleeding points were not found by routine nasal endoscopy and the hemorrhage was not controlled with standard nasal packing. The bleeding points were found in the following sites: superior wall of the inferior nasal meatus (56.5%, 52/92), olfactory cleft of the nasal septum (27.2%, 25/92), posterosuperior wall of the middle nasal meatus (8.7%, 8/92) and uncertain (7.6%, 7/92). The results showed that the bleeding points correlated with age. Epistaxis was well controlled by electrocoagulation in 83 cases, gelfoam packing in 8 cases, and transcatheter maxillary artery embolization in 1 case. There were no complications during a follow-up of 1 - 3 months after management. Among the 92 cases, bleeding was stopped after 1 treatment in 82 cases (89.1%), after 2 treatments in 9 cases (9.8%), and after 4 treatments in 1 case (1.1%). Endoscopy combined with displacement of the middle and inferior turbinates gives good visualization and direct management of deeply sited bleeding points that are difficult to localize. The combined method provides an effective and safe way to control intractable epistaxis.

  9. ALTERNATIVE METHODOLOGIES FOR THE ESTIMATION OF LOCAL POINT DENSITY INDEX: MOVING TOWARDS ADAPTIVE LIDAR DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Z. Lari

    2012-07-01

    Over the past few years, LiDAR systems have been established as a leading technology for the acquisition of high density point clouds over physical surfaces. These point clouds are processed for the extraction of geo-spatial information. Local point density is one of the most important properties of the point cloud; it strongly affects the performance of data processing techniques and the quality of the information extracted from these data. Therefore, it is necessary to define a standard methodology for the estimation of local point density indices to be considered for the precise processing of LiDAR data. Current definitions of local point density indices, which only consider the 2D neighbourhood of individual points, are not appropriate for 3D LiDAR data and cannot be applied to laser scans from different platforms. In order to resolve the drawbacks of these methods, this paper proposes several approaches for the estimation of the local point density index which take into account the 3D relationship among the points and the physical properties of the surfaces they belong to. In the simplest approach, an approximate value of the local point density for each point is defined while considering the 3D relationship among the points. In the other approaches, the local point density is estimated by considering the 3D neighbourhood of the point in question and the physical properties of the surface which encloses this point. The physical properties of the surfaces enclosing the LiDAR points are assessed through eigen-value analysis of the 3D neighbourhood of individual points and adaptive cylinder methods. This paper discusses these approaches and highlights their impact on various LiDAR data processing activities (i.e., neighbourhood definition, region growing, segmentation, boundary detection, and classification). Experimental results from airborne and terrestrial LiDAR data verify the efficacy of considering local point density variation for adaptive LiDAR data processing.
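    The simplest approach mentioned in this abstract, an approximate local density derived from a point's 3D neighbourhood, can be sketched as follows. This is an illustrative spherical-neighbourhood estimate (points per unit volume inside a fixed-radius sphere), not the paper's eigen-value or adaptive cylinder methods; the radius and the toy point cloud are assumptions.

    ```python
    import math

    def local_point_density(points, center, radius=1.0):
        """Estimate local 3D point density as the number of neighbours
        per unit volume inside a sphere of the given radius around `center`."""
        r2 = radius ** 2
        k = sum(1 for p in points
                if sum((a - b) ** 2 for a, b in zip(p, center)) <= r2)
        volume = (4.0 / 3.0) * math.pi * radius ** 3
        return k / volume

    # Toy cloud: the 8 corners of a unit cube plus its center.
    cloud = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    cloud.append((0.5, 0.5, 0.5))
    # All 9 points fall within radius 1 of the cube center.
    print(round(local_point_density(cloud, (0.5, 0.5, 0.5), radius=1.0), 3))  # → 2.149
    ```

    A 3D neighbourhood of this kind is what distinguishes the proposed indices from the 2D definitions the paper criticizes.
    
    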

  10. An application of a discrete fixed point theorem to the Cournot model

    OpenAIRE

    Sato, Junichi

    2008-01-01

    In this paper, we apply a discrete fixed point theorem of [7] to the Cournot model [1]. Then we can deal with the Cournot model where the production of the enterprises is discrete. To handle it, we define a discrete Cournot-Nash equilibrium, and prove its existence.
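    A discrete Cournot-Nash equilibrium of the kind defined in this abstract can be illustrated by best-response iteration over integer quantities. The linear demand and cost parameters below are hypothetical, and the iteration is a simple constructive sketch, not the discrete fixed point theorem the paper actually applies.

    ```python
    def profit(q, q_other, a=10, b=1, c=1):
        """Profit of a firm producing integer quantity q against a rival's q_other,
        under linear inverse demand p = a - b*(q + q_other) and unit cost c."""
        return q * (a - b * (q + q_other)) - c * q

    def best_response(q_other, q_max=10):
        # Ties are broken toward the smaller quantity (max keeps the first argmax).
        return max(range(q_max + 1), key=lambda q: profit(q, q_other))

    def discrete_cournot_equilibrium(q_max=10, iterations=50):
        """Alternate best responses until the quantity pair stops changing."""
        q1 = q2 = 0
        for _ in range(iterations):
            nq1 = best_response(q2, q_max)
            nq2 = best_response(nq1, q_max)
            if (nq1, nq2) == (q1, q2):
                break
            q1, q2 = nq1, nq2
        return q1, q2

    print(discrete_cournot_equilibrium())  # → (3, 3)
    ```

    With these parameters the continuous equilibrium (a - c)/(3b) = 3 happens to be an integer, so the discrete equilibrium coincides with it; in general the two can differ, which is the motivation for a discrete equilibrium notion.
    
    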

  11. Dissipative N-point-vortex Models in the Plane

    Science.gov (United States)

    Shashikanth, Banavara N.

    2010-02-01

    A method is presented for constructing point vortex models in the plane that dissipate the Hamiltonian function at any prescribed rate and yet conserve the level sets of the invariants of the Hamiltonian model arising from the SE(2) symmetries. The method is purely geometric in that it uses the level sets of the Hamiltonian and the invariants to construct the dissipative field and is based on elementary classical geometry in ℝ³. Extension to higher-dimensional spaces, such as the point vortex phase space, is done using exterior algebra. The method is in fact general enough to apply to any smooth finite-dimensional system with conserved quantities, and, for certain special cases, the dissipative vector field constructed can be associated with an appropriately defined double Nambu-Poisson bracket. The most interesting feature of this method is that it allows for an infinite sequence of such dissipative vector fields to be constructed by repeated application of a symmetric linear operator (matrix) at each point of the intersection of the level sets.
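    For reference, the planar N-vortex Hamiltonian and the SE(2)-symmetry invariants being conserved are, in their standard textbook form (stated here as background, not quoted from the paper):

    ```latex
    H = -\frac{1}{2\pi}\sum_{i<j} \Gamma_i \Gamma_j \ln\lVert \mathbf{x}_i - \mathbf{x}_j \rVert ,
    \qquad
    \mathbf{P} = \sum_{i=1}^{N} \Gamma_i\,\mathbf{x}_i ,
    \qquad
    I = \sum_{i=1}^{N} \Gamma_i\,\lVert \mathbf{x}_i \rVert^{2} ,
    ```

    where \(\Gamma_i\) are the vortex strengths, \(\mathbf{P}\) (linear impulse) follows from translational symmetry and \(I\) (angular impulse) from rotational symmetry; the constructed dissipative fields decrease \(H\) while preserving the level sets of \(\mathbf{P}\) and \(I\).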

  12. Bilevel Optimization for Scene Segmentation of LiDAR Point Cloud

    Directory of Open Access Journals (Sweden)

    LI Minglei

    2018-02-01

    The segmentation of point clouds obtained by light detection and ranging (LiDAR) systems is a critical step for many tasks, such as data organization, reconstruction and information extraction. In this paper, we propose a bilevel progressive optimization algorithm based on local differentiability. First, we define the topological relation and distance metric of points in the framework of Riemannian geometry, and at the point-based level use a k-means method to generate over-segmentation results, e.g. supervoxels. These voxels are then formulated as nodes which constitute a minimal spanning tree. High level features are extracted from the voxel structures, and a graph-based optimization method is designed to yield the final adaptive segmentation results. Experiments on real data demonstrate that our method is efficient and superior to state-of-the-art methods.
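    The voxel-to-spanning-tree step described in this abstract can be illustrated with plain Prim's algorithm over supervoxel centroids. The centroids below are toy values, and the edge weight here is ordinary Euclidean distance, whereas the paper works with a Riemannian metric; this sketch only shows the tree construction itself.

    ```python
    import heapq

    def minimum_spanning_tree(nodes):
        """Prim's algorithm over 3D centroids with Euclidean edge weights.
        Returns the MST as a list of (parent_index, child_index) edges."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

        visited = {0}
        edges = []
        heap = [(dist(nodes[0], nodes[j]), 0, j) for j in range(1, len(nodes))]
        heapq.heapify(heap)
        while heap and len(visited) < len(nodes):
            w, i, j = heapq.heappop(heap)
            if j in visited:
                continue  # a shorter path to j was already taken
            visited.add(j)
            edges.append((i, j))
            for k in range(len(nodes)):
                if k not in visited:
                    heapq.heappush(heap, (dist(nodes[j], nodes[k]), j, k))
        return edges

    # Toy supervoxel centroids: two nearby pairs separated by a gap.
    centroids = [(0, 0, 0), (1, 0, 0), (5, 0, 0), (5, 1, 0)]
    print(minimum_spanning_tree(centroids))  # → [(0, 1), (1, 2), (2, 3)]
    ```

    In a segmentation pipeline, long MST edges (such as the 1→2 edge above) are natural candidates for cuts when the tree is partitioned into segments.
    
    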

  13. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System.

    Directory of Open Access Journals (Sweden)

    Jinjun Tang

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP).

  14. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System.

    Science.gov (United States)

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP).
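    Two of the building blocks named in this abstract, the Gaussian fuzzy membership function and the three evaluation criteria (MAE, RMSE, MARE), have standard definitions that are easy to state in code. The sample travel times below are made up for illustration; this is not the authors' EFNN implementation.

    ```python
    import math

    def gaussian_membership(x, center, sigma):
        """Membership degree of sample x to a cluster with the given center."""
        return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

    def evaluate(estimated, observed):
        """Return (MAE, RMSE, MARE) between estimated and observed travel times."""
        n = len(observed)
        errors = [e - o for e, o in zip(estimated, observed)]
        mae = sum(abs(d) for d in errors) / n
        rmse = math.sqrt(sum(d * d for d in errors) / n)
        mare = sum(abs(d) / o for d, o in zip(errors, observed)) / n
        return mae, rmse, mare

    # Illustrative link travel times in seconds.
    est = [62.0, 55.0, 48.0]
    obs = [60.0, 50.0, 50.0]
    mae, rmse, mare = evaluate(est, obs)
    print(round(mae, 3), round(rmse, 3), round(mare, 4))  # → 3.0 3.317 0.0578
    ```

    A sample exactly at a cluster center has membership 1, and membership decays smoothly with distance, which is what allows every rule in a Takagi-Sugeno system to contribute to the weighted output.
    
    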

  15. Determine point-to-point networking interactions using regular expressions

    Directory of Open Access Journals (Sweden)

    Konstantin S. Deev

    2015-06-01

    As the Internet grows and becomes more popular, the number of concurrent data flows increases, and with it the bandwidth demanded. Providers and corporate customers need the ability to identify point-to-point interactions. The best approach is to use specialized software and hardware implementations that distribute the load internally, using principles and approaches such as those described in this paper. This paper presents the principles of building a system that searches for regular expression matches using computation on a graphics adapter in a server station. The significant computing power and parallel execution capability of modern graphics processors allow large amounts of data to be inspected against sets of rules. Using these characteristics can increase computing throughput by a factor of 30-40 compared to the same setup on a central processing unit. The potential increase in bandwidth capacity could be used in systems that provide packet analysis, firewalls and network anomaly detectors.
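    The core idea, classifying traffic by matching packet payloads against a set of compiled regular expression rules, can be shown on the CPU side in a few lines. The rule names and patterns below are hypothetical examples of common signatures, not taken from the paper; the paper's contribution is offloading exactly this matching to the GPU.

    ```python
    import re

    # Hypothetical signature rules compiled over raw payload bytes.
    RULES = {
        "bittorrent_handshake": re.compile(rb"\x13BitTorrent protocol"),
        "http_request": re.compile(rb"^(GET|POST|HEAD) \S+ HTTP/1\.[01]"),
    }

    def classify(payload):
        """Return the names of all rules whose pattern matches the payload."""
        return [name for name, rx in RULES.items() if rx.search(payload)]

    print(classify(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"))
    # → ['http_request']
    ```

    On a GPU the same rule set would be evaluated in parallel across many payloads (or many rules) at once, which is where the reported 30-40x speedup comes from.
    
    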

  16. Ifcwall Reconstruction from Unstructured Point Clouds

    Science.gov (United States)

    Bassier, M.; Klein, R.; Van Genechten, B.; Vergauwen, M.

    2018-05-01

    The automated reconstruction of Building Information Modeling (BIM) objects from point cloud data is still ongoing research. A key aspect is the creation of accurate wall geometry, as it forms the basis for further reconstruction of objects in a BIM. After segmenting and classifying the initial point cloud, the labelled segments are processed and the wall topology is reconstructed. However, the procedure is challenging due to noise, occlusions and the complexity of the input data. In this work, a method is presented to automatically reconstruct consistent wall geometry from point clouds. More specifically, the use of room information is proposed to aid the wall topology creation. First, a set of partial walls is constructed based on classified planar primitives. Next, the rooms are identified using the retrieved wall information along with the floors and ceilings. The wall topology is computed by the intersection of the partial walls conditioned on the room information. The final wall geometry is defined by creating IfcWallStandardCase objects conforming to the IFC4 standard. The result is a set of walls according to the as-built conditions of a building. The experiments show that the method is a reliable framework for wall reconstruction from unstructured point cloud data. Moreover, the use of room information reduces the rate of false positives for the wall topology. Given the walls, ceilings and floors, 94% of the rooms are correctly identified. A key advantage of the proposed method is that it deals with complex rooms and is not bound to single storeys.

  17. Time-delayed autosynchronous swarm control.

    Science.gov (United States)

    Biggs, James D; Bennet, Derek J; Dadzie, S Kokou

    2012-01-01

    In this paper a general Morse potential model of self-propelling particles is considered in the presence of a time-delayed term and a spring potential. It is shown that the emergent swarm behavior is dependent on the delay term and weights of the time-delayed function, which can be set to induce a stationary swarm, a rotating swarm with uniform translation, and a rotating swarm with a stationary center of mass. An analysis of the mean field equations shows that without a spring potential the motion of the center of mass is determined explicitly by a multivalued function. For a nonzero spring potential the swarm converges to a vortex formation about a stationary center of mass, except at discrete bifurcation points where the center of mass will periodically trace an ellipse. The analytical results defining the behavior of the center of mass are shown to correspond with the numerical swarm simulations.
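    For context, the Morse-type pairwise interaction commonly used in such self-propelled swarm models is, in a standard form (the paper's time-delayed term and spring potential are additions to this baseline):

    ```latex
    U(r) = C_r\, e^{-r/\ell_r} \;-\; C_a\, e^{-r/\ell_a} ,
    ```

    where \(r\) is the inter-particle distance, \(C_r, \ell_r\) set the amplitude and range of short-range repulsion, and \(C_a, \ell_a\) those of longer-range attraction; the balance of these parameters is what selects between the stationary, translating, and rotating swarm states described above.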

  18. INHOMOGENEITY IN SPATIAL COX POINT PROCESSES – LOCATION DEPENDENT THINNING IS NOT THE ONLY OPTION

    Directory of Open Access Journals (Sweden)

    Michaela Prokešová

    2010-11-01

    In the literature on point processes, by far the most popular option for introducing inhomogeneity into a point process model is location dependent thinning (resulting in a second-order intensity-reweighted stationary point process). This produces a very tractable model, and several fast estimation procedures are available. Nevertheless, this model dilutes the interaction (or the geometrical structure) of the original homogeneous model in a special way. For Markov point processes, several alternative inhomogeneous models have been suggested and investigated in the literature. This is not so for Cox point processes, the canonical models for clustered point patterns. In this contribution we discuss several other options for defining inhomogeneous Cox point process models that result in point patterns with different types of geometric structure. We further investigate possible parameter estimation procedures for such models.

  19. Four-point functions with a twist

    Energy Technology Data Exchange (ETDEWEB)

    Bargheer, Till [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group

    2017-01-15

    We study the OPE of correlation functions of local operators in planar N=4 super Yang-Mills theory. The considered operators have an explicit spacetime dependence that is defined by twisting the translation generators with certain R-symmetry generators. We restrict to operators that carry a small number of excitations above the twisted BMN vacuum. The OPE limit of the four-point correlator is dominated by internal states with few magnons on top of the vacuum. The twisting directly couples all spacetime dependence of the correlator to these magnons. We analyze the OPE in detail, and single out the extremal states that have to cancel all double-trace contributions.

  20. Markov Random Field Restoration of Point Correspondences for Active Shape Modelling

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen; Paulsen, Rasmus Reinhold; Larsen, Rasmus

    2004-01-01

    In this paper it is described how to build a statistical shape model using a training set with a sparse set of landmarks. A well defined model mesh is selected and fitted to all shapes in the training set using thin plate spline warping. This is followed by a projection of the points of the warped...