<urn:uuid:2a092e3f-e0c2-42f6-8a53-3afb1ad6ae90>
seed
Volume 13, Number 1—January 2007

Death Rates from Malaria Epidemics, Burundi and Ethiopia

Death rates exceeded emergency thresholds at 4 sites during epidemics of Plasmodium falciparum malaria in Burundi (2000–2001) and in Ethiopia (2003–2004). Deaths likely from malaria ranged from 1,000 to 8,900, depending on site, and accounted for 52% to 78% of total deaths. Earlier detection of malaria and better case management are needed.

Plasmodium falciparum malaria epidemics are poorly documented, partly because they occur in remote, underresourced areas where proper data collection is difficult. Although the public health problems from these epidemics are well recognized (1,2), quantitative evidence of their effect on death rates is scarce (3). Hospital-based death data, when available, provide a grossly incomplete picture because most malaria patients do not seek healthcare, so their cases are never recorded (4). Current estimates (2) therefore rely on extrapolations of limited site-specific or empirical observations. Accurate information is needed not only to improve our knowledge of malaria epidemics, but also to assess progress of malaria control initiatives that aim to decrease deaths from malaria worldwide by 50% by 2010 (5).

We report community-based death rates from 2 P. falciparum malaria epidemics (Burundi, 2000–2001; Ethiopia, 2003–2004) in which Médecins Sans Frontières intervened. Detailed information about these epidemics, their determinants, and their evolution is provided elsewhere (6). Briefly, the inhabitants of the Kayanza, Karuzi, and Ngozi provinces (population 1,415,900) of Burundi, which borders Rwanda, live in small farming villages, most at an altitude >1,500 m. Before the 2000–2001 epidemic, these areas were considered to have low malaria transmission. Rapid surveys of febrile outpatients confirmed the epidemic (>75% had P. falciparum infections; Médecins Sans Frontières, unpub. data). For all 3 provinces combined, 1,488,519 malaria cases were reported (attack rate 109.0%). Figure 1 shows the number of cases each month. In Kayanza, 462,454 cases were reported from September 2000 through May 2001 (attack rate 95.9%, average cases/month 51,383) (7); case counts peaked in January. In Karuzi, 625,751 cases were reported from October 2000 through March 2001 (attack rate 202.8%, average cases/month 104,292); case counts peaked in December (7). Ngozi reported 400,314 malaria cases from October 2000 through April 2001 (attack rate 67.7%, average cases/month 57,187); case counts peaked in November (7).

Damot Gale district (286,600 inhabitants, altitude 1,600–2,100 m), considered a low-transmission area, is located in Wolayita Zone, Southern Nations Nationalities and Peoples Region, central Ethiopia. The malaria epidemic was confirmed locally by a sharp increase in P. falciparum–positive results among children treated in Médecins Sans Frontières feeding centers; the increase started in July 2003 (6). The reported caseload decreased in August and September, probably because of drug shortages that left patients untreated and unreported, then rose sharply in October, November, and December (Figure 2). During these 3 months in 2003, 10,308 cases were reported by the 8 district health facilities (attack rate 3.6%, average cases/month 3,436), more than 10-fold the corresponding total in 2002 (n = 744) (Médecins Sans Frontières, unpub. data).

During the epidemics, a retrospective survey of deaths was conducted at each site.
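The reported totals, attack rates, and monthly averages can be cross-checked directly from the figures above. The minimal Python sketch below is an editorial illustration, not part of the original analysis; it recomputes each province's average monthly caseload and the population implied by its attack rate:

```python
# Cross-check of the Burundi caseload figures reported above
# (illustrative only; not part of the original analysis).
sites = {
    # site: (reported cases, reported attack rate %, months of reporting)
    "Kayanza": (462_454, 95.9, 9),   # Sep 2000 - May 2001
    "Karuzi":  (625_751, 202.8, 6),  # Oct 2000 - Mar 2001
    "Ngozi":   (400_314, 67.7, 7),   # Oct 2000 - Apr 2001
}

for site, (cases, attack_rate, months) in sites.items():
    implied_population = cases / (attack_rate / 100)  # population consistent with the attack rate
    print(f"{site}: {cases / months:,.0f} cases/month on average, "
          f"implied population {implied_population:,.0f}")
```

Running this reproduces the monthly averages quoted in the text (about 51,383 for Kayanza, 104,292 for Karuzi, and 57,187 for Ngozi).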
Surveys were approved by local authorities, and respondents gave oral consent. Thirty clusters of 30 households were selected by using 2- or 3-stage sampling (8). Households were defined as groups of persons who slept under the same roof under 1 family head at the time of the survey; occasional visitors were excluded. Selection within each cluster followed a standard rule of proximity (9). Information collected included the number, age, and sex of persons living in the household; the number of deaths (age, sex, and date of death) since the beginning of the recall period; and the cause of death. Malaria was defined as the probable cause if a decedent’s household reported “presence of fever” (Burundi) or “fever and shivering without severe diarrhea or severe respiratory infection” (Ethiopia). Recall periods were defined by easily recognizable starting dates (Table 1). Data were analyzed by using Epi Info (Centers for Disease Control and Prevention, Atlanta, GA, USA). Death rates were expressed as deaths/10,000 persons/day, and 95% confidence intervals (CIs) were adjusted for design effects. Mortality rates were compared with standard emergency thresholds of 1 death/10,000/day (crude mortality rate [CMR]) and 2 deaths/10,000/day (under-5 mortality rate [U5MR]) (10). The number of excess deaths probably due to malaria was estimated by applying the malaria-specific death rates to the population and time period covered by each survey.

CMR and U5MR exceeded their respective emergency thresholds (Table 1). In the total population, the proportion of deaths probably due to malaria varied from 51.7% (Karuzi) to 78.3% (Kayanza); among children <5 years of age, it varied from 53.0% (Ngozi) to 64.3% (Kayanza) (Table 1). Deaths probably due to malaria ranged from 1,000 in Kayanza to 8,900 in Ngozi; >50% were among children <5 years (Table 2). These estimates reflect only portions of the epidemic periods (Table 2). When surveys covered most of the epidemic duration (74% in Ngozi, 85% in Karuzi, 83% in Damot Gale), malaria was the probable cause of death for a comparable proportion of the population (1.5% [8,900/574,400] in Ngozi, 0.9% [2,800/308,400] in Karuzi, and 1.9% [5,400/286,600] in Damot Gale).

We provide novel data based on representative population sampling, rather than health facility–based reporting. P. falciparum epidemics appear responsible for high death rates: the estimated number of deaths probably due to malaria at our sites (≈18,000) represents about 10% of the worldwide total of estimated annual deaths due to epidemic malaria (2). The limitations of retrospective mortality surveys are well known (11); hence, results should be interpreted with caution. Reporting bias was minimized by defining a limited recall period and by training interviewers extensively. In Kayanza, the survey was conducted before the epidemic peak; the estimated death rate is therefore probably lower than the average for the entire epidemic and may underestimate the true death toll. Generally, postmortem diagnosis of malaria at the household level is difficult, and even advanced verbal autopsy techniques (not used in these surveys because skilled human resources were lacking) are of limited accuracy (12). Decedents’ next of kin may underreport or overreport certain signs and symptoms. Malaria deaths may thus have been overestimated, particularly in Burundi, where fever was the sole criterion of probable malaria; use of this single criterion may have masked other causes, such as acute respiratory infection.
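The rate calculations described above reduce to a few lines of arithmetic. The sketch below illustrates, with hypothetical inputs, how a crude mortality rate with a design-effect-adjusted confidence interval and an excess-death estimate of the kind reported here might be computed. The original analyses were done in Epi Info, so everything here, including the input values and the design-effect figure, is an assumption for illustration only:

```python
import math

def mortality_rate(deaths, person_days, design_effect=2.0, z=1.96):
    """Deaths per 10,000 person-days with a design-effect-adjusted 95% CI.

    Cluster sampling inflates variance relative to simple random sampling;
    a common correction multiplies the variance by the design effect.
    The default design_effect here is a placeholder assumption.
    """
    rate = deaths / person_days * 10_000
    se = math.sqrt(design_effect * deaths) / person_days * 10_000  # inflated Poisson SE
    return rate, (rate - z * se, rate + z * se)

# Hypothetical inputs: 120 reported deaths over 90,000 person-days of recall.
cmr, ci = mortality_rate(deaths=120, person_days=90_000)
print(f"CMR = {cmr:.2f}/10,000/day (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
print("Exceeds the 1/10,000/day emergency threshold:", cmr > 1.0)

# Excess malaria deaths: apply the malaria-specific death rate to the
# population and period covered by the survey (the approach described above).
malaria_rate = 0.8            # hypothetical malaria-specific deaths/10,000/day
population, days = 308_400, 180
excess = malaria_rate / 10_000 * population * days
print(f"Estimated deaths probably due to malaria: {excess:,.0f}")
```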
Furthermore, in 3 of the areas surveyed (Kayanza excepted), the epidemics occurred concurrently with nutritional crises. Malnutrition could not be assessed as a cause of death because it is implicated in deaths from various infectious diseases, but high prevalence of malnutrition is usually associated with excess U5MR (13). Nevertheless, mortality rates among persons ≥5 years of age, computed as (CMR − [U5MR × p]) / (1 − p), where p is the proportion of children <5 years of age in the survey sample, were also elevated. These rates ranged from 0.5 deaths/10,000/day in Kayanza to 1.7 in Damot Gale, higher than the expected rate of 0.27 for sub-Saharan Africa (14). In the absence of other specific causes of acute death in adults, we speculate that malaria was largely responsible for these excess deaths.

At all sites, early warning systems were not operational and surveillance was ineffective, which led to substantial delays in epidemic detection (6). First-line treatment regimens (chloroquine in Burundi, sulfadoxine/pyrimethamine in Ethiopia) were not very effective. In Damot Gale, access to treatment was poor (data not shown), probably because of the dearth of health facilities. All these factors may have exacerbated the epidemics and contributed to the excess death rates. Early diagnosis and prompt treatment of malaria remain cornerstones of the global malaria control strategy (15). The degree to which these interventions are made available will largely determine the death rates in future epidemics.

Dr Guthmann is a physician and senior epidemiologist who has worked at Epicentre since January 2000. Although his main interest is the epidemiology of malaria, he has also conducted research on other topics such as leishmaniasis and measles.

We are grateful to Médecins Sans Frontières personnel at headquarters and field staff who actively contributed to the studies. Each survey was supervised by an Epicentre epidemiologist. The work was done in collaboration with National Ministries of Health, which authorized inspection of records and provided the necessary information when appropriate. All surveys, as well as this review, were financed by Médecins Sans Frontières.

- Najera JA. Prevention and control of malaria epidemics. Parassitologia. 1999;41:339–47.
- Worrall E, Rietveld A, Delacollette C. The burden of malaria epidemics and cost-effectiveness of interventions in epidemic situations in Africa. Am J Trop Med Hyg. 2004;71(Suppl):136–40.
- Snow RW, Craig M, Deichmann U, Marsh K. Estimating mortality, morbidity and disability due to malaria among Africa’s non-pregnant population. Bull World Health Organ. 1999;77:624–40.
- Malakooti MA, Biomndo K, Shanks GD. Reemergence of epidemic malaria in the highlands of western Kenya. Emerg Infect Dis. 1998;4:671–6.
- Nabarro DN, Tayler EM. The Roll Back Malaria campaign. Science. 1998;280:2067–8.
- Checchi F, Cox J, Balkan S, Tamrat A, Priotto G, Alberti KP, et al. Malaria epidemics and interventions, Kenya, Burundi, southern Sudan, and Ethiopia, 1999–2004. Emerg Infect Dis. 2006;12:1477–85.
- Legros D, Dantoine F. Epidémie de paludisme du Burundi, Septembre 2000–Mai 2001. Paris: Epicentre; 2001.
- Henderson RH, Sundaresan T. Cluster sampling to assess immunisation coverage: a review of experience with a simplified sampling method. Bull World Health Organ. 1982;60:253–60.
- Grein T, Checchi F, Escriba JM, Tamrat A, Karunakara U, Stokes C, et al. Mortality among displaced former UNITA members and their families in Angola: a retrospective cluster survey. BMJ. 2003;327:650–4.
- Salama P, Spiegel P, Talley L, Waldman R. Lessons learned from complex emergencies over past decade. Lancet. 2004;364:1801–13.
- Checchi F, Roberts L. Interpreting and using mortality data in humanitarian emergencies: a primer for non-epidemiologists. HPN Network Paper 52. London: Overseas Development Institute; 2005.
- Snow RW, Armstrong JR, Forster D, Winstanley MT, Marsh VM, Newton CR, et al. Childhood deaths in Africa: uses and limitations of verbal autopsies. Lancet. 1992;340:351–5.
- Standardized Monitoring and Assessment of Relief and Transitions. SMART methodology, version 1, April 2006 [cited 2006 Nov 16]. Available from http://www.smartindicators.org/
- The Sphere Project. Sphere handbook (revised 2004) [cited 2006 Nov 16]. Available from http://www.sphereproject.org
- World Health Organization. Implementation of the global malaria control strategy. Geneva: The Organization; 1993. WHO Technical Report Series 839.

Suggested citation for this article: Guthmann J-P, Bonnet M, Ahoua L, Dantoine F, Balkan S, van Herp M, et al. Death rates from malaria epidemics, Burundi and Ethiopia. Emerg Infect Dis [serial on the Internet]. 2007 Jan [date cited]. Available from http://wwwnc.cdc.gov/eid/article/13/1/06-0546.htm
<urn:uuid:0ec6d86a-0aa9-4564-b734-830ccceffd7d>
seed
Volume 14, Number 6—June 2008

Persistence of Yersinia pestis in Soil Under Natural Conditions

As part of a fatal human plague case investigation, we showed that the plague bacterium, Yersinia pestis, can survive for at least 24 days in contaminated soil under natural conditions. These results have implications for defining plague foci, persistence, transmission, and bioremediation after a natural or intentional exposure to Y. pestis.

Plague is a rare, but highly virulent, zoonotic disease characterized by quiescent and epizootic periods (1). Although the etiologic agent, Yersinia pestis, can be transmitted through direct contact with an infectious source or inhalation of infectious respiratory droplets, flea-borne transmission is the most common mechanism of exposure (1). Most human cases are believed to occur during epizootic periods, when highly susceptible hosts die in large numbers and their fleas are forced to parasitize hosts upon which they would not ordinarily feed, including humans (2). Despite over a century of research, we lack a clear understanding of how Y. pestis is able to rapidly disseminate in host populations during epizootics or how it persists during interepizootic periods (2–6). What limits the geographic distribution of the organism is also unclear. For example, why is the plague bacterium endemic west of the 100th meridian in the United States, but not in eastern states, despite several known introductions (7)?

Persistence of Y. pestis in soil has been suggested as a possible mechanism of interepizootic persistence and epizootic spread, and as a factor defining plague foci (2,3,5,7,8). Although Y. pestis recently evolved from an enteric bacterium, Y. pseudotuberculosis, that can survive for long periods in soil and water, studies have shown that selection for vector-borne transmission has resulted in the loss of many of these survival mechanisms. This suggests that long-term persistence outside of the host or vector is unlikely (9–11). Previous studies have demonstrated survival of Y. pestis in soil under artificial conditions (2,3,12–14). However, survival of Y. pestis in soil under natural exposure conditions has not been examined in North America.

As part of an environmental investigation of a fatal human plague case in Grand Canyon National Park, Arizona, in 2007, we tested the viability of Y. pestis in naturally contaminated soil. The case-patient, a wildlife biologist, was infected through direct contact with a mountain lion carcass, which was subsequently confirmed to be positive for Y. pestis on the basis of direct fluorescent antibody (DFA) testing (which targets the Y. pestis–specific F1 antigen), culture isolation, and lysis with a Y. pestis temperature-specific bacteriophage (15). The animal was wearing a radio collar, and we determined the date of its death (October 26, 2007) on the basis of its lack of movement. The case-patient had recorded the location at which he encountered the carcass and had taken photographs of the remains, which showed a large pool of blood in the soil under the animal’s mouth and nose. During our field investigation, ≈3 weeks after the mountain lion’s death, we used global positioning satellite coordinates and the photographs to identify the exact location of the blood-contaminated soil. We collected ≈200 mL of soil from this location at depths of up to ≈15 cm from the surface. After collection, the soil was shipped for analysis to the Bacterial Diseases Branch of the Centers for Disease Control and Prevention in Fort Collins, Colorado.
Four soil samples of ≈5 mL each were suspended in a total volume of 20 mL of sterile physiologic saline (0.85% NaCl). Samples were vortexed briefly and allowed to settle for ≈2 min before aliquots of 0.5 mL were drawn into individual syringes and injected subcutaneously into 4 Swiss-Webster strain mice (ACUC Protocol 00–06–018-MUS). Within 12 hours of inoculation, 1 mouse became moribund, and liver and spleen samples were cultured on cefsulodin-Irgasan-novobiocin agar. Colonies consistent with Y. pestis morphology were subcultured on sheep blood agar. A DFA test of this isolate was positive, demonstrating the presence of F1 antigen, which is unique to Y. pestis. The isolate was confirmed as Y. pestis by lysis with a Y. pestis temperature–specific bacteriophage (15). Additionally, the isolate was urease negative. Biotyping (glycerol fermentation and nitrate reduction) of the soil and mountain lion isolates indicated biovar orientalis. Of the 3 remaining mice, 1 became moribund after 7 days and was euthanized; 2 did not become moribund and were euthanized 21 days postexposure. Culture of the necropsied tissues yielded no additional isolates of Y. pestis.

Pulsed-field gel electrophoresis (PFGE) typing with AscI was performed with the soil isolate, the isolate recovered from the mountain lion, and the isolate obtained from the case-patient (16). The PFGE patterns were indistinguishable, showing that the Y. pestis in the soil originated through contamination by this animal (Figure). Although direct plating of the soil followed by quantification of CFU would have been useful for assessing the abundance of Y. pestis in the soil, this was not possible because numerous contaminants were present in the soil.

It is unclear by what mechanism Y. pestis was able to persist in the soil. Perhaps the infected animal’s blood created a nutrient-enriched environment in which the bacteria could survive. Alternatively, adherence to soil invertebrates may have prolonged bacterial viability (17). The contamination occurred within a protected rock outcrop that had limited exposure to UV light and during late October, when ambient temperatures were low. These microclimatic conditions, which are similar to those of burrows used by epizootic hosts such as prairie dogs, could have contributed to survival of the bacteria.

These results are preliminary and do not address 1) the maximum time that plague bacteria can persist in soil under natural conditions, 2) possible mechanisms by which the bacteria are able to persist in the soil, or 3) whether the contaminated soil is infectious to susceptible hosts that might come into contact with it. Answers to these questions might shed light on the intriguing, long-standing mysteries of how Y. pestis persists during interepizootic periods and whether soil type could limit its geographic distribution. From a public health or bioterrorism preparedness perspective, answers to these questions are necessary for evidence-based recommendations on bioremediation after natural or intentional contamination of soil by Y. pestis. Previous studies evaluating viability of Y. pestis on manufactured surfaces (e.g., steel, glass) have shown that survival is typically <72 hours (18). Our data emphasize the need to reevaluate the duration of persistence in soil and other natural media.

Dr Eisen is a service fellow in the Division of Vector-Borne Infectious Diseases, Centers for Disease Control and Prevention, Fort Collins. Her primary interest is in the ecology of vector-borne diseases.
We thank L. Chalcraft, A. Janusz, R. Palarino, S. Urich, and J. Young for technical and logistic support.

- Barnes AM. Conference proceedings: surveillance and control of bubonic plague in the United States. Symposium of the Zoological Society of London. 1982;50:237–70.
- Gage KL, Kosoy MY. Natural history of plague: perspectives from more than a century of research. Annu Rev Entomol. 2005;50:505–28.
- Drancourt M, Houhamdi L, Raoult D. Yersinia pestis as a telluric, human ectoparasite-borne organism. Lancet Infect Dis. 2006;6:234–41.
- Eisen RJ, Bearden SW, Wilder AP, Montenieri JA, Antolin MF, Gage KL. Early-phase transmission of Yersinia pestis by unblocked fleas as a mechanism explaining rapidly spreading plague epizootics. Proc Natl Acad Sci U S A. 2006;103:15380–5.
- Webb CT, Brooks CP, Gage KL, Antolin MF. Classic flea-borne transmission does not drive plague epizootics in prairie dogs. Proc Natl Acad Sci U S A. 2006;103:6236–41.
- Cherchenko II, Dyatlov AI. Broader investigation into the external environment of the specific antigen of the infectious agent in epizootiological observation and study of the structure of natural foci of plague. J Hyg Epidemiol Microbiol Immunol. 1976;20:221–8.
- Pollitzer R. Plague. World Health Organization Monograph Series No. 22. Geneva: The Organization; 1954.
- Bazanova LP, Maevskii MP, Khabarov AV. An experimental study of the possibility for the preservation of the causative agent of plague in the nest substrate of the long-tailed suslik. Med Parazitol (Mosk). 1997.
- Achtman M, Zurth K, Morelli G, Torrea G, Guiyoule A, Carniel E. Yersinia pestis, the cause of plague, is a recently emerged clone of Yersinia pseudotuberculosis. Proc Natl Acad Sci U S A. 1999;96:14043–8.
- Brubaker RR. Factors promoting acute and chronic diseases caused by yersiniae. Clin Microbiol Rev. 1991;4:309–24.
- Perry RD, Fetherston JD. Yersinia pestis—etiologic agent of plague. Clin Microbiol Rev. 1997;10:35–66.
- Baltazard M, Karimi Y, Eftekhari M, Chamsa M, Mollaret HH. La conservation interepizootique de la peste en foyer invetere: hypotheses de travail. Bull Soc Pathol Exot. 1963;56:1230–41.
- Mollaret H. Conservation du bacille de la peste durant 28 mois en terrier artificiel: demonstration experimentale de la conservation interepizootique de la peste dans ses foyers inveteres. CR Acad Sci Paris. 1968;267:972–3.
- Mollaret HH. Experimental preservation of plague in soil [in French]. Bull Soc Pathol Exot Filiales. 1963;56:1168–82.
- Chu MC. Laboratory manual of plague diagnostics. Geneva: US Centers for Disease Control and Prevention and World Health Organization; 2000.
- Centers for Disease Control and Prevention. Imported plague—New York City, 2002. MMWR Morb Mortal Wkly Rep. 2003;52:725–8.
- Darby C, Hsu JW, Ghori N, Falkow S. Caenorhabditis elegans: plague bacteria biofilm blocks food intake. Nature. 2002;417:243–4.
- Rose LJ, Donlan R, Banerjee SN, Arduino MJ. Survival of Yersinia pestis on environmental surfaces. Appl Environ Microbiol. 2003;69:2166–71.

Suggested citation for this article: Eisen RJ, Petersen JM, Higgins MS, Wong D, Levy CE, Mead PS, et al. Persistence of Yersinia pestis in soil under natural conditions. Emerg Infect Dis [serial on the Internet]. 2008 Jun [date cited]. Available from http://wwwnc.cdc.gov/eid/article/14/6/08-0029
<urn:uuid:fc748d0e-4d8d-49db-b3fd-5cf3a227561e>
seed
Volume 6, Number 6—December 2000

International Editor's Update

For 20 years after the end of World War II, infectious diseases were endemic throughout Japan, which served during this first postwar phase almost as a museum of communicable diseases. Improvements in socioeconomic conditions, infrastructure (especially water and sewerage systems), and nutrition brought about a rapid reduction in rates of acute enteric bacterial and parasitic infections. The development and clinical application of antibiotics also contributed to this decrease.

During the second postwar period (1965–1985), further advancement in the use of antibiotics led to control of acute enteric bacterial diseases. However, medical advances such as cancer chemotherapy and organ transplantation, along with an increasing elderly population, created a large immunocompromised population and widespread opportunistic infections. The development of new antibiotics was followed by the emergence of pathogens resistant to those drugs. Since 1975, chemicals used in agriculture have been reevaluated to exclude toxic substances; however, the decreased use of chemicals in agriculture has led to the reappearance or emergence of ticks and the rickettsiae they transmit.

In the third postwar period (1985–present), increased international travel has led to an increase in imported infectious diseases. Travelers returning from other Asian countries and other continents have become ill with foodborne and insect-borne infections, including shigellosis, cholera, and typhoid fever; several thousand cases are reported each year. In addition, contaminated imported foods have been responsible for sporadic illnesses or small outbreaks. Misuse or overuse of antibiotics has led to the emergence of methicillin-resistant Staphylococcus aureus, penicillin-resistant Streptococcus pneumoniae, fluoroquinolone-resistant Pseudomonas aeruginosa, and vancomycin-resistant enterococci. All hospitals in Japan must now be alert to nosocomial infections caused by these drug-resistant pathogens.

The most important public health problems in modern Japan are massive outbreaks of acute enteric bacterial diseases. These outbreaks are caused by foods prepared commercially on a large scale for school lunches and chain stores; contamination at a single point of preparation has resulted in large single-source foodborne outbreaks. More than 20,000 cases of infections caused by vibrios, Staphylococcus, pathogenic Escherichia coli, and Campylobacter have been reported in the past 5 years.

Concerning viral diseases, immunization programs against measles, rubella, and mumps have been mounted, in addition to the successful campaign against polio in the mid-1970s. However, except for polio, the coverage rate for individual vaccines is lower than rates in the United States and Europe, and control of vaccine-preventable viral illnesses remains unsatisfactory. Viral diarrheal enteritis transmitted through foods such as oysters has also been increasing.

Trends in infectious diseases have changed rapidly in Japan during the past 50 years. Three reports are included in this issue that update the status of tuberculosis, flavivirus infection, and antibiotic resistance in Japan.

Suggested citation: Kurata T. International Editor's Update. Emerg Infect Dis [serial on the Internet]. 2000 Dec [date cited].
Available from http://wwwnc.cdc.gov/eid/article/6/6/00-0601
<urn:uuid:efc8dc1b-c028-4dd0-aa44-9aaf7aa11624>
seed
August 22, 2014

- Ebola causes a viral hemorrhagic fever.
- Currently, there are no FDA-approved vaccines or drugs to prevent or treat Ebola.
- Ebola does not pose a significant risk to the U.S. public.
- Treatment: CDC recommends supportive therapy as the primary treatment for Ebola patients. This includes balancing the patient's fluids and electrolytes, maintaining their oxygen status and blood pressure, and treating them for any complicating infections.
- Investigational Products: While there are experimental Ebola vaccines and treatments under development, these investigational products are in the earliest stages of product development and have not yet been fully tested for safety or effectiveness. Small amounts of some of these experimental products have been manufactured for testing; thus, very few courses are available for clinical use. The FDA hopes that these investigational products will one day serve to improve outcomes for Ebola patients. However, we expect that most, if not all, of the products in development will require administration in a carefully monitored healthcare setting, in addition to supportive care and rigorous infection control.
- Fraudulent Products: Unfortunately, during outbreak situations, fraudulent products claiming to prevent, treat, or cure a disease almost always appear. The FDA monitors for fraudulent products and false product claims related to the Ebola virus and takes appropriate action to protect consumers. Consumers who have seen these fraudulent products or false claims are encouraged to report them to the FDA.

Information from FDA

- August 20, 2014 – Responding to Ebola: The View From the FDA – As part of FDA's expert commentary and interview series, Medscape spoke with FDA Acting Deputy Chief Scientist and Assistant Commissioner for Counterterrorism Policy Luciana Borio, MD, about the issue of compassionate use and FDA efforts to respond to the Ebola outbreak.
- August 14, 2014 – FDA statement: FDA is advising consumers to be aware of products sold online claiming to prevent or treat the Ebola virus. Since the outbreak of the Ebola virus in West Africa, the FDA has seen and received consumer complaints about a variety of products claiming to either prevent the Ebola virus or treat the infection.
- August 5, 2014 – FDA authorized the use of a diagnostic test developed by the U.S. Department of Defense (DoD) to detect the Ebola Zaire virus in laboratories designated by the DoD, to help facilitate an effective response to the ongoing Ebola outbreak in West Africa. The test is designed for use in individuals, including DoD personnel and responders, who may be at risk of infection as a result of the outbreak. Specifically, the test is intended for use in individuals with signs and symptoms of infection with Ebola Zaire virus who are at risk for exposure to the virus or who may have been exposed to it.
(See also: August 12, 2014 Federal Register notice from HHS: Declaration Regarding Emergency Use of In Vitro Diagnostics for Detection of Ebola Virus)

Frequently Requested Links

- Ebola Hemorrhagic Fever information from CDC (includes information on the outbreak, symptoms, transmission, prevention, diagnosis, and treatment)
- HHS FAQ: Ebola Experimental Treatments and Vaccines (August 8, 2014)
- Access to Investigational Drugs Outside of a Clinical Trial (Expanded Access, sometimes called "compassionate use")
- About Emergency Use Authorization
- The FDA's Drug Review Process: Ensuring Drugs Are Safe and Effective

- The FDA's role during situations like this involves sharing information about medical products in development, communicating our assessment of product readiness, and clarifying regulatory pathways for development.
- The FDA works with U.S. government agencies that fund medical product development, international partners, and companies to help speed the development of medical products that could potentially be used to mitigate the Ebola outbreak. For example, the FDA is involved in an inter-agency working group led by the Assistant Secretary for Preparedness and Response (ASPR) / Biomedical Advanced Research and Development Authority (BARDA) to facilitate and accelerate development of potential investigational treatments for Ebola.
- The FDA also works directly with medical product sponsors to clarify regulatory and data requirements necessary to move products forward in development as quickly as possible. While the FDA cannot comment on the development of specific medical products, it's important to note that every FDA regulatory decision is based on a risk-benefit assessment of scientific data that includes the context of use for the product and the patient population being studied.
- Under the FDA's Emergency Use Authorization (EUA) mechanism, the agency can enable the use of an unapproved medical product, or the unapproved use of an approved medical product, during emergencies when, among other circumstances, there are no adequate, approved, and available alternatives. An EUA is an important mechanism that allows broader access to available medical products.
- Under certain circumstances, the FDA can also enable access for individuals to investigational products through mechanisms outside of a clinical trial, such as through an emergency Investigational New Drug (EIND) application under the FDA's Expanded Access program. In order for an experimental treatment to be administered in the United States, a request must be submitted to and authorized by the FDA. The FDA stands ready to work with companies and investigators treating Ebola patients who are in dire need of treatment to enable access to an experimental product where appropriate.
- Unfortunately, during outbreak situations, fraudulent products claiming to prevent, treat, or cure a disease almost always appear. The FDA monitors for fraudulent products and false product claims related to the Ebola virus and takes appropriate action to protect consumers.

Related: August 14, 2014 statement
<urn:uuid:a8250f48-9ab2-4a93-a264-a0f9a1b417c1>
seed
On measures of executive functioning, processing speed, verbal fluency, and verbal memory, individuals diagnosed with depression or bipolar disorder were found to perform worse than control subjects (Neurology Reviews, April 2010).

Elderly people who were physically active and followed the Mediterranean diet showed a sixty percent lower risk for Alzheimer's dementia during a five-year period. Consuming a diet high in fruits, vegetables, legumes, cereal, and fish was found to have a positive impact in a study of 1,880 elderly people living in Northern Manhattan, New York. The independent benefits of diet and remaining physically active were still present after adjustments for age, gender, ethnicity, genetic risk factors, caloric intake, body mass index, other diseases, smoking, depression, and cognitive and social activities. The diet pattern, while not fully explaining the better health of individuals who adhere to it, likely has some type of positive impact in combination with other favorable factors (Neurology Today, September).

The number of young stroke patients (those younger than 45 years) is going up significantly according to a population study based upon more than one million people over a 12-year period. The average age at the time of stroke dropped from 71.3 years in 1993/1994 to 70.9 years in 1999 and 68.4 in 2005. Over the same time period, the percentage of stroke patients younger than 45 years rose from 4.5 percent in 1993/1994 to 5.5 percent in 1999 and 7.3 percent in 2005. Risk factors were identified: among those age 20 to 44 years, diabetes and coronary heart disease significantly increased between 1995 and 2005, with similar although not significant increases in hypertension and high cholesterol (Cerebrovascular & Critical Care, March 2010).

In the US, 1.5 to 2.0 million civilians sustain a traumatic brain injury each year. The use of progesterone demonstrated a fifty percent reduction in mortality in treated patients compared to placebo. Functional outcomes were improved and reduced disability was seen in patients suffering from moderate traumatic brain injury. The earlier patients received the drug, the better the outcome, the aim being to prevent the brain from swelling immediately after the injury and the cascade of injury that occurs after that time (Neurosurgery and Trauma, March 2010).

Anticonvulsant drugs used to treat seizures, bipolar disorder, mania, neuralgia, migraine, and neuropathic pain, often used off label, carry an increased risk of suicidal ideation and behaviors. Risk was higher for younger and older individuals (Neurology Today, June 3, 2010).

Exposure to the degreasing solvent TCE (trichloroethylene) has been significantly associated with an increased risk of PD (Parkinson's disease). Men exposed to this substance had more than six times the rate of PD compared with their twins who did not have this exposure (Neurology Today, June 5, 2010).

There is a potential role for smoking as an inducing factor in thrombus formation. The median age at stroke presentation was 65.5 years for smokers, rising to 68 years for ex-smokers and 67.6 for non-smokers. The median age at TIA presentation was 56.7 for smokers, 72.2 for ex-smokers, and 69.1 for non-smokers. Ex-smokers had higher rates of hypertension and dyslipidemia than current or non-smokers (Neurology Reviews, April 2010).

Longer sleep duration, when obtained on a habitual basis, was associated with better performance on intellectual measures of perceptual reasoning and overall IQ. There were no significant associations found for working memory or processing speed IQ factors (Gruber et al., 2010, Sleep Medicine, 11).
As a result of CBT (cognitive behavioral therapy), patients exhibited significant decreases in the time it took to fall asleep, decreased wake periods after being asleep, a decreased number of awakenings, and increased sleep efficiency, that is, the amount of time spent sleeping after initial sleep onset. Significant improvements were seen using behavioral treatment for insomnia in a pain population, resulting in improvement in pain or lessened pain interference with daily functioning (Jungquist et al., 2010, Sleep Medicine, 11).

Sleep problems of children between the ages of five and ten years were assessed by their parents. Reduced sleep as reported by the parents was found to be predictive of more delinquent behavior and concentration problems in their children. When parents reported that children were awake after initially falling asleep, this was also predictive of more pronounced daytime sleepiness. Greater daytime sleepiness was seen as related to the presence of social problems in the children. Consequently, two factors were seen as affecting the daytime behavior of children: the total sleep time as well as the amount of time spent awake after initially falling asleep (Velten-Schurian et al., 2010, Sleep Medicine, 11).
<urn:uuid:d116c0e7-471c-4f69-bc05-781761f2bcf4>
seed
Superclasses: Biosynthesis → Secondary Metabolites Biosynthesis → Phenylpropanoid Derivatives Biosynthesis → Coumarins Biosynthesis

Some taxa known to possess this pathway include: Melilotus albus

Expected Taxonomic Range: Magnoliophyta

A widespread group of phenolics in plants termed coumarins constitute lactones of phenylpropanoids with a 2H-benzopyran-2-one nucleus [Brown86a] [Seigler98]. At least 1,000 naturally occurring coumarins, among them about 300 simple coumarins, have been found in many families of higher plants [Berenbaum91], with an especially high number of structural variations encountered in the Apiaceae [Seigler98]. The simplest member, coumarin, whose biosynthesis is described in this pathway, is both a specific compound and the eponym of the entire compound class, and it is among the most common coumarins in plants. The numerous pharmacological and physiological effects of coumarin and its more complex derivatives, such as the furanocoumarins and prenylated coumarins, have drawn significant interest from researchers across different scientific areas. Coumarins are known to exhibit anti-inflammatory as well as antioxidant activities and often serve as model compounds for synthetic drugs [Fylaktakidou04] [Curini06]. Moreover, extensive research into their pharmacological and therapeutic properties over many years has resulted in acknowledgment of their therapeutic role in the treatment of cancer [Lacy04].

About This Pathway

In contrast to most coumarins, which are biosynthesized through 4-coumaric acid and umbelliferone, the formation of coumarin occurs via 2-coumaric acid [Gestetner74]. In general, phenylalanine and trans-cinnamic acid are considered the precursors for coumarin biosynthesis, but Stoker [Stoker62] also reported the formation of coumarin from cis-cinnamic acid. Although free coumarin is found in small amounts in plants, its β-glucosides are the predominant accumulating compounds. The corresponding glucosyltransferase has been partially purified from, and characterized in, Melilotus albus [Kleinhofs67] [Poulton80]. Interestingly, the resulting trans-2-coumarate β-D-glucoside is not accepted as a substrate for the subsequent β-glucosidase reaction. The enzyme only hydrolyzes the cis-isomer, coumarinic acid β-D-glucoside (also referred to as bound coumarin [Kosuge61a]), forming coumarinate [Kosuge61]. How the isomerization occurs is not entirely resolved. While there is strong evidence that the trans-cis isomerization occurs spontaneously under UV light [Kleinhofs66] [Haskins64], the existence of a light-induced isomerase enzyme system has not been ruled out; Stoker [Stoker64] presented evidence for the involvement of an isomerase system in this process, finding that plants kept in daylight did not differ significantly from plants kept in the dark with regard to the amount of coumarin. The last step of the pathway is the spontaneous lactonization of coumarinate, forming coumarin. The typical 'hay' smell of coumarin is found only when plants are injured: it has been established that the glucosylated coumarins accumulate in the vacuole while the β-glucosidase is localized to the extraplasmatic space [Oba81]. (The overall reaction sequence is summarized in the sketch below.)
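The following listing restates the pathway steps named above as a simple ordered data structure. It is an editorial summary only; the step labels are informal, and the initial 2-hydroxylation of trans-cinnamic acid is the generally assumed route, not a claim from this page:

```python
# Coumarin biosynthesis via 2-coumaric acid, as described above.
# Editorial restatement only; no kinetics or stoichiometry implied.
pathway = [
    ("trans-cinnamic acid",                "2-hydroxylation",
     "2-coumaric acid (trans)"),
    ("2-coumaric acid (trans)",            "glucosyltransferase",
     "trans-2-coumarate beta-D-glucoside"),
    ("trans-2-coumarate beta-D-glucoside", "trans->cis isomerization (UV light and/or isomerase)",
     "coumarinic acid beta-D-glucoside"),
    ("coumarinic acid beta-D-glucoside",   "beta-glucosidase",
     "coumarinate"),
    ("coumarinate",                        "spontaneous lactonization",
     "coumarin"),
]

for substrate, step, product in pathway:
    print(f"{substrate} --[{step}]--> {product}")
```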
Given this compartmentalization, physical contact of the enzyme and its substrate (the coumarin glucosides) occurs only after the breakup of the cell and its organelles. Coumarin itself is not a dead-end product but rather is readily further metabolized [Kosuge59].

Brown86a: Brown SA (1986). "Biochemistry of plant coumarins." In: Recent advances in phytochemistry, Volume 20: The shikimic acid pathway. Conn EE (ed.), Plenum Press, New York and London, 1986, 287-316.

Fylaktakidou04: Fylaktakidou KC, Hadjipavlou-Litina DJ, Litinas KE, Nicolaides DN (2004). "Natural and synthetic coumarin derivatives with anti-inflammatory/antioxidant activities." Curr Pharm Des 10(30);3813-33. PMID: 15579073

Haskins64: Haskins FA, Williams LG, Gorz HJ (1964). "Light-induced trans to cis conversion of beta-D-glucosyl o-hydroxycinnamic acid in Melilotus alba leaves." Plant Physiol 39(5);777-781. PMID: 16656000

Kosuge61: Kosuge T, Conn EE (1961). "The metabolism of aromatic compounds in higher plants. III. The beta-glucosides of o-coumaric, coumarinic, and melilotic acids." J Biol Chem 236;1617-21. PMID: 13753452

Lacy04: Lacy A, O'Kennedy R (2004). "Studies on coumarins and coumarin-related compounds to determine their therapeutic role in the treatment of cancer." Curr Pharm Des 10(30);3797-811. PMID: 15579072

Oba81: Oba K, Conn EE, Canut H, Boudet AM (1981). "Subcellular localization of 2-(beta-D-glucosyloxy)-cinnamic acids and the related beta-glucosidase in leaves of Melilotus alba Desr." Plant Physiol 68(6);1359-1363. PMID: 16662108

FraissinetTache98: Fraissinet-Tachet L, Baltz R, Chong J, Kauffmann S, Fritig B, Saindrenan P (1998). "Two tobacco genes induced by infection, elicitor and salicylic acid encode glucosyltransferases acting on phenylpropanoids and benzoic acid derivatives, including salicylic acid." FEBS Lett 437(3);319-23. PMID: 9824316
<urn:uuid:70268cda-32b6-4f85-9117-9fac4853ea3c>
seed
Controversy still exists over the exact contributions of the allantois and the ventral cloaca to urachal formation. Four anatomic urachal variants have been described, depending on the degree of urachal tubularization and the status of the associated umbilical vessels.

1. Patent Urachus

Failure of complete urachal lumen closure results in free communication between the bladder and the umbilicus, and urine leaks from the umbilicus. Lower urinary tract obstruction may also be a contributing factor. The diagnosis can be made at birth or soon thereafter, when the umbilical cord is ligated and urine drains from the umbilicus. A tumor-like protrusion from the umbilicus is frequently seen, and occasionally an umbilical hernia may also be present. The diagnosis can be confirmed by analyzing the draining fluid for urea and creatinine or by injecting methylene blue via a catheter into the bladder. Conversely, indigo carmine can be injected into the fistulous tract to look for a color change in the urine. A voiding cystourethrogram is important to rule out any lower tract obstruction, and it may also demonstrate the communication. Early treatment is recommended because umbilical excoriation, recurrent urinary infection, septicemia, and stone formation may develop. Neither cauterization of the umbilical lumen alone nor simple ligation has yielded satisfactory results. Complete excision of the urachus and umbilicus with a cuff of bladder by an extraperitoneal approach is the standard method of treatment. In addition, any lower urinary tract obstruction also requires treatment.

2. Urachal Cyst

When the urachal lumen obliterates incompletely, there is potential for cystic development within the remaining epithelium-lined space. Most cysts develop in the lower third of the urachus. The cyst generally remains small and silent. Occasionally, it is felt as a midline lower abdominal mass. More often, symptoms are related to size or secondary infection. Infected cysts most commonly present in adults, but some have been reported in infants. These cysts produce localized pain and sometimes inflammation in association with systemic symptoms. If left untreated, the abscess will often drain from the umbilicus or into the bladder. The differential diagnosis of a palpable uninfected cyst includes a bladder diverticulum, umbilical hernia, and ovarian cyst. An infected cyst may be difficult to differentiate from acute appendicitis. Lower abdominal ultrasound and especially CT scans are excellent methods of confirming the diagnosis. An infected cyst is best treated by incision and drainage with subsequent excision.

3. Urachal Sinus

A urachal sinus is probably the sequela of a small urachal cyst that became infected and dissected to the umbilicus. Rarely, it may drain into the bladder; the cyst position probably dictates the primary direction of drainage. The symptoms and treatment are similar to those of the other urachal anomalies already described. A draining urachal sinus may be difficult to differentiate from an umbilical granuloma or umbilical sinus. A fistulogram may be helpful.

4. Vesicourachal Diverticulum

Complete obliteration of the urachus at the umbilicus with incomplete closure at the bladder level may result in a vesicourachal diverticulum. Lower urinary tract obstruction may or may not be a related factor. This problem is usually discovered during radiologic evaluation for a urinary tract infection via a VCUG. Occasionally, stones have been detected within the diverticulum.
RELATED UMBILICAL DISORDERS

The first four of these result from incomplete closure of the omphalomesenteric duct.

1. Patent omphalomesenteric duct. This is extremely rare and may be recognized by fecal drainage from the umbilicus. It is more common in boys than in girls, and differentiation from urachal anomalies is important for the surgical approach. Confirmation is done through a fistulogram.

2. Partially patent omphalomesenteric duct. A. Omphalomesenteric duct sinus. B. Omphalomesenteric duct cyst. These can be diagnosed with fistulograms and require excision.

3. Meckel’s diverticulum. Persistence of the proximal portion of the omphalomesenteric duct as a diverticulum opening into the ileum is called a Meckel’s diverticulum. It may be associated with an umbilical polyp.

4. Umbilical polyp. Persistence of intestinal mucosa at the umbilicus can develop into an umbilical polyp. Probing and possibly a fistulogram are important. A simple polyp can be treated superficially with silver nitrate or local excision. It is important, however, to make sure that it is not associated with a duct remnant.

5. Omphalocele. Failure of the intestines to recede into the abdominal cavity by the end of the tenth week of gestation results in an omphalocele. About 50% of infants with an omphalocele have other congenital anomalies.

6. Umbilical hernia. This is usually congenital and relates to the incomplete closure of the anterior abdominal wall fascia after the intestines have returned to the abdominal cavity.

A. Inflammation of the umbilicus

B. Single umbilical artery

Controversy still exists over the single umbilical artery as a barometer of other congenital anomalies. Certainly, the incidence of urinary tract abnormalities is not significantly increased in newborns with a single umbilical artery.
<urn:uuid:23fe57f7-0901-4205-9c4a-5b03f4d96cfe>
seed
PUTTING A FACE ON A CLASS OF VIRAL DEUBIQUITINATING ENZYMES

March 12th, 2007

(l to r) Authors Wilhelm Weihofen, Rachelle Gaudet and Christian Schlieker

Herpesviruses (members of the family Herpesviridae) are widespread pathogens, causing disease in humans and animals. The family name stems from the Greek herpein ("to creep"), referring to the latent, recurring infections caused by herpesviruses. Although half of us are infected by herpes simplex virus alone, the lucky majority will never experience any symptoms. During acute infection, viral pathogens commandeer host cells for their propagation. Accordingly, they have evolved to disable, or subvert to their own advantage, the cellular enzymatic machinery that could otherwise be deployed against them to mount an antiviral immune response. For example, several laboratories, including the Ploegh lab (Whitehead Institute at MIT and affiliated with MCB), have recently discovered that many viruses feature proteins that subvert the host's ubiquitin system, which controls protein fate by means of mono- and poly-ubiquitination. While poly-ubiquitination is most commonly employed to target a protein for degradation by the proteasome, mono-ubiquitinated proteins are very often bound by proteins containing ubiquitin-binding domains to initiate cell signaling. Deubiquitination, on the other hand, can be used to reverse both processes. Ubiquitination and deubiquitination are tightly controlled by a collection of target-specific host proteins.

In this study, members of the Gaudet and Ploegh labs teamed up and showed that some herpesvirus-encoded cysteine proteases are not as picky as cellular deubiquitinating enzymes, since they indiscriminately cleave most ubiquitin molecules attached to host proteins (C. Schlieker, W. Weihofen, E. Frijns, L. Kattenhorn, R. Gaudet and H. Ploegh. Structure of a herpesvirus-encoded cysteine protease reveals a new class of deubiquitinating enzymes. Mol. Cell 2007). To reveal how these enzymes recognize and cleave ubiquitin from proteins, the murine cytomegalovirus cysteine protease was crystallized in complex with a ubiquitin-based suicide inhibitor, and the structure of the complex was determined by X-ray crystallography. The structure of the protease features a unique fold and mode of ubiquitin recognition when compared to known cellular deubiquitinating enzymes. The observed differences, and the fact that the deubiquitinating activity of this protein is essential for the virus to sustain a productive infection, could lead to the development of drugs targeted against herpesviruses. Furthermore, because this enzyme is specific for ubiquitinated substrates yet so unspecific about the nature of the substrate protein, it might become a useful lab tool as a "ubiquitin razor".
<urn:uuid:f81e69c8-9d0f-42b8-b017-8b57a6f7b487>
seed
Afterhyperpolarization, or AHP, describes the hyperpolarizing phase of a neuron's action potential, where the cell's membrane potential falls below the normal resting potential. This is also commonly referred to as the action potential's undershoot phase. AHPs have been segregated into "fast", "medium", and "slow" components that appear to have distinct ionic mechanisms and durations. While fast and medium AHPs can be generated by single action potentials, slow AHPs generally develop only during trains of multiple action potentials.

During single action potentials, transient depolarization of the membrane opens more voltage-gated K+ channels than are open in the resting state, many of which do not close immediately when the membrane returns to its normal resting voltage. This can lead to an "undershoot" of the membrane potential to values that are more polarized ("hyperpolarized") than the original resting membrane potential. Ca2+-activated K+ channels that open in response to the influx of Ca2+ during the action potential carry much of the K+ current as the membrane potential becomes more negative. The K+ permeability of the membrane is transiently unusually high, driving the membrane voltage VM even closer to the K+ equilibrium voltage EK (a numerical illustration follows the reference list below). Hence, hyperpolarization persists until the membrane K+ permeability returns to its usual value.

Medium and slow AHP currents also occur in neurons. The ionic mechanisms underlying medium and slow AHPs are not yet well understood, but may involve M currents and HCN channels for medium AHPs, and ion-dependent currents and/or ionic pumps for slow AHPs.

- Purves et al., p. 37; Bullock, Orkand, and Grinnell, p. 152.
- Shah M, Haylett DG. Ca2+ channels involved in the generation of the slow afterhyperpolarization in cultured rat hippocampal pyramidal neurons. J Neurophysiol 83:2554–2561, 2000.
- Gu N, Vervaeke K, Hu H, Storm JF. Kv7/KCNQ/M and HCN/h, but not KCa2/SK channels, contribute to the somatic medium afterhyperpolarization and excitability control in CA1 hippocampal pyramidal cells. Journal of Physiology 566:689–715 (2005).
- Andrade R, Foehring RC, Tzingounis AV. Essential role for phosphatidylinositol 4,5-bisphosphate in the expression, regulation, and gating of the slow afterhyperpolarization current in the cerebral cortex. Frontiers in Cellular Neuroscience 6:47 (2012).
- Kim JH, Sizov I, Dobretsov M, von Gersdorff H. Presynaptic Ca2+ buffers control the strength of a fast post-tetanic hyperpolarization mediated by the α3 Na+/K+-ATPase. Nature Neuroscience 10:196–205 (2007).
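As a numerical illustration of the mechanism described above, the sketch below computes EK from the Nernst equation and shows, via the Goldman-Hodgkin-Katz voltage equation, how raising the relative K+ permeability drives VM toward EK. The ion concentrations and permeability ratios are generic textbook-style assumptions, not values from this article:

```python
import math

R, T, F = 8.314, 310.0, 96_485.0   # gas constant (J/mol/K), body temp (K), Faraday (C/mol)

# Assumed textbook-style mammalian neuron concentrations (mM).
K_in, K_out   = 140.0, 5.0
Na_in, Na_out = 15.0, 145.0

E_K = (R * T / F) * math.log(K_out / K_in) * 1000   # Nernst potential in mV
print(f"E_K = {E_K:.1f} mV")                        # roughly -89 mV

def ghk_vm(pK, pNa):
    """Membrane potential (mV) from the GHK voltage equation (K+ and Na+ only)."""
    num = pK * K_out + pNa * Na_out
    den = pK * K_in + pNa * Na_in
    return (R * T / F) * math.log(num / den) * 1000

# Raising K+ permeability relative to Na+ pulls Vm toward E_K (the undershoot).
print(f"Rest (pK:pNa = 1:0.05):       Vm = {ghk_vm(1.0, 0.05):.1f} mV")
print(f"During AHP (pK:pNa = 1:0.01): Vm = {ghk_vm(1.0, 0.01):.1f} mV  (closer to E_K)")
```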
<urn:uuid:c6f16d16-d16e-4711-877d-9d3cbcf9690b>
seed
Tesmer Lab Research Summary

G protein-coupled receptors (GPCRs) are responsible for the sensations of sight and smell, for regulation of blood pressure and heart rate, and for many other cellular events. Signals impinging upon the exterior of the cell induce a conformational change in these GPCRs that allows them to activate heterotrimeric G proteins within the cell. The activated G proteins then bind to various effectors that initiate downstream cascades, leading to profound physiological change. We study the molecular basis of GPCR-mediated signal transduction, principally via the technique of X-ray crystallography. By determining atomic structures of signaling proteins alone and in complex with their various targets, we can provide important insights into the molecular basis of signal transduction and the disease states that emerge as a result of dysfunctional regulation of these pathways.

G protein-coupled receptor kinases (GRKs)

G protein-coupled receptor kinases (GRKs) are responsible for homologous desensitization of GPCRs, an adaptive process by which activated receptors are rapidly uncoupled from G proteins. The best-characterized member of this family is GRK2, also known as β-adrenergic receptor kinase 1. GRK2 is not only important for myocardiogenesis and regulation of heart contractility but also implicated in the progression of congestive heart failure. In 2003, we reported the atomic structure of GRK2 in a peripheral membrane complex with the heterotrimeric G protein subunits Gβγ. This was the first structure of a GRK and the first of Gβγ in complex with a downstream effector enzyme. Subsequently, we have described the structure of GRK2 alone and of the Gαq-GRK2-Gβγ complex (2005). The latter structure was the first of Gαq and of a Gαq-effector complex. Gαq is a heterotrimeric G protein subunit involved in smooth muscle function, in regulation of blood pressure, and in maladaptive cardiac hypertrophy. The Gαq-GRK2-Gβγ structure also revealed the first glimpse of how activated heterotrimeric G proteins can be arranged at the membrane during active signal transduction and how Gα and Gβγ subunits can simultaneously interact with a single effector target.

We are also interested in the molecular and biochemical differences between different classes of GRKs. The seven GRKs found in the human genome are classified into three families: GRK2/3, which are ubiquitously expressed; GRK1/7, which play specific roles in phototransduction; and GRK4/5/6, which are all ubiquitously expressed except for GRK4. A distinguishing feature of these families is the structure of their C-terminal domains. We have determined the atomic structure of GRK6 in complex with AMPPNP, a non-hydrolyzable nucleotide analog, as a representative of the GRK4/5/6 family. GRK6 is involved in motor neuron function and thus is a potential drug target for the treatment of Parkinson’s disease. To examine the GRK1/7 family, we have determined the structure of GRK1 in complex with ADP and ATP, as well as in its apo form. GRK1, also known as rhodopsin kinase, regulates the amplitude of the light response in rod cells. One important result from these studies has been to provide what so far has been elusive with GRK2: models of a GRK in different ligand states and the resolution of structural elements believed to be involved in binding GPCRs.

The most well-established physiological targets of GRKs are activated GPCRs. GRKs are unique among protein kinases for their ability to recognize only the active form of the receptor.
Thus, we believe that GRKs can be used to trap the activated state of a GPCR. Determining the structure of a GPCR in its activated state is one of the holy grails of modern pharmacology. Over the course of our studies, we have developed a toolbox of different GRKs that we can produce in abundance and use to probe the molecular determinants of GRK-receptor interaction. Specifically, we are studying how GRK2 interacts with the squid photoreceptor, a Gαq-coupled receptor, and how GRK1 interacts with its physiological target rhodopsin. While a crystal structure of these complexes is the most important goal, we are also defining the GPCR binding sites on GRKs with site-directed mutagenesis and biochemical assays, cross-linking studies, and co-crystal structures of the intracellular loops of the receptor with GRKs. These studies will further help us define the molecular architecture of the signaling complexes that assemble around activated GPCRs. Because of the therapeutic potential of inhibiting GRK function, we are also investigating the structure of GRKs in complex with various inhibitors. For example, we are currently solving the structure of an RNA aptamer that binds GRK2 with 10-100 nM affinity. High-resolution models of this complex and of other inhibited GRKs would facilitate the design of new molecular tools and therapeutic leads for the treatment of cardiovascular disease.

Heterotrimeric G Protein-Regulated Rho Guanine Nucleotide Exchange Factors (RhoGEFs)

GPCRs are also known to be involved in cell transformation, cancer progression, and metastasis. One pathway by which this occurs is the activation of RhoA, a key regulator of cytoskeletal structure and gene transcription. Recently, two families of enzymes responsible for linking these GPCRs to RhoA have been identified. The first family is activated by the G proteins Gα12/13 and is critical for platelet activation during wound repair. Of this group, our lab has been studying leukemia-associated RhoGEF (LARG), one of the few RhoGEFs known to be directly responsible for a human cancer. We have determined structures of the catalytic DH/PH domains of LARG alone and in complex with its substrate RhoA, and we are analyzing LARG function using site-directed mutants and either fluorescence-polarization or FRET-based nucleotide exchange assays. We have also determined atomic structures of activated Gα12 and deactivated Gα13 subunits. Future goals are to determine atomic structures of larger fragments of LARG and of their complexes with either Gα12 or Gα13, and thereby to elucidate the mechanism by which LARG mediates signal transduction from Gα13 to RhoA.

RhoA is also activated by Gαq-coupled receptors via a second family of enzymes represented by p63RhoGEF. We recently published the structure of the Gαq-p63RhoGEF-RhoA complex, capturing a snapshot of three nodes of a signal transduction cascade connecting heterotrimeric and small G proteins. Together with the Wieland lab (U. of Heidelberg) and the Miller lab (Oklahoma Medical Research Foundation), we showed that this pathway is conserved from nematodes to humans and that humans possess a family of RhoGEFs related to p63RhoGEF that respond to hormones impinging on Gαq-coupled receptors. This family is expected to be at least partly responsible for maladaptive events that occur during heart disease, such as cardiac hypertrophy. Current research efforts in the lab aim to understand the mechanism by which Gαq activates p63RhoGEF, using site-directed mutagenesis and cell-based assays.
The involvement of Propionibacterium acnes in the pathogenesis of acne is controversial, mainly owing to its dominance as an inhabitant of healthy skin. This study tested the hypothesis that specific evolutionary lineages of the species are associated with acne while others are compatible with health. Phylogenetic reconstruction based on nine housekeeping genes was performed on 210 isolates of P. acnes from well-characterized patients with acne, from patients with various opportunistic infections, and from healthy carriers. Although evidence of recombination was observed, the results showed a basically clonal population structure that correlated with allelic variation in the virulence genes tly and camp5, with pulsed-field gel electrophoresis (PFGE) type and biotype, and with expressed putative virulence factors. An unexpectedly widespread geographic and temporal dissemination of some clones was demonstrated. The population comprised three major divisions, one of which, including an epidemic clone, was strongly associated with moderate to severe acne, while the others were associated with health and opportunistic infections. This dichotomy correlated with previously observed differences in in vitro inflammation-inducing properties. Comparison of five genomes representing acne- and health-associated clones revealed multiple cluster- and strain-specific genes that suggest major differences in ecological preferences and redefine the spectrum of disease-associated virulence factors. The results of the study indicate that particular clones of P. acnes play an etiologic role in acne while others are associated with health.

Everybody knows acne: 80% of us are or have been affected by this disease, with more or less severe consequences for our well-being. We and other research teams have compared the skin microbiota (i.e., the microorganisms colonizing human skin) of healthy and acne-affected skin and found that certain types of the predominant bacterium Propionibacterium acnes are acne-associated whereas other types are associated with healthy skin. This important finding will be exploited in our proposed project to treat and cure acne.

Be your own researcher! Since acne is such a common disease, we invite our funders to take an active part in our research efforts. Many people have suffered from acne in their adolescence, and some may have developed ideas for anti-acne treatment. This "crowdscience" project invites individuals to step forward and share their ideas. We will award the best ideas: three project ideas will be tested in our laboratories with the appropriate technology and our know-how, including a skin cell culture model and tests of modulating effects on the skin microbiota.
Digital tomosynthesis creates a three-dimensional picture of the breast using x-rays. Several low-dose images taken from different angles around the breast are used to create the final 3-D picture. A standard mammogram creates two-dimensional images of the breast, typically two x-ray views of each breast. Digital tomosynthesis is approved by the U.S. Food and Drug Administration, but is not yet considered the standard of care for breast cancer screening. Because it is relatively new, it is available at a limited number of hospitals.

A study has found that when radiologists looked at digital tomosynthesis images along with digital mammogram images, they were more accurate and had lower false positive recall rates compared to radiologists who looked only at digital mammograms. A false positive is an abnormal area that looks like cancer on a mammogram but turns out to be normal. Besides worrying about being diagnosed with breast cancer, a false positive means more tests and follow-up visits, which can be stressful.

The research was published online on Nov. 20, 2012 by Radiology. Read the abstract of "Assessing Radiologist Performance Using Combined Digital Mammography and Breast Tomosynthesis Compared with Digital Mammography Alone: Results of a Multicenter, Multireader Trial."

The research comprised two studies. In the first study, 12 radiologists looked at breast images from 312 women. In the second study, 15 radiologists looked at breast images from 310 women. All the radiologists had more accurate diagnoses when they looked at both digital mammograms and digital tomosynthesis compared to looking only at digital mammograms:

- radiologists were about 11% more accurate in correctly identifying any cancer in the breast in study one
- radiologists were about 16% more accurate in correctly identifying any cancer in the breast in study two

Adding digital tomosynthesis to digital mammograms also reduced the number of false positives found by all the radiologists:

- false positive recall rates dropped by nearly 39% in study one and by about 17% in study two

While the results of this small study are very promising, more research needs to be done before digital tomosynthesis becomes part of routine breast cancer screening. Because it is another imaging test, digital tomosynthesis exposes women to additional radiation. Researchers are looking at ways to replace a standard mammogram image with one created from digital tomosynthesis images to reduce radiation exposure.

Visit the Breastcancer.org Digital Tomosynthesis page to learn more about how the test is done and how it's different from a mammogram.
Am Fam Physician. 1999 Apr 15;59(8):2331-2332.

Head trauma in children results in 600,000 emergency department visits and 95,000 hospital admissions per year. It is likely that many more such children are evaluated in physicians' offices. Predicting which children require diagnostic imaging can be difficult, and no established guidelines are in place to direct physicians who care for pediatric patients with head trauma. Published guidelines are based on limited clinical data and are not followed uniformly in practice; in addition, they generally do not specify which imaging technique is preferred.

Gruskin and Schutzman performed a retrospective study to determine the incidence of skull fracture and intracranial injury in children who presented to a pediatric emergency department. They also attempted to determine which historical features and physical findings predict complications of head injury and whether clinical criteria could aid in the selection of diagnostic imaging. Medical records were reviewed for children younger than two years of age who were discharged from a Boston children's hospital with a diagnosis of head injury, skull fracture, intracranial injury, cerebral contusion or cerebral edema. Excluded from the study were children with a history of seizures, blood dyscrasias, neurologic disorders, ventricular shunts or suspected abuse. Historical information included the estimated height of the fall, level of consciousness, presence of scalp abnormalities, and whether the child was referred by another physician or came directly to the emergency department. When skull radiographs or cranial computed tomographic (CT) scans were obtained, these results were also noted. Children were diagnosed with a "minor head injury" if they had a normal neurologic examination and were alert at discharge, and if radiologic studies were normal.

A total of 291 patients were evaluated; medical records were available for 278 patients (96 percent). Most of these children had gone directly to the emergency department. Approximately 60 percent of the children were younger than 12 months of age; 40 percent were between 13 and 24 months of age. Eighty-two percent of all children were ultimately given a diagnosis of minor head injury, and 18 percent were diagnosed with a skull fracture or an intracranial injury. However, the incidence of skull fracture/intracranial injury was 29 percent in children younger than 12 months of age and 4 percent in those older than 12 months of age. An increase in the height of the fall was associated with a higher incidence of serious injury, although a low height of fall did not rule out a diagnosis of skull fracture or intracranial injury. The incidence of seizure, emesis, behavior changes and loss of consciousness did not differ significantly between children found to have a minor head injury and children diagnosed with skull fracture/intracranial injury. Of those determined to have a minor head injury, 29 percent exhibited a behavior change, and 11 percent had emesis. There was a very high incidence (94 percent) of skull fracture/intracranial injury associated with scalp abnormality. Depressed level of consciousness and presence of skull fracture/intracranial injury were significantly correlated, but 92 percent of children with an isolated skull fracture and 75 percent with intracranial injury had a normal level of consciousness and a nonfocal neurologic examination.
The authors conclude from their data that the incidence of serious head injury from a fall is greatest in children younger than 12 months of age. Many children who apparently have minor falls may have sustained significant head injury, even in the absence of clinical signs and symptoms. Physicians should have a low threshold for ordering imaging studies in children who have fallen. A CT scan could appropriately be ordered, but a skull radiograph may be acceptable in some situations because it is easier to perform and does not require sedation. The authors report that children who fall 3 ft or less and have normal results on scalp examination and no history of neurologic symptoms do not need radiologic evaluation.

Gruskin KD, Schutzman SA. Head trauma in children younger than 2 years. Are there predictors for complications? Arch Pediatr Adolesc Med. January 1999;153:15–20.

editor's note: This study seems to dispel the notion held by many physicians that if a child does not lose consciousness, the chance of a serious head injury is very small. In addition, many children with minor head injuries may exhibit behavior changes and vomiting. It appears that exercising clinical judgment and maintaining close follow-up is still prudent. In addition, there should be a very low threshold for ordering a cranial CT scan in a child under one year of age who has fallen. Obviously, more studies are needed to better define historic and clinical criteria for diagnostic imaging.—j.t.k.
What is the Apgar score and why is it done? What if my baby's skin is slightly yellow? Find out what to expect in those early hours and days after your baby is born.

From the moment your newborn's head comes out of the birth canal, your medical team will evaluate and care for your child. You may not notice much of the care your baby receives, but it is vital to ensure your baby's safe transition to the outside world. After the baby is delivered, bulb suction is used to clear mucus from your baby's airway. As soon as it is clear, you will hear your baby's first cry. Shortly after, the umbilical cord is clamped and cut. If your baby is healthy, your partner can cut the cord, if desired. The baby is then dried and placed on your tummy for a greeting. A blanket may be used to keep the baby warm. Maintaining body temperature is important for both you and the baby.

A clean-up and evaluation will be done. First is a visual check for any deformities. Next is the Apgar score, a measure of the baby's condition based on color, heart rate, respiration, reflex responses, and muscle tone. A score of 0, 1, or 2 is given for each of the five criteria. The criteria are explained in the table below. A score is given at one minute after birth and again five minutes after birth. A sick baby may be evaluated again at 10 minutes after birth. A total score of 7-10 is normal; 4-6 is intermediate; and 0-3 is low. (A simple sketch of this arithmetic appears at the end of this section.)

The evaluation continues with an estimation of gestational age. Babies younger than 37 weeks, older than 42 weeks, or with a weight inappropriate for their age may need special care. Ten minutes after birth, some babies will have a tube passed through their nose and into their stomach. Babies who need this exam include babies who are born:

To help protect young eyes, the baby receives eye drops or an antibiotic ointment. Your baby will also be given an injection of vitamin K. A deficiency of vitamin K can cause hemorrhagic disease of the newborn, a serious disease of excess bleeding. The umbilical cord is treated with a solution to prevent infection. The baby is carefully swaddled, and a knit hat is placed on his or her head to maintain body temperature. If the baby's temperature drops below 96 degrees Fahrenheit (35.5 degrees Celsius), he or she will be placed in an infant warmer. The baby will be returned to you for cuddling as soon as possible. If you plan to breastfeed, you are encouraged to start now. While you feed, take care to keep both you and your baby warm. Your partner is encouraged to join in the baby cuddling.

Monitoring and Evaluation

After delivery, you can send your baby to the nursery so you can sleep, or you can keep the baby in a bassinet in your room. About every eight hours, your postpartum nurse will check your baby's vital signs: temperature, heart rate, and breathing rate. When your baby has fed at least once and has normal vital signs, she will be given a bath. A mild soap is used that will not remove all of the baby's natural antibacterial protection. The baby gets this protection from the whitish, greasy material (vernix) that covers most of his or her body. Within twelve hours, your baby will have a full exam from the hospital doctor or your pediatrician. This includes measurement of weight, length, and head circumference. The major organs, such as the heart, lungs, and skin, are closely examined. Screening tests are done on healthy babies to identify health issues before any symptoms start.
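To make the Apgar arithmetic above concrete, here is a minimal sketch in Python. The rubric (five criteria scored 0-2 each; totals of 7-10 normal, 4-6 intermediate, 0-3 low) is taken directly from the text; the function and variable names are illustrative only and are not part of any clinical software.

```python
# Minimal sketch of the Apgar arithmetic described above.
# Each of the five criteria is scored 0, 1, or 2 by the medical team;
# the category cutoffs (7-10 normal, 4-6 intermediate, 0-3 low) follow the text.

def apgar_total(color, heart_rate, respiration, reflex_response, muscle_tone):
    scores = (color, heart_rate, respiration, reflex_response, muscle_tone)
    if any(s not in (0, 1, 2) for s in scores):
        raise ValueError("each criterion is scored 0, 1, or 2")
    return sum(scores)  # total ranges from 0 to 10

def apgar_category(total):
    if total >= 7:
        return "normal"        # 7-10
    if total >= 4:
        return "intermediate"  # 4-6
    return "low"               # 0-3

# Scores are taken at one minute and again at five minutes after birth.
one_minute = apgar_total(1, 2, 1, 2, 2)
print(one_minute, apgar_category(one_minute))  # 8 normal
```

In practice the individual criterion scores are assigned at the bedside by the medical team; the sketch only totals and categorizes them.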
Newborn Screening Tests

Newborn screening tests check for diseases that can appear early in life. These diseases are not common, but they can cause serious damage if they are not treated. For these tests, blood is drawn from the baby's heel within the first 24 hours of life. Your state's health department decides which diseases are screened in your state. All states screen for hypothyroidism and phenylketonuria (PKU). Both of these conditions can cause intellectual disability if they are not treated. Many states also test for the following:

If your baby tests positive, you will be notified. A second test will be done to be sure it is not a false positive. If your baby tests negative, you will not be notified. Your doctor will receive a copy of the results either way.

Some hospitals will check your baby's hearing. This test is painless and can be done while your baby is sleeping. It takes only a few minutes, and you will have immediate results. If your baby passes the test, there is no hearing problem at this time. If your baby does not pass, further testing is advised.

Oxygen Saturation Screening

Oxygen saturation refers to the amount of oxygen in your baby's blood. The oxygen saturation level is a measure of how well your baby's heart and lungs are working. A tiny red light is attached to the outside of your baby's hand, foot, or wrist. A cord attaches this light to a machine that records the amount of oxygen being carried by blood cells. The measurement will be done at least three times. Ideally, the level will be greater than 94%. If the level is 94% or lower, the doctor will order further tests, such as blood pressure, an electrocardiogram, a chest x-ray, or an echocardiogram. You will also be referred to a pediatric cardiologist.

Some babies have a slight yellow tinge to their skin and eyeballs. This is a sign of jaundice, an excess of bilirubin in the blood. Bilirubin is a pigment that is normally cleared from the blood by the liver, and a newborn's liver is still learning how to remove it. Many babies may appear jaundiced around the second to fifth day of life. Babies who are breastfed may develop jaundice if they do not get enough milk. This condition usually clears within two weeks without treatment. Moms are encouraged to feed often so that the baby will have more bowel movements; bilirubin leaves the body in the stool. If treatment is necessary, the baby is placed under artificial light, which breaks down bilirubin in the baby's skin. In rare cases, prolonged jaundice may be a sign of something more serious.

If your baby is a boy, you may like to have him circumcised. This can be done after he has urinated at least one time and is feeding well. The baby is given local anesthesia and the procedure is quick.

Take advantage of your time in the hospital. The nurses can help you with feeding, diaper changing, bathing, and other caretaking duties. They can answer any questions and provide support. Most moms are discharged from the hospital two to three days after giving birth. After you are home, the medical support does not end. Call your pediatrician's office or the maternity ward if you have any questions. You'll bring your baby to her pediatrician for her one-week appointment, usually called a "well-baby checkup." You will have these checkups regularly during the first year. It is normal for your baby to lose weight: most newborns lose 5%-7% of their birth weight within the first few days of life. Breastfed babies gain this back by two weeks of life.
Formula-fed babies often regain their weight sooner.

You will need to tend to your baby's umbilical cord. Each time you change a diaper, examine the cord for signs of redness or drainage, which could signal an infection. Once a day, apply 70% alcohol to the cord; the alcohol helps dry up the cord and reduces the risk of infection.

Caring for Your Baby

Some parents are a bit overwhelmed in those first days or weeks home from the hospital. Try to stay calm, trust your instincts, and ask for help when you need it. There are many guidelines for how to care for your baby, but it is not an exact science. As long as you provide your baby with warmth, love, food, and cleanliness, you're doing your job. With time and patience, you and your baby will figure each other out. Remember to enjoy this time. Despite those nights that seem unending, these early weeks will go by too quickly.

- Reviewer: Andrea Chisholm, MD
- Review Date: 03/2014
- Update Date: 04/30/2014
"Fluorescence tools that provide high-resolution fluorescence pictures are likely to provide more reliable scores than fluorescence devices that assess via a single spot," wrote the study authors, from the University of California, San Francisco School of Dentistry. "The better visibility of the high-resolution fluorescence imaging could prevent unnecessary operative interventions." They compared several light-based diagnostic modalities -- including fiber-optic transillumination, optical coherence tomography, and fluorescence diagnostic tools -- with the "gold standard" for caries detection: the International Caries Detection and Assessment System (ICDAS II). "In order to easily apply the CAMBRA principles ... it is useful to introduce state-of-the-art sensitive caries diagnostic tools into the dental office armamentarium," the study authors wrote. "If caries lesions are detected early enough in a precavitated stage, intervention methods such as fluoride application, sealants, preventive resin restorations, laser treatment, and antibacterial therapy can be applied to reverse the caries process." For this study, the researchers recruited 100 patients (58 females, 42 males; average age 23.4 ± 10.6 years) presenting with 433 occlusal, unfilled surfaces of posterior teeth (90 bicuspids, 343 molars). Two examiners independently evaluated the patients' ICDAS II scores. They scored 110 fissure areas as sound (ICDAS II code 0), 450 as ICDAS II code 1 (mineral loss in the base of a fissure), and 314 as ICDAS II code 2 (mineral loss extended from the base). Another 107 cases were scored as ICDAS II code 3 (early cavitation with first visual enamel breakdown), while 26 cases were assigned ICDAS II code 4 and/or code 5 (more progressed carious lesions). Fluorescence methods compared Using the Diagnodent (KaVo Dental), the Spectra optical caries detector (Air Techniques), the SoproLife (Acteon), and a quantitative light fluorescence (QLF) research tool, the examiners then evaluated up to five fissure areas on each tooth, for a total of 1,066 areas of interest for each system. The Diagnodent uses red laser light (655 nm) to illuminate regions of a tooth; the emitted light is channeled through the handpiece to a detector, and the device then displays a digital number (1-99) and emits a beeping sound. A higher number indicates more fluorescence and thus a more extensive lesion beneath the surface; a value of 5-10 indicates initial caries in enamel, 10-20 indicates initial caries in dentin, and greater than 20 indicates caries in dentin. In this study, the researchers recorded Diagnodent values between 0 and 10 for 424 of the evaluated pit-and-fissure areas, followed by 291 spots with values between 11 and 20. The remaining 326 measurements showed values between 21 and 99, including 31 areas with a Diagnodent value of 99. The Spectra device utilizes fluorescence from light-emitting diodes in the 405-nm wavelength. When the light is projected onto the tooth surface, cariogenic bacteria fluoresce red, while healthy enamel fluoresces green. An on-screen picture of the tooth includes false coloring and a number scale intended to predict the caries depth: 1.0-1.5 means early enamel caries, 1.5-2.0 is deep enamel caries, 2.0-2.5 is dentin caries, and 2.5 or above signifies deep dentin caries. 
For this study, a Spectra value of 0 was observed 114 times, while values between 1.0 and 1.9 were displayed 739 times, a value of 2.0 to 2.9 occurred 172 times, and a value of 3.0 to 3.9 was seen in 14 instances (3.9 was the highest value measured).

The SoproLife system combines the advantages of a visual inspection method with a high-magnification oral camera and laser fluorescence device. In "daylight mode," white LEDs illuminate the tooth, while in "fluorescence mode" the excitation results from four blue LEDs at 450 nm. In order to classify caries lesions in early stages using the SoproLife, the study authors developed a new scale, in which daylight and fluorescence pictures for occlusal fissure areas were each categorized into six groups, code 0 to code 5.

In daylight mode, code 0 is given for sound enamel with no changes in the fissure area. Code 1 is applied if the center of the fissure shows whitish or slightly yellowish change in the enamel. In code 2, the whitish change is wider and extends to the base of the pit-and-fissure system and comes up the slopes of the fissure system in the direction of the cusps. In code 3, fissure areas are rough and slightly open, indicating the beginning of enamel breakdown. In code 4, the caries process is no longer confined to the fissure width, and in code 5 there is obvious enamel breakdown with visible dentin.

The scoring is slightly different in SoproLife's blue fluorescence mode. Fluorescence mode code 0 is given when the fissure appears shiny green and the enamel appears sound with no visible changes. Code 1 is selected if a tiny, thin red shimmer in the pit-and-fissure system is observed, which can come slightly up the slopes of the fissure system. In code 2, darker red spots confined to the fissure are visible. For code 3, dark red spots extend as lines into the fissure areas but are still confined to the fissures. If the dark red (or red-orange) extends wider than the confines of the fissures, a code 4 is assigned. Code 5 is selected if obvious openings of enamel are seen with visible dentin.

For this study, and using this new scoring system, in daylight mode the examiners scored 142 pit-and-fissure areas as code 0, 436 as code 1, 165 as code 2, 138 as code 3, 96 as code 4, and 89 as code 5. In fluorescence mode, they scored 242 areas as code 0, 263 as code 1, 224 as code 2, 133 as code 3, 121 as code 4, and 81 as code 5.

Finally, QLF uses a 370-nm light source to generate green autofluorescence of the tooth. With QLF, the demineralized area appears opaque and darker than sound enamel. In the current study, mineral loss values were evaluated at 988 sites, with 353 sites showing a mineral loss of less than 10%, 463 sites between 10% and 20%, 131 sites between 20% and 30%, and 42 sites between 30% and 67%.

Regression curve analysis

Examining the relationship between the ICDAS II scores and the scores obtained using the different diagnostic tools revealed that for each ICDAS II code, each device provided a distinct average score. "To evaluate the ability to discriminate between two different scores for each assessment method, linear regression curves were calculated for each tool," the study authors wrote. "The steeper the regression curve, the higher the ability of a tool to discriminate between two values and the more useful the tool is in clinics." Normalized data linear regression showed that the SoproLife assessment tools yielded the best caries score discrimination, followed by the Diagnodent and the Spectra, they noted.
In other words, "when using SoproLife, a judgment call for classification of a lesion into sound, precavitated or cavitated ... is easier to make than with the other tools," they wrote. The new SoproLife daylight and blue fluorescence codes can serve as "a distinct classification" for caries lesions, enabling practitioners to predict the histological depth of caries lesions, they added. "When comparing spot-measuring fluorescence tools with those providing high-resolution fluorescence pictures, the better visibility provided by the high-resolution tools might help prevent unnecessary operative interventions that are based solely on high fluorescence scores," the researchers concluded. "The observation capacity of such a system can guide clinicians toward a more preventive and minimally invasive treatment strategy in the course of monitoring lesion progression or remineralization over time." The authors declared that there was no conflict of interest regarding this study, which did receive grant support from Acteon.
Medications necessary for disease management can simultaneously contribute to weight gain, especially in children. Patients with preexisting obesity are more susceptible to medication-related weight gain. How equipped are primary care practitioners to identify and potentially reduce medication-related weight gain? To inform this question, which is germane to public health, we sought to identify potential gaps in clinician knowledge related to adverse metabolic drug effects involving weight gain. The study analyzed practitioner responses to the pre-activity questions of six continuing medical education (CME) activities from May 2009 through August 2010. The 20,705 consecutive, self-selected respondents indicated varied levels of familiarity with the adverse metabolic effects and psychiatric indications of atypical antipsychotics. Correct responses were lower than predicted for drug indications pertaining to autism (17% below predicted); drug effects on insulin resistance (62% below predicted); chronic disease risk in mental illness (34% below predicted); and drug safety research (40% below predicted). Pediatricians' knowledge scores were similar to those of other primary care practitioners. Clinicians' knowledge of medication-related weight gain may lead them to overestimate the benefits of a drug in relation to its metabolic risks. The knowledge base of pediatricians appears comparable to that of their counterparts in adult medicine, even though metabolic drug effects in children have only recently become prevalent.

Keywords: Medication effects on appetite; Insulin resistance; Drug-related weight gain; Mental illness as a risk factor for obesity; Adverse metabolic drug effects; Drug safety research; Nutrition knowledge of primary care practitioners

No study to date assesses the knowledge base around medication-related weight gain in pediatric or adult primary care medicine. We therefore sought to characterize what practitioners know about metabolic drug effects in the context of clinical decision-making. Informed clinicians can often modify their patients' risk of adverse metabolic drug effects, even when medications are essential for disease management. Practitioners can choose the lowest effective dosing and therapies with fewer metabolic effects; treat underlying medical conditions that can contribute to weight gain, such as sleep apnea and hypothyroidism; correct nutritional deficiencies such as vitamins B12 and D to facilitate lifestyle adherence; and counsel patients on drug-related increases in appetite, emphasizing adherence to medication and healthful lifestyle choices.

Among the patient groups most vulnerable to metabolic drug effects are children. Children are more susceptible to central nervous system effects of medications. Some metabolic drug effects are unique to children at certain growth stages and demonstrate a prolonged effect [5,6]. Metabolic drug effects also tend to be delayed relative to the therapeutic benefit, especially in children. Concurrently, drug exposure is increasing in children, the age group with the fastest growing number of prescriptions, in part due to obesity-related chronic diseases. Preexisting overweight and obesity heighten vulnerability to metabolic drug effects.

Managing adverse metabolic drug effects is relatively new to the practice of pediatrics. Historically, pediatricians focused on medication-related weight loss and stunting, recorded as step-offs on patient growth charts.
Today's pediatric practice may require equally diligent diagnosis and management of medication-related weight gain, especially since preexisting overweight and obesity, defined as a body mass index at or above the 85th percentile, have reached approximately 32% of the U.S. population ages 2-19 [8,9]. Disseminating drug safety updates to pediatricians holds other challenges as well. Safety information specific to children represents a recent advance, and practitioners may not realize they need to watch for such updates. Metabolic drug effects specific to children and adolescents may first be identified years after a drug is on the market, because metabolic effects in children tend to manifest beyond the timeframe of clinical trials. Disseminating drug safety information may be additionally complicated by practice patterns. For example, psychiatrists may diagnose and prescribe highly specialized treatment and look to primary care practitioners to monitor patients for adverse drug effects.

Clinicians draw on their knowledge base of adverse metabolic drug effects for clinical decision-making. The elevated and unique risks of metabolic drug effects and the major shifts in disease prevalence and practice patterns in pediatrics together prompted our interest in confirming that primary care clinicians who care for children have a knowledge base comparable to that of their adult medicine counterparts.

Continuing medical education (CME) activities were developed in partnership with CME providers. Inclusion criteria for partners were: experience implementing pre-activity questions, having primary care practitioners as a target audience, willingness to co-develop programs relevant to medication-associated weight gain, providing free public access to associated media and print materials, and collaborating within time and budget constraints. Partners were selected across different media (audio, lectures, or web-based activities): Audio-Digest Foundation, Medscape CME, The Maryland Academy of Family Physicians, and The FDA Scientific Rounds Program.

The instrument in this study, the pre-CME activity questions, measures practitioners' baseline knowledge relevant to the content of the CME activities. Pre-activity questions were 4-choice multiple choice questions or true-false questions. They were directed at clinical decision-making and were organized into four categories: 1) drug indications, 2) metabolic drug effects, 3) drug safety updates, and 4) patients most at risk. Each CME program partner selected among the pre-activity questions and adapted the wording to their standard format. The six CME activities pertained to either atypical antipsychotic use in children or obesogenic medications in general. They were offered through the CME partners at varied intervals between June 2009 and August 2010. Each activity was an audio program, web-based program, or conference lecture, and awarded a maximum of 0.5 to 2 Category 1 CME credits. The program content, participant characteristics, and pre-activity questions are presented in Table 1.

Table 1. Summary of continuing medical education (CME) programs, June 2009 – August 2010

In order to compare the knowledge of practitioners specializing in pediatrics with that of adult medicine practitioners, we developed Activity 5, which is applicable to the care of children and adults. Activity 6 was the only program whose target audience was not primary care practitioners.
The biweekly activity is attended by a diverse group of health care practitioners and scientists, all of whom work in regulation. The activity was included to better characterize practitioner knowledge of the autism indication for atypical antipsychotics. The information used for the response analysis was obtained from the CME providers as anonymized source data, with no way to match responses with individuals. No personal identifiers were used. The respondents were participating in CME activities, where responses to the related learning questions are routinely aggregated to inform future CME development and related research.

We analyzed the data both with and without comparison to predicted scores. Predicted scores facilitate comparison between multiple choice questions with four choices and binomial true-false questions, which differ in the likelihood of selecting a correct answer by chance alone. For this analysis, predicted scores were 70% for multiple choice and 85% for true-false questions. The basis for these numbers comes from Audio-Digest's overall average pretest scores, which are 70% [personal communication, August 2010], and the pedagogic intent of a CME to build on participants' existing practice-relevant knowledge. Each response holds an inherent error, since a participant with a constant knowledge base could score better or worse on the pre-activity questions depending on the circumstances at that moment. We estimated this variability at ten percent (two-tailed, two standard deviations). We also analyzed the participants' responses to the choice identified as close to correct, also called the second-best answer.

STATA® statistical software was used to run discrete-response regression analyses on pre-activity question responses. Probit regressions were used for binomial dependent variables, analyzing whether the respondent answered the CME question correctly. The probit models give a standard normal z-score rather than a t-statistic, with the total variability explained as a pseudo-R2 rather than a normal R2; McFadden's pseudo-R2 is reported. The probit analysis reports the overall significance of the model using an LR chi-square. The effect of a control variable in predicting correct responses (a certain percentage above/below the average) is calculated as the difference in probability of getting a question correct versus a baseline probability. For this analysis, the baseline probability is where all control variables are set to their population means. The control variables used in the probit models were educational degree, medical specialty, and CME participation date. Geographic region was only provided by some respondents and was therefore not included in the analysis. The type of medical practice, such as hospital-based or solo practice, was not among the data obtained by the CME providers. Partially completed responses were included in the analysis. Having all pre-activity questions left blank was considered equivalent to nonparticipation, and these entries were excluded.

Since the sample sizes of the distinct CME activities varied, both the unweighted and weighted averages of correct responses are reported. To assess the extent to which the pre-activity responses could be generalized among primary care practitioners, the responses were compared across the diverse CME programs detailed in Table 1.
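As a rough illustration of the analysis pipeline described above, the sketch below uses Python with statsmodels rather than STATA. The data frame, column names, and values are hypothetical stand-ins for the respondent-level study variables; only the shape of the computation (the predicted-score comparison, the probit fit with McFadden's pseudo-R2 and LR chi-square, and the baseline probability at covariate means) mirrors the text.

```python
# Hedged sketch of the analysis described above (statsmodels, not STATA).
# All data and column names are hypothetical stand-ins for the study variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Difference between an observed correct-response rate and the predicted score
# (70% for 4-choice multiple choice, 85% for true-false), in percentage points.
def below_predicted(observed, true_false=False):
    predicted = 0.85 if true_false else 0.70
    return (predicted - observed) * 100

print(below_predicted(0.08))  # -> 62.0, i.e., "62% below predicted"

# Hypothetical respondent-level data for the probit regression.
rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    "correct": rng.integers(0, 2, n),          # 1 = answered the question correctly
    "is_pediatrician": rng.integers(0, 2, n),  # one specialty dummy shown for brevity
    "is_physician": rng.integers(0, 2, n),     # one degree dummy shown for brevity
    "quartile": rng.integers(1, 5, n),         # CME participation date, by quartile
})

X = sm.add_constant(df[["is_pediatrician", "is_physician", "quartile"]])
model = sm.Probit(df["correct"], X).fit(disp=0)

print(model.summary())                         # z-scores for each control variable
print("McFadden pseudo-R2:", model.prsquared)
print("LR chi-square:", model.llr, "p =", model.llr_pvalue)

# Baseline probability with all controls at their population means, as used for
# the "X% above/below average" effect sizes quoted in the results.
baseline = np.asarray(model.predict(X.mean().to_frame().T))[0]
print("baseline probability:", baseline)
```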
The scores on pre-activity responses were compared to self-reported learning in Activities 2–3, where participants were asked, "Please list one concept or strategy gained from this activity." Participant evaluations of the CME programs were recorded to confirm satisfactory evaluations. The rating of the CME activity on a 1–5 Likert scale is a composite score that reflects practice relevance and an appropriate teaching level for the target population. Not all participants completed all pre-activity questions. The data were analyzed both including and excluding question-specific non-responders, to detect a potential bias introduced by partial completion. Activities 2 and 3 were the longest-running programs, each offered for 13 months. They were analyzed for a temporal trend, since a news story or regulatory change during the interval could potentially change practitioner baseline knowledge or practice patterns.

There were 20,705 participants in the combined six CME activities, which spanned 15 months. Each participant answered one or more of the following questions.

See Table 2. For the first question, both the average correct response rate of 76% and the weighted correct response rate of 79% are within the predicted range. For the second question, the average correct response rate is 53% (17% below predicted) and the weighted average correct response rate is 52% (18% below predicted).

Table 2. Responses to multiple choice pre-activity questions on use of antipsychotic medications

See Table 3. The average correct response rate is 65% (20% below predicted) and the weighted average is 67% (18% below predicted).

Table 3. Responses to the true-false pre-activity question on use of antipsychotic medications

The participants in Activity 6 were asked: Recommended treatment of autism includes all EXCEPT:

A. Correct nutritional deficiencies - 6 (12%)
B. Treatment of concurrent attention-deficit hyperactivity disorder - 5 (10%)
C. Prescribe atypical antipsychotics - 29 (57%)
D. Use behavioral therapies following early diagnosis - 11 (21%)

The rate of correct response, choice C, is 57% (13% below predicted).

Adverse metabolic effects

Participants in Activity 5 were asked to respond to: After diagnosing Ed with metabolic syndrome, Ed's doctor advised him to reduce his weight by 10%, a total of 18 pounds, by diet and exercise. Which of the following medications potentially makes it more difficult for Ed to achieve his goal?

A. Angiotensin-converting enzyme inhibitors - 6667 (41%)
B. Diuretic - 1243 (8%)
C. Vitamin D - 1106 (7%)
D. Biguanide - 7021 (43%)

The rate of correct response, choice B, is 8% (62% below predicted). Specialty did not predict response to this question.

Participants in Activity 5 were also asked to respond to: Within months of being diagnosed with bipolar disorder at age 14, Sara gained 20 pounds. Which of the following is likely to contribute to her recent weight gain and body mass index of 28?

A. Vitamin D deficiency - 952 (6%)
B. Atypical antipsychotic agent - 10349 (63%)
C. An eating disorder - 1072 (7%)
D. Psychostimulant agent - 3948 (24%)

The average correct response rate of 63% falls within the predicted range. The predicted probability of answering the question correctly given the regression control variables is 65%. Analysis by specialty indicates that mental health specialists scored 28% better than average (z=27; p<0.01), family practitioners scored 14% higher (z=12; p<0.01), internal medicine specialists scored 9% higher (z=7; p<0.01), and endocrinologists scored 8% higher (z=3; p<0.01).
The regression explains 9% (pseudo-R2=0.09) of the total variability in responses and was very significant in predicting scores (LR chi-square=1833; p<0.01).

Table 4 indicates the responses to a pre-activity question on adverse drug effects. See Table 5. For the first question, the average correct response rate was 61% with a weighted correct response average of 67%. For the second question, the average correct response rate was 75% with a weighted average also of 75%. These are within the predicted range.

Table 4. Responses to the true-false pre-activity question on adverse drug effects

Table 5. Responses to multiple choice pre-activity questions on adverse drug effects

Patients at increased risk

Responses to a question about vulnerable populations are presented in Table 6. The average correct response rate is 36% (34% below predicted) with a weighted average of 33% (37% below predicted).

Table 6. Responses to the pre-activity question on vulnerable populations

Figure 1 illustrates the correct responses compared to the predicted responses for the pre-activity question on mental illness and chronic disease risk. The responses are presented across CME activities 1–5. Standard error bars are shown. Since one of the three incorrect responses (Choice C) in the question was close to the correct answer, it may reflect a stronger knowledge base than the other two incorrect choices. We therefore included this response in the figure.

Figure 1. Responses to pre-activity question on chronic disease risk in mental illness.

Activity 5's large sample size allowed for further analysis. Participants had a predicted probability of 31% of answering correctly. Participants specializing in mental health scored 14% higher than average (z=11; p<0.01) and family practitioners scored 4% higher (z=3; p<0.05). The probit regression explained 1% (pseudo-R2=0.01) of the total variability in the responses and was significant (LR chi-square=202, p<0.01).

Drug safety updates

Participants in Activity 5 were asked to respond to: Which of the following statements is correct?

A. Comparative effectiveness trials are part of the drug approval process [5622 (34%)]
B. Phase 3 clinical trials are powered to identify appetite-stimulating effects of medication [3346 (20%)]
C. Incidence of weight gain can be calculated from a passive adverse events reporting system [4424 (27%)]
D. Current legislation requires clinical trials in pediatric populations [2452 (15%)]

On average, 15% of participants answered correctly (55% below predicted), selecting choice D. The predicted probability of answering correctly given the regression variables is 14%. Analysis by specialty indicates that pediatricians scored 7% higher than average (z=6; p<0.01) and mental health specialists scored 2% higher (z=2; p<0.02). General practitioners scored 3% below average (z=−2; p<0.03) and emergency medicine specialists scored 4% lower (z=−2; p<0.04). The regression explained 2% of the total variability in answers (pseudo-R2=0.02) and was very significant (LR chi-square=242; p<0.01). Note that this question had the highest non-response rate, with 517 (3%) of participants leaving the question blank. Regression analysis excluding non-responders had the same significant outcome variables as the analysis that included non-responders.

See Table 7. The average correct response was 47% (23% below predicted) and the weighted average was 51% (19% below predicted).

Table 7. Responses to pre-activity questions on drug safety information
For Activity 5 (n=16,361), the top three professions of participants were nurse practitioners (52%, n=8407), physicians (38%, n=6212), and physician assistants (3%, n=476). The top specialties were psychiatry/mental health (12%, n=2022), family medicine (11%, n=1875), internal medicine (10%, n=1639), general practice (6%, n=946), and pediatrics (6%, n=906). The regression analysis predicting correct pre-activity responses controlled for specialty, professional degree, and date of CME participation by quartile. The time of participation was included in the regression analysis because it explained a significant portion of the variability but yielded no clear pattern for interpretation. Results of the regression analysis follow each applicable question. The results of the analysis by specialty concur with the practice demands of each specialty. For example, family physicians, practitioners who follow patients across the lifespan, were more likely to correctly identify the profound extent to which mental illness shortens life expectancy due to chronic diseases.

The strength of the instrument is its ease of use in the context of CME programming and its associated ability to identify trends in practitioner knowledge and some broad comparisons among practitioners. However, since the instrument is composed of multiple choice questions, responses to any one question are more appropriately viewed in the context of the full instrument. In order to assess the variability of the instrument, responses were compared across CME programs that varied in content, timeframe, recruitment, and question administration. Figure 1 depicts the responses. Responses among CME programs varied within the pre-established +/−10% test error, except for one program with a small sample size. The unweighted, correct response averages across CME programs are reported.

To assess the extent to which recruitment methods may influence the pre-activity responses, the overall scores of the two Audio-Digest programs were compared. The two programs differ only in how the participants were recruited: as either subscribers or one-time participants. The 10% difference in responses falls within the pre-established test error.

Practice relevance and the perception of the CME program's usefulness were considered in the instrument analysis. The participants in each of the six activities were asked to evaluate the program on a 1–5 Likert scale, 5 being the highest score. The ratings for each program ranged from 4.0 to 5.0, with an unweighted mean score of 4.5, suggesting that all were well-received and applicable to participants' clinical practice. The pre-activity question responses were correlated with what participants said they learned from the activity. Participants in Activities 2 and 3 were asked to "Please list one concept or strategy gained from this activity." The written responses fell into categories consistent with pre-activity responses: pediatric indications (20), patient adherence (2), adverse effects (71), MedWatch reporting (9), drug interactions (5), and patient risk factors (8).

A temporal trend in correct responses was not observed between the first three months and the total 13 months of the responses to Activities 2 and 3. Neither was any single news event or regulatory change identified that might be anticipated to influence practitioner knowledge on this topic during the study period.
The childhood obesity epidemic is recent; however, practitioners who care for children appear as familiar with adverse metabolic drug effects as practitioners who care for adults. Those specializing in pediatrics performed better on a question about drug research, perhaps reflecting recent educational activities directed towards pediatricians. Across medical specialties, practitioner knowledge of medication-related weight gain was low in four areas of our study. Each of the knowledge gaps, if practice relevant, would lead practitioners to overestimate a medication's benefits or underestimate its adverse metabolic effects. The net effect of each knowledge gap would therefore influence clinical decision-making in the same direction, potentially contributing to excess metabolic dysfunction. The four areas of low practitioner knowledge are as follows:

Responses to questions about drug indications and the use of antipsychotics in autism suggest that some practitioners may mistake the management of aggressive symptoms for treatment of the underlying disease process. Additionally, new oral preparations are available for children who have difficulty swallowing pills. These preparations should be used before prescribing intramuscular preparations, which have greater metabolic effects and do not have pediatric use indications at the time this manuscript is written.

Among the questions pertaining to adverse metabolic effects, only 8% of practitioners selected the intended response that some diuretics have been associated with promoting insulin resistance. The 41% who incorrectly selected "angiotensin-converting enzyme inhibitors" are unlikely to have been aware that some diuretics promote insulin resistance while angiotensin-converting enzyme inhibitors, in contrast, may be insulin sensitizing; the distinction between these two antihypertensive therapies in a patient with metabolic syndrome would be practice relevant. Furthermore, these respondents may have erroneously equated the reduction in peripheral edema with meaningful, long-term weight loss among their patients. The 43% of practitioners who incorrectly selected "biguanide" may not have realized that metformin is in this medication class, so the response would have been more informative if the answer had read "biguanide (metformin)."

Responses reflected low baseline knowledge of drug safety research and MedWatch, a passive surveillance program. It is possible that practitioners lack a framework for managing the escalating volume of drug-related information. Our findings parallel those of a recent study of physician knowledge and adverse event reporting of dietary supplements.

Mental illness is associated with increased vulnerability to adverse metabolic effects. The profound chronic disease mortality among patients with mental illness was under-recognized across specialties as measured by our instrument. Awareness of the high mortality from chronic diseases among patients with mental illness might be unlikely to cause practitioners to alter a patient's psychiatric medications; however, it would guide overall care, such as screening, referring, concurrent medication prescribing, and managing co-morbidities.

The knowledge gaps parallel the scarcity of peer-reviewed publications on medication-related weight gain other than that associated with the atypical antipsychotics. A review of the proceedings of a large international conference on obesity revealed a similar paucity of research and translational initiatives surrounding medication-related weight gain.
Additionally, current drug product information and labeling lack a consistent format or location for communicating the potential effects of a drug on the patient's appetite and underlying metabolism. Practitioners were familiar with the general indications for use of atypical antipsychotics in children and with their adverse metabolic effects, including prolactinemia, dyslipidemia, elevated liver enzymes, insulin resistance, and weight gain, findings that correlate with the lay and medical literature's recent attention to the topic. Similarly, education initiatives about pediatric drug labeling have been directed to pediatricians, and pediatricians were more knowledgeable than other practitioners about ongoing pharmacovigilance.

The instrument demonstrated internal consistency across diverse CME programs (Table 1), suggesting the findings may appropriately be generalized across U.S. primary care practitioners. The sampling frame captures participants across the United States, with diverse patient populations in diverse practice settings. The participants' degrees (nurse practitioner, physician assistant, and medical doctor) correctly represent the educational diversity of primary care practitioners. Pediatricians scored as well as their adult medicine counterparts, suggesting that future initiatives could appropriately be directed to all primary care practitioners.

Additional merits of the instrument are that it can be implemented in a timely and cost-sensitive way. It can be applied to assess evidence-based practice knowledge. Study findings can provide baseline data by which to gauge the effectiveness of future interventions. The instrument also provides a continuing education curriculum developed free of industry interests. An internet curriculum on safe medication use measurably improved clinician practice choices [19,20]. Knowledge is one of many clinical practice barriers to modifying medication-related weight gain and merits incorporation into future initiatives. The findings, taken with the population prevalence of obesity, the emerging treatment options, and the central role of the primary care practitioner, suggest a significant prevention opportunity.

Pediatricians' knowledge base of adverse metabolic drug effects appears comparable to that of their counterparts in adult medicine. Regardless of medical specialty, practitioners participating in the CME programs showed low knowledge on specific questions pertaining to drug indications, adverse metabolic effects, patient risk profiles, and safety updates. Each of the four knowledge gaps would potentially influence clinical decision-making in the same manner, leading clinicians to overestimate the benefits of a drug in relation to its metabolic risks. Therefore, future efforts to detail cross-specialty practitioner knowledge of metabolic drug effects and to initiate education strategies to bolster knowledge could meaningfully contribute to obesity prevention.

Availability of supporting data

The full instrument (CME questions from all activities) is available at the journal's request.

Neither author has financial or non-financial competing interests.

IK developed the CME modules in collaboration with colleagues acknowledged elsewhere in the manuscript and published CME materials. She designed the study in collaboration with the Office of Pediatric Therapeutics and drafted the manuscript. GW participated in the design of the study and performed the statistical analysis. Both authors read and approved the final manuscript.
The findings, taken with the population prevalence of obesity, the emerging treatment options, and the central role of the primary care practitioner, suggest a significant prevention opportunity. Pediatricians' knowledge base of adverse metabolic drug effects appears comparable to that of their counterparts in adult medicine. Regardless of medical specialty, practitioners participating in the CME programs showed low knowledge on specific questions pertaining to drug indications, adverse metabolic effects, patient risk profiles, and safety updates. Each of the four knowledge gaps would potentially influence clinical decision-making in the same manner, leading clinicians to overestimate the benefits of a drug in relation to its metabolic risks. Therefore, future efforts to detail cross-specialty practitioner knowledge of metabolic drug effects and to initiate education strategies to bolster knowledge could meaningfully contribute to obesity prevention.

Availability of supporting data

The full instrument (CME questions from all activities) is available at the journal's request.

Neither author has financial or non-financial competing interests.

IK developed the CME modules in collaboration with colleagues acknowledged elsewhere in the manuscript and published the CME materials. She designed the study in collaboration with the Office of Pediatric Therapeutics and drafted the manuscript. GW participated in the design of the study and performed the statistical analysis. Both authors read and approved the final manuscript.

IK is a physician nutrition specialist board-certified in preventive medicine and public health. She is the editor of Advancing Medicine with Food and Nutrients, Second Edition (CRC Press, December 2012) and serves on the faculty of the Johns Hopkins Bloomberg School of Public Health. As an inaugural FDA Commissioner's Fellow she worked within the Office of Pediatric Therapeutics on nutrition-related issues, which gave rise to this research collaboration.

We thank Anne Myers for her background work on pediatrician focus groups; Lon Osmond, Executive Director, Audio-Digest Foundation, for his collaboration; Michelle Surrichio of the American College of Preventive Medicine for her technical assistance; and Rachel Barr of the Maryland Academy of Family Physicians for her conference preparations. This manuscript is a professional contribution developed by its authors. No endorsement by the FDA is intended or should be inferred.
(Medical Xpress)—Despite early optimistic studies, the promise of curing neurological conditions using transplants remains unfulfilled. While researchers have exhaustively cataloged the different types of cells in the brain, as well as the largely biochemical issues underlying common diseases, neural repair shops are still a ways off. Fortunately, significant progress is being made toward identifying the broader operant principles that might bear on any one disease work-around. A review just published in Science focuses on recent work on transplanting interneurons—a diverse family of cells united by their mutual love of inhibition and their local loyalty. The UCLA-based authors reach the conclusion that the fate of transplanted neurons ultimately depends less on the influences of the new host environment, and more on the early upbringing of the cells within the donor embryo.

Interneurons are born in the lateral ganglionic eminence (LGE) and the medial ganglionic eminence (MGE). Those that eventually colonize the cortex need to migrate a fairly long distance tangentially to get there, but once they arrive, they prefer to extend only local connections. By comparison, the excitatory pyramidal cells, which end up sending long-range projections, are born within the cortex itself. Researchers have found that only those interneurons from the MGE have what it takes to make long migratory journeys. LGE neurons, when transplanted into postnatal host brains, remain in tight clumps, whereas those from the MGE disperse throughout the cortex.

More importantly, it is now appreciated that transplanted interneurons closely follow cell-intrinsic programs rather than relying on host-specific cues to govern their survival and differentiation. The once popular conception of a life-and-death competition for neurotrophic factors, if at play here at all, seems to be a minor influence. Herds of transplanted neurons are still thinned in the host, for example, but those that die off do so asynchronously from the endogenous interneurons, and in line with their own internal programming. Careful cell accounting has shown that after transplantation, the total number of interneurons within the host tissue greatly exceeds the nominal amount normally found.

An excess balance of inhibitory cells has been seen as desirable from the point of view of treating mismatches in excitability of the kind found in diseases like epilepsy. It is important to realize, however, that binary electrical tallies represent only one aspect of neuronal function. Furthermore, in epilepsy, we might more generally view excitability as just the readily observable tip of an underlying metabolic imbalance. Nonetheless, suppression of spontaneous seizures in a mouse channelopathy model (mutant for a potassium channel known as Kv1.1) has been achieved with interneuron transplants. In yet another case of nomenclature gone wild, this particular mutant has been associated with human interneuronopathies leading to severe tonic-clonic seizures.

Synapse constitution—the number, type, or strength of synapses—can be tough to quantify objectively and exactly. There have been indications that transplanted interneurons make three times the number of synapses of native interneurons, but at only one-third the strength that would normally be expected. The keyword here is "strong." Any combination of synaptic capabilities can be involved in this idea; things like electrical amplitude, reliability, or persistence at a high rate of firing all come into play in the notion of strength.
The well-known and idiosyncratic interneuron known as the chandelier cell, for example, controls the axon initial segment: it commands access to this highly coveted spot, where it effectively exercises complete veto over its associated pyramidal cell.

To increase the efficiency and fidelity of harvesting exact precursor cell subtypes, techniques like fluorescence-activated cell sorting (FACS) have been used in sample processing. Fluorescent proteins under the control of forebrain- or MGE-specific promoters can be used to select individual cell types for later transplantation. To bias cells into somatostatin- or parvalbumin-expressing populations, for example, wild-type MGE cells can be exposed to sonic hedgehog or other fate-ruling factors.

Transplanting different kinds of cells together will probably be necessary to properly treat many diseases. Even non-neural cells like astrocytes and microglia may be critical to have in the mix. Exciting results obtained in mice last year indicate that these cell types can thrive not just when transplanted across individuals but across species. The goal for the present time is to define good protocols for integrating one cell type first. Nimble cells that migrate well within the host, yet confine their influence to the local environment, might be the most sensible place to start.

More information: Interneurons from Embryonic Development to Cell-Based Therapy, Science 11 April 2014: Vol. 344 no. 6180. DOI: 10.1126/science.1240622
Over the past decade, researchers have shifted their focus from documenting health care disparities to identifying solutions to close the gap in care. Finding Answers: Disparities Research for Change, a national program of the Robert Wood Johnson Foundation, is charged with identifying promising interventions to reduce disparities. Based on our work conducting systematic reviews of the literature, evaluating promising practices, and providing technical assistance to health care organizations, we present a roadmap for reducing racial and ethnic disparities in care. The roadmap outlines a dynamic process in which individual interventions are just one part. It highlights that organizations and providers need to take responsibility for reducing disparities, establish a general infrastructure and culture to improve quality, and integrate targeted disparities interventions into quality improvement efforts. Additionally, we summarize the major lessons learned through the Finding Answers program. We share best practices for implementing disparities interventions and synthesize cross-cutting themes from 12 systematic reviews of the literature. Our research shows that promising interventions frequently are culturally tailored to meet patients' needs, employ multidisciplinary teams of care providers, and target multiple leverage points along a patient's pathway of care. Health education that uses interactive techniques to deliver skills training appears to be more effective than traditional didactic approaches. Furthermore, patient navigation and engaging family and community members in the health care process may improve outcomes for minority patients. We anticipate that the roadmap and best practices will be useful for organizations, policymakers, and researchers striving to provide high-quality equitable care. The online version of this article (doi:10.1007/s11606-012-2082-9) contains supplementary material, which is available to authorized users.

In 2005, the Robert Wood Johnson Foundation (RWJF) created Finding Answers: Disparities Research for Change (www.solvingdisparities.org) as part of its portfolio of initiatives to reduce racial and ethnic disparities in health care.1 RWJF charged Finding Answers with three major functions: administer grants to evaluate interventions to reduce racial and ethnic disparities in care, perform systematic reviews of the literature to determine what works for reducing disparities, and disseminate these findings nationally. Over the past seven years, Finding Answers has funded 33 research projects and performed 12 systematic literature reviews, including the five papers in this symposium.2–6 We are now beginning to leverage this research base to provide technical assistance to organizations that are implementing disparities reduction interventions, such as those participating in RWJF's Aligning Forces for Quality program.7 This paper summarizes the major lessons learned from the systematic reviews and provides a disparities reduction framework. Building on our prior work,8–10 we present a roadmap for organizations seeking to reduce racial and ethnic disparities in health care. This roadmap may be tailored for use across diverse health care settings, such as private practices, managed care organizations, academic medical centers, public health departments, and federally qualified health centers.
Specifically, we outline the major steps of this roadmap, summarized in Table 1.

The five systematic reviews in the present symposium examined interventions to improve minority health and potentially reduce disparities in asthma, HIV, colorectal cancer, prostate cancer, and cervical cancer.2–6 While many valuable ideas to address racial and ethnic health disparities are being pursued outside of the healthcare system, Finding Answers focuses specifically on what can be accomplished once regular access to healthcare services is achieved. Thus, the reviews focused on interventions that occur in or have a sustained linkage to a healthcare delivery setting; programs that were strictly community-based were outside the scope of the project. Additionally, the reviews examined racial and ethnic disparities in care and improvements in minority health, rather than geographic, socioeconomic, or other disparities. For a description of search strategies employed in these reviews, see the technical web appendix, which can be accessed online (Electronic Supplementary Material).

Each review identified promising practices to improve minority health within the healthcare setting. The asthma paper found that educational interventions were most common, with culturally tailored, skills-based education showing promise.5 Outpatient support, as well as education for inpatient and emergency department patients, was effective. Similarly, the HIV review noted that interactive, skills-based instruction was more likely to be effective than didactic educational approaches for changing sexual health behavior.3 The paper identified a dearth of interventions that target minority men who have sex with men. The colorectal cancer review found that patient education and navigation were the most common interventions and that those with intense patient contact (e.g., in person or by telephone) were the most likely to increase screening rates.4 The colorectal cancer review identified no articles that described interventions to reduce disparities in post-screening follow-up, treatment, survivorship, or end-of-life care. Based on low to moderate evidence, the cervical cancer review reported that navigation combined with either education delivered by lay health educators or telephone support can increase the rate of screening for cervical cancer among minority populations.2 Telephone counseling might also increase the diagnosis and treatment of premalignant lesions of the cervix for minority women. The prostate cancer review focused on the importance of informed decision making for addressing prostate cancer among racial and ethnic minority men.6 Educational programs were the most effective intervention for improving knowledge among screening-eligible minority men. Cognitive behavioral strategies improved quality of life for minority men treated for localized prostate cancer. However, more research is needed about interventions to improve informed decision making and quality of life among minority men with prostate cancer.

We looked across these reviews and Finding Answers' previous research,11–17 and identified several cross-cutting themes. Our findings showed that promising interventions frequently were multi-factorial, targeting multiple leverage points along a patient's pathway of care. Culturally tailored interventions and those that employed a multi-disciplinary team of care providers also tended to be effective.
Additionally, we found that education using interactive methods to deliver skills training was more effective than traditional, didactic approaches in which the patient was a passive learner. Patient navigation and interventions that actively involved family and community members in patient care showed promise for improving minority health outcomes. Finally, the majority of interventions targeted changing the knowledge and behavior of patients, generally with some form of education. Interventions directed at providers, microsystems, organizations, communities, and policies were far less common, thus representing an opportunity for future research.

Table 1 summarizes the major steps health care organizations need to undertake to reduce disparities. Past efforts have focused on Step 1 (e.g. collecting performance data stratified by race, ethnicity, and language) or Step 4 (designing a specific intervention). Our roadmap highlights that these are crucial steps, but will have limited impact unless the other steps are addressed. Effective implementation and long-term sustainability require attention to all six steps.

When health care organizations and providers realize there are disparities in their own practices,18 they become motivated to reduce them.19 Therefore, the Patient Protection and Affordable Care Act of 2010 makes the collection of performance data stratified by race, ethnicity, and language (REL) a priority.20 Similarly, RWJF's Aligning Forces for Quality Program initially focused its disparities efforts on the collection of REL data in different communities. The Institute of Medicine (IOM) recently recommended methods to collect REL data,21 and groups such as the Health Research and Educational Trust (HRET) have developed toolkits to guide organizations in this effort.22 Besides race-stratified performance data, training in health disparity issues (e.g., through cultural competency training) may help providers identify and act on disparities in their own practices. However, while cultural competency training and stratified performance data may increase the readiness of providers and organizations to change their behavior,19 these interventions will need to be accompanied by more intensive approaches to ameliorate disparities. Sequist et al. found that cultural competency training and performance reports of the quality of diabetes care stratified by race and ethnicity increased providers' awareness of disparities, but did not improve clinical outcomes.23 Therefore, our roadmap for reducing disparities highlights the importance of combining REL data collection with interventions targeted towards specific populations and settings.

Interventions to reduce disparities will not get very far unless there is a basic quality improvement structure and process upon which to build interventions.24,25 Basic elements include a culture where quality is valued, creation of a quality improvement team comprised of all levels of staff, a process for quality improvement, goal setting and metrics, a local team champion, and support from top administrative and clinical leaders. If robust quality improvement structures and processes do not exist, then they must be created and nurtured while disparities interventions are developed. For too long, disparities reduction and quality improvement have been two different worlds.
People generally thought about reducing disparities separately from efforts to improve quality, and oftentimes different people in an organization were responsible for implementing disparity and quality initiatives. A major development over the past decade is the increasing recognition that equity is a fundamental component of quality of care. Efforts to reduce disparities need to be mainstreamed into routine quality improvement efforts rather than being marginalized.26 That is, we need to think about the needs of the vulnerable patients we serve as we design interventions to improve care in our organizations, and address those needs as part of every quality improvement initiative. The Institute of Medicine’s Crossing the Quality Chasm report stated that equity was one of six components of quality,27 and the IOM’s 2010 report Future Directions for the National Healthcare Quality and Disparities Reports highlighted equity further by elevating it to a cross-cutting dimension that intersects with all components of quality care.28 Major health care organizations have instituted initiatives that promote the integration of equity into quality efforts including the American Board of Internal Medicine (Disparities module as part of the recertification process), American College of Cardiology (Coalition to Reduce Racial and Ethnic Disparities in Cardiovascular Disease Outcomes [CREDO] initiative),29 American Medical Association (Commission to End Health Care Disparities), American Hospital Association (Race, ethnicity, and language data collection),22 Joint Commission (Advancing Effective Communication, Cultural Competence, and Patient- and Family-Centered Care: a Roadmap for Hospitals),30 and National Quality Forum (Healthcare Disparities and Cultural Competency Consensus Standards Development). For many health care organizations and providers, this integration of equity and quality represents a fundamental change from generic quality improvement efforts that improve only the general system of care, to interventions that improve the system of care and are targeted to specific priority populations and settings. While several themes have emerged regarding successful interventions to reduce health care disparities based on our systematic reviews and grantees, solutions must be individualized to specific contexts, patient populations, and organizational settings.31 For example, solutions for reducing diabetes disparities for African-Americans in Chicago may differ from the answers for African-Americans in the Mississippi Delta. We recommend determining the root causes of disparities in the health care organization or provider’s patient population and designing interventions based on a conceptual model that targets six levels of influence: patient, provider, microsystem, organization, community, and policy (Table 2).8,9 Each level represents a different leverage point that can be addressed to reduce disparities. The relative importance of these levels may vary across diverse organizations and patient populations. Specific intervention strategies can then be developed to target different levels of influence. Table 3 offers an overview of strategies identified through the review of approximately 400 disparities intervention studies, including the 33 Finding Answers projects and 12 systematic literature reviews. Common intervention strategies include delivering education and training, restructuring the care team, and increasing patient access to testing and screening. 
About half of the interventions targeted only one of the levels of influence described above; most efforts were directed at patients in the form of education or training. Research evaluating pay-for-performance, on the other hand, was scant and requires further attention, especially given current interest in incentive-based programs. Going forward, Finding Answers aims to categorize each of the approximately 400 studies by level of influence and strategy, and to identify which combinations are promising for disparities reduction. Organizations can find practical resources and promising intervention strategies on the Finding Answers website (www.solvingdisparities.org) or the Agency for Healthcare Research and Quality (AHRQ) Health Care Innovations Exchange (www.innovations.ahrq.gov). Systematic reviews such as those by Finding Answers and forthcoming ones from the AHRQ Evidence-Based Practice Center Program and the Veterans Administration can inform what types of interventions are most appropriate in different situations. In addition, organizations can learn about successful projects from peers through learning collaboratives,24 site visits, case studies, and webinars. While there is no silver bullet to reduce disparities, successful interventions reveal important themes. As previously noted, we looked across 12 systematic reviews of the literature and identified promising practices that can inform the design of future disparities interventions.2–6,11–17 These include culturally tailoring programs to meet patients’ needs, patient navigation, and engaging multidisciplinary teams of care providers in intervention delivery. Effective interventions frequently target multiple leverage points along a patient’s pathway of care and actively involve families and community members in the care process. Additionally, successful health education programs often incorporate interactive, skills-based training for minority patients. The National Institutes of Health recently held its fifth annual conference on the science of dissemination and implementation to promote further research in this field, create opportunities for peer-to-peer learning, and showcase available models and tools. One such model is the Consolidated Framework for Implementation Research (CFIR), for which Damschroder et al. reviewed conceptual models of relevant factors in implementing a quality improvement intervention and synthesized existing frameworks into a single overarching model.32 The CFIR covers five domains: intervention characteristics (e.g. relative advantage, adaptability, complexity, cost), outer setting (e.g. patient needs and resources, external policy and incentives), inner setting (e.g. culture, implementation climate, readiness for implementation), characteristics of the individuals involved (e.g. knowledge and beliefs about the intervention, self-efficacy, stage of change), and the process of implementation (e.g. planning, engaging, executing, evaluating). Too often organizations focus on the content of an intervention without planning its implementation in sufficient detail. A model such as CFIR supplies a checklist of factors to consider in implementing an intervention to reduce disparities. Through work with our 33 grantees, we have developed a series of best practices for implementing interventions to reduce disparities. 
These lessons were pulled from detailed qualitative data gathered through the Finding Answers program, and represent perspectives from organization leadership, providers, administrators, and front-line staff. We found common implementation challenges and solutions across health care settings. Table 4 summarizes best practices for disparities reduction efforts, provides the rationale and expected outcomes, and offers recommended strategies for delivering a high-quality equity initiative. Implementation is an iterative process, and organizations are unlikely to get the perfect solution on their first attempt. Thus, evaluation of the intervention and adjustments to the program based on performance data stratified by race, ethnicity, and language are integral parts of the implementation process. Setting realistic goals is essential to accurately assess program effectiveness. Processes of care (e.g. measurement of hemoglobin A1c in patients with diabetes) generally improve more rapidly than patient outcomes (e.g. actual hemoglobin A1c value), and may therefore be better markers of short-term disparities reduction success, while outcomes could be longer-term targets.

Health care organizations, administrative leaders, and providers need to proactively plan for the sustainability of the intervention. Sustainability is dependent upon institutionalizing the intervention and creating feasible financial models. Too often interventions are dependent upon the initial champion and first burst of enthusiasm. If that champion leaves the organization or if staff tire after the early stages of implementation, then the disparities initiative is at risk for discontinuation. Institutionalization requires promoting an organizational culture that values equity, creating incentives to continue the effort, whether financial and/or non-financial, and weaving the intervention into the fabric of everyday operations so that it is part of routine care as opposed to a new add-on (e.g. Step 3 in Table 1). In the long term, however, interventions must be financially viable.

The business case for reducing disparities is evolving and must be viewed from both societal and individual organization/provider perspectives.33–35 From a societal perspective, the business case for reducing disparities centers on direct medical costs, indirect costs, and the creation of a healthy national workforce in an increasingly competitive global economy. LaVeist et al. estimate that disparities for minorities cost the United States $229 billion in direct medical expenditures and $1 trillion in indirect costs between 2003 and 2006.36 America's demographics are becoming progressively more diverse. The United States Census Bureau estimates that by 2050, the Hispanic population will reach 30%, the black population 13%, and the Asian population 8%.37 Thus, from global and national economic perspectives, disparities reduction will become increasingly important if we are to have a healthy workforce that can successfully compete in the international marketplace and support the rapidly growing non-working aging population on the Social Security and Medicare entitlement programs.

From the perspective of the individual health care organization or provider, the immediate incentives are more complex. Integrated care delivery systems have an incentive to reduce disparities to decrease costly emergency department visits and hospitalizations.
Large insurers are incentivized to provide high quality care for everyone to be more competitive in marketing their products to employers with increasingly diverse workforces. However, outpatient clinics and providers in the current, predominantly fee-for-service world, especially those serving the uninsured and underinsured, frequently do not have clear incentives to reduce disparities, since the money saved from the prevented emergency department visit or hospitalization does not accrue to them.34 Currently, it is difficult to accurately predict the results of health care reform and efforts to contain the Medicare and Medicaid budgets, but several trends indicate that organizations would be wise to integrate disparities reduction into their ongoing quality improvement initiatives. Major national groups such as the Department of Health and Human Services (HHS), Agency for Healthcare Research and Quality, Centers for Disease Control (CDC), Centers for Medicare and Medicaid Services, and Institute of Medicine have consistently stressed the importance of reducing health care disparities and using quality improvement as a major tool to accomplish this goal.28,38–42 The Affordable Care Act emphasizes collection of race, ethnicity, and language data.20 Private demonstration projects, such as the Robert Wood Johnson Foundation Aligning Forces for Quality Program,7 aim for multistakeholder coalitions of providers, payers, health care organizations, and consumers to improve quality and reduce disparities on regional levels. Intense policy attention has been devoted to accountable care organizations,43 the patient-centered medical home,44 and bundled payments.45 These organizational structures and financing mechanisms emphasize coordinated, population-based care that may reduce disparities. Reducing racial and ethnic disparities in care is the right thing to do for patients, and, from a business perspective, health care organizations put themselves at risk if they do not prepare for policy and reimbursement changes that encourage reduction of disparities. We believe that health care organizations and providers would be imprudent if they did not plan for such payment and coverage changes.

As outlined in our roadmap, it is critical to create an organizational culture and infrastructure for improving quality and equity. Organizations must design, implement, and sustain interventions based on the specific causes of disparities and their unique institutional environments and patient needs. To be most effective, all of these elements eventually need to be addressed;24 however, we do not want to encourage paralysis for those who might perceive a daunting set of obstacles to overcome. Instead, our experience has been that it is useful for an organization to start working on disparities by targeting whatever step or action feels right to them and is thus a priority.46 Eventually the other steps will need to be addressed, but reducing disparities is often a dynamic process that evolves over time. While more disparities intervention research is needed, we have learned much over the past 10 years about which approaches are likely to succeed. The time for action is now.

We would like to thank Melissa R. Partin, PhD, who served as the JGIM Deputy Editor for the six manuscripts in this Special Symposium: Interventions to Reduce Racial and Ethnic Disparities in Health Care. Dr. Partin provided valuable advice and feedback throughout this project. Marshall H. Chin, MD, MPH, and Amanda R.
Clarke, MPH, served as the Robert Wood Johnson Foundation Finding Answers: Disparities Research for Change Systematic Review Leadership Team that oversaw the teams writing the articles in this symposium. Support for this publication was provided by the Robert Wood Johnson Foundation Finding Answers: Disparities Research for Change Program. The Robert Wood Johnson Foundation had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, approval, or decision to submit the manuscript for publication. Presented in part at the Society of General Internal Medicine Midwest Regional Meeting, September 23, 2010, Chicago, Illinois; the Society of General Internal Medicine Annual Meeting, May 5, 2011, Phoenix, Arizona; the American Public Health Association Annual Meeting, November 1, 2011, Washington, D.C.; and the Institute for Healthcare Improvement Annual National Forum, December 4, 2011, Orlando, Florida. The authors report no conflicts of interest with this work. Dr. Chin was also supported by a National Institute of Diabetes and Digestive and Kidney Diseases Midcareer Investigator Award in Patient-Oriented Research (K24 DK071933), Diabetes Research and Training Center (P60 DK20595), and Chicago Center for Diabetes Translation Research (P30 DK092949).
Reviewed April 2006

What is the official name of the MITF gene?

The official name of this gene is "microphthalmia-associated transcription factor." MITF is the gene's official symbol. The MITF gene is also known by other names, listed below. Read more about gene names and symbols on the About page.

What is the normal function of the MITF gene?

The MITF gene provides instructions for making a protein called microphthalmia-associated transcription factor. This protein plays a role in the development, survival, and function of certain types of cells. To carry out this role, the protein attaches to specific areas of DNA and helps control the activity of particular genes. On the basis of this action, the protein is called a transcription factor.

Microphthalmia-associated transcription factor helps control the development and function of pigment-producing cells called melanocytes. Within these cells, this protein also controls production of the pigment melanin, which contributes to hair, eye, and skin color. Melanocytes are also found in the inner ear and play an important role in hearing. Additionally, microphthalmia-associated transcription factor regulates the development of specialized cells in the eye called retinal pigment epithelial cells. These cells nourish the retina, the part of the eye that detects light and color. Some research indicates that microphthalmia-associated transcription factor also regulates the development of cells that break down and remove bone (osteoclasts) and cells that play a role in allergic reactions (mast cells).

Microphthalmia-associated transcription factor has a particular structure with three critically important regions. One region, known as the basic motif, binds to specific areas of DNA. Other regions, called the helix-loop-helix motif and the leucine-zipper motif, are critical for protein interactions. These motifs allow molecules of microphthalmia-associated transcription factor to interact with each other or with other proteins that have a similar structure. These interactions produce a two-protein unit (dimer) that functions as a transcription factor.

Does the MITF gene share characteristics with other genes?

The MITF gene belongs to a family of genes called bHLH (basic helix-loop-helix). A gene family is a group of genes that share important characteristics. Classifying individual genes into families helps researchers describe how genes are related to each other. For more information, see What are gene families? in the Handbook.

How are changes in the MITF gene related to health conditions?

Where is the MITF gene located?

Cytogenetic Location: 3p14.2-p14.1
Molecular Location on chromosome 3: base pairs 69,739,435 to 69,968,337

The MITF gene is located on the short (p) arm of chromosome 3 between positions 14.2 and 14.1. More precisely, the MITF gene is located from base pair 69,739,435 to base pair 69,968,337 on chromosome 3. See How do geneticists indicate the location of a gene? in the Handbook.
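As a small worked example of these coordinates, the size of the genomic region occupied by the gene follows directly from the start and end base pairs quoted above (counting both endpoints as inclusive):

```python
# Genomic coordinates of MITF on chromosome 3, as listed above.
start_bp = 69_739_435
end_bp = 69_968_337

# Inclusive span: both endpoint base pairs belong to the gene region.
length_bp = end_bp - start_bp + 1
print(f"MITF region spans {length_bp:,} base pairs (~{length_bp / 1000:.0f} kb)")
# -> MITF region spans 228,903 base pairs (~229 kb)
```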
Where can I find additional information about MITF?

You and your healthcare professional may find the following resources about MITF helpful. You may also be interested in these resources, which are designed for genetics professionals and researchers.

What other names do people use for the MITF gene or gene products?

See How are genetic conditions and genes named? in the Handbook.

Where can I find general information about genes?

The Handbook provides basic information about genetics in clear language. These links provide additional genetics resources that may be useful.

What glossary definitions help with understanding MITF?

acids; amino acid; dimer; DNA; epithelial; gene; hypopigmentation; leucine; mast cells; melanin; melanocytes; motif; pigment; pigmentation; protein; retina; syndrome; transcription; transcription factor

You may find definitions for these and many other terms in the Genetics Home Reference Glossary. See also Understanding Medical Terminology.

References
Psychiatry has begun the laborious effort of preparing the DSM-V, the new iteration of its diagnostic manual. In so doing, it once again wrestles with the task set by Carl Linnaeus, to "cleave nature at its joints." However, these "joints," the boundaries between psychiatric disorders, such as that between bipolar disorder and schizophrenia, are far from clear. Prior versions of the DSM followed the path outlined by Emil Kraepelin in separating these disorders into distinct categories. Yet we now know that symptoms of bipolar disorder may be seen in patients with schizophrenia, and the reverse is true as well. Further, our certainty about the boundary of these disorders is undermined by growing evidence that both schizophrenia and bipolar disorder emerge, in part, from the cumulative impact of a large number of risk genes, each of which conveys a relatively small component of the vulnerability to these disorders. And since many versions of these genes appear to contribute vulnerability to both disorders, the study of common gene variations has raised the possibility that there may be diagnostic, prognostic, and therapeutic meaning embedded in the high degree of variability in the clinical presentations of patients with each disorder. In addition, many symptoms of schizophrenia and bipolar disorder are traits that are present in the healthy population but are more exaggerated in patient populations. To borrow from Einstein, who struggled to reconcile the wave and particle features of light, our psychiatric diagnoses behave like waves (i.e., spectra of clinical presentations) and particles (traditional categorical diagnoses). Although new genetic approaches, such as studies of microdeletions, microinsertions, and microtranslocations of the genome, may revise our current thinking, the wave/particle approach to psychiatric diagnosis places a premium on understanding the "real" clustering of patients into subtypes as opposed to groups created to correspond to the current DSM-IV.

Latent class analysis is one statistical approach for estimating the clustering of subjects into groups. In their study of 270 Irish families, published in the July 15th issue of Biological Psychiatry, Fanous and colleagues conducted this type of analysis, with subjects clustered into the following groups: bipolar, schizoaffective, mania, schizomania, deficit syndrome, and core schizophrenia. When they divided the affected individuals in the study using this approach, they found four chromosomal regions linked to the risk for these syndromes that were not implicated when subjects were categorized according to DSM-IV diagnoses. Dr. Fanous notes that this finding "suggests that schizophrenia as we currently define it may in fact represent more than one genetic subtype, or disease process."

According to John H. Krystal, M.D., Editor of Biological Psychiatry and affiliated with both Yale University School of Medicine and the VA Connecticut Healthcare System: "Their findings advance the hypothesis that the variability in the clinical presentation of patients diagnosed using DSM-IV categories is meaningful, providing information that may be useful as DSM-V is prepared. However, we do not yet know whether the categories generated by this latent class analysis will generalize to other populations." This paper highlights an important aspect of the complexity of establishing valid psychiatric diagnoses using a framework adopted from traditional categorical models.

- Fanous et al.
Novel Linkage to Chromosome 20p Using Latent Classes of Psychotic Illness in 270 Irish High-Density Families. Biological Psychiatry, 2008; 64 (2): 121. DOI: 10.1016/j.biopsych.2007.11.023
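To make the latent class idea concrete, the sketch below fits a two-class latent class model to toy binary symptom data with the EM algorithm. Everything here is a hypothetical illustration of the statistical technique, not the Fanous et al. analysis, which used six classes and family-based diagnostic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 300 subjects x 6 binary symptom indicators, generated from
# two hypothetical latent classes with distinct symptom profiles.
true_p = np.array([[0.9, 0.8, 0.7, 0.2, 0.1, 0.2],
                   [0.2, 0.1, 0.3, 0.8, 0.9, 0.7]])
z_true = rng.integers(0, 2, size=300)
X = (rng.random((300, 6)) < true_p[z_true]).astype(float)

# EM for a latent class model: items are independent Bernoulli given class.
K = 2
pi = np.full(K, 1.0 / K)                # class mixing weights
p = rng.uniform(0.3, 0.7, size=(K, 6))  # per-class item probabilities

for _ in range(200):
    # E-step: posterior probability of each class for each subject
    log_post = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(pi)
    log_post -= log_post.max(axis=1, keepdims=True)
    resp = np.exp(log_post)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights and item probabilities
    pi = resp.mean(axis=0)
    p = (resp.T @ X) / resp.sum(axis=0)[:, None]
    p = p.clip(1e-6, 1 - 1e-6)

print("estimated class weights:", pi.round(2))
print("estimated symptom probabilities per class:\n", p.round(2))
```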
Interview conducted by April Cashin-Garbutt, BA Hons (Cantab)

What is LDL cholesterol? What blood level of LDL cholesterol is considered optimal and why are high levels of LDL cholesterol a key marker of death risk from heart disease?

Cholesterol is a lipid that is both produced in the liver and gained through food intake. Some amount of cholesterol, which is transported through the bloodstream in lipoproteins, is essential for normal body function. There are different types of lipids or fats, including low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, and triglycerides. While HDL ("good") cholesterol is carried from parts of the body to the liver, which removes the cholesterol from the body, high levels of LDL ("bad") cholesterol remain in the bloodstream and can cause arterial clogging, increasing the risk of stroke and heart disease.

Blood lipid levels are the primary biomarkers for cardiovascular disease, which accounts for one in every three deaths in America. Every 10 mg/dL decline in LDL cholesterol is associated with approximately a 5-13% decline in major vascular disease events, such as strokes, and in mortality. LDL cholesterol levels below 100 mg/dL are considered optimal by the American Heart Association, while levels of 100-129 mg/dL are considered near or above optimal, 130-159 mg/dL borderline high, 160-189 mg/dL high, and 190 mg/dL or above very high. To support heart health, it is very important to maintain optimal LDL cholesterol levels. Treatments typically include lifestyle modification and may include therapy with lipid-lowering medications such as statins.
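Expressed as code, the AHA categories just described map onto simple thresholds. The following sketch is illustrative only; the function name and example values are assumptions, with the cut-points taken from the categories quoted above.

```python
def classify_ldl(ldl_mg_dl: float) -> str:
    """Map an LDL cholesterol value (mg/dL) to its AHA category."""
    if ldl_mg_dl < 100:
        return "optimal"
    if ldl_mg_dl < 130:
        return "near or above optimal"
    if ldl_mg_dl < 160:
        return "borderline high"
    if ldl_mg_dl < 190:
        return "high"
    return "very high"

# Hypothetical example values spanning the categories:
for value in (85, 117, 142, 171, 205):
    print(f"{value} mg/dL -> {classify_ldl(value)}")
```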
How have LDL cholesterol blood levels changed over the past several decades?

Average blood cholesterol values, the primary cardiovascular disease biomarker, have declined in the United States since at least 1960. Results of three National Health and Nutrition Examination Surveys (NHANES) of nearly 40,000 patients for the years 1988 to 2010 demonstrated that LDL cholesterol levels have declined in the United States while the use of lipid-lowering medications has increased. These trends are also reflected in the mortality rates attributable to cardiovascular disease, which declined by approximately 60% from 1970 through 2000, and by 30% from 2000 through 2009. These improvements are due largely to increased use of evidence-based medical therapies, such as statins, which lower lipid levels, as well as lifestyle changes, such as diet and exercise. Based on these factors, the American Heart Association (AHA) 2020 Strategic Impact Goals target a 20% relative improvement in overall cardiovascular health for all Americans. But as the latest Quest Diagnostics Health Trends study suggests, improvements in cholesterol levels may have stalled.

When did it come to your attention that the declines in LDL cholesterol blood levels had come to an end?

We were not aware of this pattern until we produced our latest Quest Diagnostics Health Trends study. These are studies based on analysis of the company's diagnostic data. Our study is the first nationally representative analysis to show that improvements in the United States in LDL cholesterol blood levels, a key marker of death risk from heart disease, abruptly ended in 2008, and may have stalled since. Specifically, we found a 13% decline in the annual mean LDL cholesterol level of the study population over the 11-year period, similar to the NHANES data. However, we also found the decline ended in 2008 and stalled between 2009 and 2011, the last year we studied. The peer-reviewed, open access journal PLOS ONE published the study in May 2013.

What sparked researchers at Quest Diagnostics to investigate this sudden end to LDL cholesterol blood level declines?

A team of researchers at Quest Diagnostics was inspired to perform the study after NHANES published its data showing declining blood cholesterol values from 1999 through 2010. As we began our analysis, we had no pre-existing theories regarding trends in LDL cholesterol levels; in fact, we assumed we might find a continuation of the same trends that had occurred over the last fifty years. The finding that LDL cholesterol levels have plateaued since 2008 is novel.

What did the study involve?

Our study examined de-identified low-density lipoprotein blood-serum cholesterol test results of nearly 105 million individual adult patients of Quest Diagnostics, of both genders, in all 50 states and the District of Columbia from 2001-2011. The study is the largest of LDL cholesterol levels in an American population, and the first large-scale analysis to include data from the recent years 2009-2011. Other studies that have examined population trends in LDL cholesterol have been constrained by smaller populations, shorter study periods, and smaller geographical coverage. Our study reported data annually, whereas most recently published studies, such as the NHANES research, report results in time periods that cover multiple years, which may mask the plateau observed in our study.

In addition to finding that LDL levels stalled, did your study provide any other notable insights?

Yes, we found differences between men and women. Specifically, we found a slightly greater decline in LDL cholesterol levels among men compared to women. These differences may reflect meaningful differences in the prescription rate and effectiveness of lipid-lowering interventions, including statins and lifestyle changes, between genders. The differences may also be due in part to under-appreciation of heart disease risk in women. Medical understanding of differences in heart disease risks by gender is relatively new. For instance, female-specific American Heart Association guidelines for women were introduced only in 1999. More investigation is needed to understand the reasons for the gender differences.

What hypotheses were put forward as the reasons behind this trend?

It's reasonable to hypothesize that the economic recession, which began at about the same time that LDL cholesterol values flattened in our study, possibly played a role in the plateau of LDL cholesterol levels. Patients dealing with financial constraints may have been less inclined to visit their physician or to use their medications at full dose, limiting access to and effectiveness of treatment. Individuals may also have experienced changes in stress levels, diet, sleep, and other behaviors due to the poor economy, which in turn may have adversely impacted lipids. It's also possible that statin users in the study may have reached the maximum therapeutic-threshold level, or that increases in obesity prevalence or other co-morbid factors during the 11 years of the study period contributed to the LDL cholesterol plateau. Analysis of these theories falls outside the purview of our study, but we believe they warrant additional investigation.

What can be done to reverse this trend?
We hope this new study will encourage additional population research to inform public health efforts. But we also believe the study should prompt individual patients to be vigilant about practicing healthy behaviors and following lipid-lowering treatment plans. The most important lesson to be gleaned from our study is that patients need to remain engaged in their health care and to communicate with their physicians. Given the high mortality rate from cardiovascular disease, this is especially important for heart health. If economic or other factors may affect a patient's ability to maintain a consistent treatment regimen, they should talk freely, honestly, and without embarrassment to their clinician regarding all possible options. Our hope is that physicians and patients will have more productive conversations about the importance of LDL control to cardiovascular health as a result of this study.

What are Quest Diagnostics' plans for the future?

Quest Diagnostics is focused on developing and offering diagnostic innovations along a continuum of care. We are particularly interested in diagnostics that can help prevent or arrest disease – that is, diagnostic services that can help identify risk factors for disease, thereby potentially helping physicians prevent its onset or detect it in early, treatable stages. Certainly, this Quest Diagnostics Health Trends study speaks to the need for the medical community and patients to be vigilant in taking steps to identify heart health risks before disease occurs. The prevention of disease is always the optimal outcome.

Where can readers find more information?

Please visit our website at www.QuestDiagnostics.com or access the study at www.QuestDiagnostics.com/HealthTrends

About Dr. Harvey Kaufman

Harvey W. Kaufman, M.D., is Senior Medical Director for Quest Diagnostics and the company's Medical Director for its General Health and Wellness business. He is also the principal medical investigator for Quest Diagnostics Health Trends studies, and has served in a variety of roles for the company for more than 20 years. Dr. Kaufman graduated from the Massachusetts Institute of Technology (S.B. and S.M.), Boston University School of Medicine (M.D.), and New York University's Leonard N. Stern School of Business (M.B.A. with Distinction). Dr. Kaufman is board certified in Anatomic and Clinical Pathology and Chemical Pathology. He serves on various national and local organizations, including the Quest Diagnostics Foundation.
The thalassemias are among the most common genetic disorders worldwide, occurring more frequently in the Mediterranean region. The aim of this study was to determine the frequency of sensorineural hearing loss in transfusion-dependent patients with β-thalassemia major in the south of Iran.

We performed a cross-sectional study of 308 patients with β-thalassemia major referred to the Thalassemia Center of Shiraz University of Medical Sciences between 2006 and 2007. The diagnosis of β-thalassemia major was based on clinical history, complete blood count, and hemoglobin electrophoresis. Clinical data such as serum ferritin level, deferoxamine (DFO) dose, mean daily dose of DFO (mg/kg), and audiometric variables were recorded.

Of the 308 enrolled patients, 15 were excluded; of the remaining 293, 283 (96.5%) had normal hearing and 10 (3.5%) had sensorineural hearing loss. There was no statistically significant difference between the two groups regarding mean age, weight, age at first blood transfusion, or age at first DFO infusion. We found the lowest reported incidence of sensorineural hearing loss in a large population of patients with thalassemia major who received DFO. We show that DFO is not ototoxic at a low dose. Considering all the related literature as a whole, there has been much critical misrepresentation about DFO ototoxicity.

The thalassemias are among the most common genetic disorders worldwide, occurring more frequently in the Mediterranean region, the Indian subcontinent, Southeast Asia, and West Africa. Some authors have found that about 20%–29% of cases suffer from sensorineural hearing loss (SNHL), and they have proposed that deferoxamine (DFO) gives rise to SNHL [1, 2]. However, others have challenged this idea, believing that the incidence of SNHL in β-thalassemia is not higher than in the general population [3, 4]. Injection of 600 mg DFO/kg per day for 30 days in guinea pigs increased auditory thresholds and caused loss of inner ear hair cells. In contrast, no effect on auditory function was found in studies of chinchillas and mice [6, 7]. In a study by Ryals et al. in the experimental quail, DFO was injected daily for 30 days at either 750 mg/kg or 300 mg/kg body weight, dosages above the limits considered potentially ototoxic in humans. The morphology of the supporting cells and hair cells was then studied. At the higher dose of deferoxamine, morphological changes were intensified and began to extend to hair cells. The authors concluded that DFO clearly has the potential to damage the avian inner ear; their study suggests that high doses and prolonged administration of the drug are required for this toxicity to be observable.

Because of these controversies and the large variability in the reported incidence of SNHL in these patients in the English literature, and because Fars province is an area of Iran with a high prevalence of the thalassemias, it is worth studying the prevalence of SNHL in a relatively large and adequate population of these patients, in order to inform hearing monitoring protocols for this population in the era of managed care.

We undertook a cross-sectional study of 308 patients with beta-thalassemia major referred to the Thalassemia Center of Shiraz University of Medical Sciences between 2006 and 2007. The study was approved by the Shiraz Medical Sciences ethics committee, and written consent was obtained before starting the study.
Exclusion criteria were: past history of ear operation (such as tympanomastoidectomy, myringotomy, or ventilation tube insertion); exposure to ototoxic medication other than DFO; preexisting hearing loss; and abnormal physical examination (such as chronic otitis media, otitis media with effusion, or myringosclerosis). The diagnosis of β-thalassemia major was based on clinical history, complete blood count, and hemoglobin electrophoresis. All enrolled patients underwent an otolaryngological visit with microscopic otoscopy. Clinical data such as serum ferritin level, DFO dose, mean daily dose of DFO (mg/kg), mean serum ferritin level over the last 3 years, volume of packed cell transfusion, mean hemoglobin, and hearing status were recorded in a specially formatted questionnaire. The variables used for evaluation of hearing status were pure tone air and bone conduction thresholds at 250–8000 Hz, the speech discrimination score (SDS), and the speech reception threshold (SRT). Normal hearing was defined as thresholds between 0 and 20 decibels (dB), and ototoxicity as a hearing loss of 20 dB or more at two or more adjacent frequencies.

Statistical analysis was performed using SPSS software, version 11.5. Descriptive statistics such as the mean, median, and standard deviation were used. The chi-square test was used to compare the patients with and without hearing loss. A P value of less than 0.05 was considered significant.

Of the total of 308 cases of beta-thalassemia major, 15 were excluded from the study because of abnormal otologic history or physical examination. Regarding otologic history, there were 4 cases with a history of ear operation (two with tympanomastoidectomy and one with myringotomy and ventilation tube), and one case had a history of hearing loss since 6 months of age. We detected 10 cases with abnormal physical examinations: 7 had otitis media with effusion, and 3 had myringosclerosis. Finally, 283 (96.5%) had normal hearing and 10 (3.5%) abnormal hearing. Of these 10 patients, 5 had bilateral symmetric hearing loss and 5 unilateral. There was no statistically significant difference between the two groups regarding mean age, weight, age at first blood transfusion, or age at first DFO infusion (Table 1). The prevalence of right-ear sensorineural hearing loss at frequencies of 250, 500, 1000, and 2000 Hz was zero; at 3000 Hz it was 0.3%; at 4000 Hz, 1%; and at 8000 Hz, 2.6%. In the left ear, the prevalence at 250, 500, and 1000 Hz was zero; at 2000, 3000, and 4000 Hz it was 0.3%; and at 8000 Hz, 2.3%. There were 7 patients with sensorineural hearing loss at only one frequency and 3 patients (1% of patients) with loss at 2 or more consecutive frequencies.
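For illustration, the ototoxicity criterion described in the methods (a loss of 20 dB or more at two or more adjacent test frequencies) can be written as a short function. This is a sketch only; the function name and the example audiogram values are hypothetical.

```python
FREQUENCIES = [250, 500, 1000, 2000, 3000, 4000, 8000]  # Hz, as tested in the study

def meets_ototoxicity_criterion(thresholds_db):
    """True if thresholds show a loss of 20 dB or more at two or
    more adjacent test frequencies (the study's definition)."""
    flags = [thresholds_db[f] >= 20 for f in FREQUENCIES]
    return any(a and b for a, b in zip(flags, flags[1:]))

# Hypothetical audiogram for one ear (dB HL at each frequency):
audiogram = {250: 10, 500: 10, 1000: 15, 2000: 15, 3000: 25, 4000: 30, 8000: 35}
print(meets_ototoxicity_criterion(audiogram))  # True: adjacent losses at 3000-8000 Hz
```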
Not much was known about the impact of β-thalassemia major and the toxicity of DFO therapy on the hearing organ in southern Iran. We found an incidence of only 3.5% SNHL in a large population of patients. In fact, the conflicting reports and great discrepancy among the reported incidences of hearing impairment have a long and rich history, appearing in the literature over the past 30 years.

The first reports came from the de Virgiliis group: in 1975 they reported high-tone sensorineural hearing loss in 14 of 20 patients with beta-thalassaemia major, and in 1979 the same group reported moderate unilateral or bilateral high-tone sensorineural deafness in 43 of 75 patients. Although all patients were receiving chelation therapy with DFO, de Virgiliis et al. did not consider this to be causative. Several authors have since studied the ototoxicity of DFO. Some studies have reported frequencies of SNHL between 7.4% and 33% [2, 12]. Despite the small sample sizes of most of these studies, no statistically significant differences were found between the affected and unaffected groups with respect to age, ferritin levels, length of time on DFO, DFO dose, peak DFO dose, or iron overload [2, 4, 12–14]. The therapeutic index suggested by Porter et al. also was not helpful in predicting risk for ototoxicity. The prevalence of hearing loss in thalassaemia patients in other studies was 25% in Olivieri et al. (n=89), 33% in Barratt et al. (n=27), 24% in Porter et al. (n=37), 15.5% in Argiolu et al. (n=308), 27% in Kontzolglulou et al. (n=88), and 29% in Styles et al. (n=28), compared with 3.4% in our study. In a recent study, Shamsian et al. found an incidence of 7.4% SNHL among 67 patients with β-thalassemia major treated with deferoxamine. They defined hearing loss as a hearing threshold of more than 15 dB. Although they did not report specifically which frequencies were involved, the researchers detected no association between serum ferritin level or DFO dosage and hearing loss.

Although there is a discrepancy between the rates in our study and the foregoing reports, the difference may be a result of our definitions of hearing loss and ototoxicity and our exclusion criteria. We reviewed the audiograms of the Barratt et al. study; three of those patients had a history of recurrent acute ear infections, and in three patients whose hearing loss was only above 6 kHz, bone conduction could not be assessed by the authors. We also reviewed the findings of the Porter et al. survey: of all 9 cases, 5 patients had hearing loss at only one frequency above 6 kHz. Likewise, in Styles and Vichinsky's study, of nine patients with abnormal audiograms, five had only one abnormal frequency.

In a 2002 study by Karimi et al., 128 patients receiving subcutaneous DFO at doses of 21 to 39 mg/kg/day were studied. Patients received their total weekly dose of DFO according to two different schedules: the first group received it on an every-other-day basis, and the second group received it 6 days a week. Of the patients in the first group, 44.7% had hearing loss in the right ear and 41.8% in the left ear, only at the 8000 Hz frequency, compared with 27.8% and 23%, respectively, in the second group. A significant correlation was found between the dose of drug given at each episode of DFO therapy and hearing loss at 8000 Hz. They concluded that DFO ototoxicity is determined not only by the total amount of the drug given, but also by its maximal plasma concentration. Although they reported a higher frequency of SNHL than other authors, they considered hearing loss significant only at one frequency (8000 Hz).

A retrospective controlled study by Masala et al. showed a 12% rate of SNHL in patients with thalassemia treated with DFO, while the control group of normal subjects showed a 10% rate of SNHL.
They found no significant difference between thalassemic patients and controls, and concluded that the data were inadequate to establish DFO ototoxicity. Similar findings were reported by Cohen et al, who found that 49 of 52 patients treated with DFO had no auditory or visual abnormalities. The lack of ototoxic side effects at lower doses is in good harmony with clinical reports of a low incidence of toxic side effects of DFO [5, 20–23]. In a recent national health survey on the prevalence of hearing loss among US adults, Agrawal et al found that in the youngest age group (20–29 years), 8.5% showed high-frequency hearing loss, and the prevalence appears to be growing in this age group. Other authors agree that the DFO dose generally used (<50 mg/kg/day) is not ototoxic; they report a frequency of hearing loss similar to that in the normal population. Ambrosetti et al, in a review of 38 adult patients with thalassemia major, support this view, since in their patients SNHL was related neither to therapeutic index nor to serum ferritin levels. Furthermore, the percentage of their patients with SNHL was similar to that in the normal population of the same age (15–35%). Their data suggest that no difference exists between thalassemic patients and the non-thalassemic population, and that it is reasonable to conclude that DFO is not ototoxic. We herein report the lowest incidence of hearing impairment in a large population of patients with thalassemia major who received deferoxamine. We found no difference between patients with and without SNHL in mean age, weight, age at first blood transfusion, or age at first DFO infusion. We found that desferrioxamine is not ototoxic at low doses; overall, there has been much misrepresentation and conflicting data about desferrioxamine ototoxicity in the literature. This study presents new, statistically valid information to help physicians make appropriate decisions regarding otologic problems in such patients. The authors also emphasize that physicians must attempt to clarify other causes of hearing loss in patients with thalassemia. Hearing-monitoring protocols must therefore be structured according to the particular characteristics of each individual patient, such as age, ability to respond to audiologic tests, and clinical status. This work was supported by a grant from the Vice Chancellor for Research of Shiraz University of Medical Sciences; the authors thank Dr Rooshanzamir for data collation.
Rather than testing for individual marker genes or proteins, researchers at the University of California, San Diego (UC San Diego) and the Moores UCSD Cancer Center have evidence that groups, or networks, of interactive genes may be more reliable in determining the likelihood that a form of leukemia is fast-moving or slow-growing. One of the problems in deciding on the right therapy for chronic lymphocytic leukemia (CLL) is that it is difficult to know which type a patient has. One form progresses slowly, with few symptoms for years. The other form is more aggressive and dangerous. While tests exist and are commonly used to help predict which form a patient may have, their usefulness is limited. Han-Yu Chuang, a Ph.D. candidate in the bioinformatics and systems biology program in the department of bioengineering at the UC San Diego Jacobs School of Engineering, senior author Thomas Kipps, M.D., Ph.D., professor of medicine and deputy director for research at the Moores UCSD Cancer Center, and their colleagues analyzed the activity and patterns of gene expression in cancer cells from 126 patients with aggressive or slow-growing CLL. The researchers, using complex algorithms, matched these gene activity profiles with a huge database of 50,000 known protein complexes and signaling pathways among nearly 10,000 genes/proteins, searching for "subnetworks" of aggregate gene expression patterns that separated groups of patients. They found 30 such gene subnetworks that, they say, were better at predicting whether the disease is aggressive or slow-growing than current techniques based on gene expression alone. They presented their results Monday, December 8, 2008 at the annual meeting of the American Society of Hematology in San Francisco. "We wanted to integrate the gene expression from the disease and a large network of human protein interactions to reconstruct the pathways involved in disease progression," Chuang explained. "By introducing the relevant pathway information, we can do a better job in prognosis." Chuang, co-author Trey Ideker, Ph.D., professor of bioengineering at UCSD, and their co-workers have previously shown the potential of this method in predicting breast cancer metastasis risk. "When you are analyzing just the gene expression, you are analyzing it in isolation," Chuang explained. "Genes act in concert and are functionally linked together. We have suggested that it makes more sense to analyze the genes' expression in a more mechanistic view, based on information about genes acting together in a particular pathway. We are looking for new markers - no longer individual genes - but a set of co-functional, interconnected genes," she said. "We would like to be able to model treatment-free survival." The current work is "proof of principle," Chuang said. Clinical trials will be needed to validate whether specific subnetworks of genes can actually predict CLL progression in patients. She thinks that the subnetworks can be used to provide "small scale biological models of disease progression," enabling researchers to better understand the process. Eventually, she said, a diagnostic chip might be designed to test blood samples for such genetic subnetworks that indicate the likely course of disease. The involved biological pathways could be drug targets as well. The American Cancer Society estimates that, in 2008, there will be about 15,110 new cases of CLL in the United States, with about 4,390 deaths from the disease. Laura Rassenti, Ph.D., UCSD, was also a co-author on the study.
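The subnetwork idea can be made concrete with a small scoring sketch. Following the general subnetwork-marker approach previously published by Chuang, Ideker, and colleagues, the Python snippet below aggregates the normalized expression of a subnetwork's member genes into one activity score per patient; the function, the gene names, and the choice of mean z-score aggregation are illustrative assumptions rather than the exact UCSD implementation.

import numpy as np

def subnetwork_activity(expr, gene_index, subnetwork_genes):
    # expr: array of shape (n_patients, n_genes); gene_index: gene name -> column.
    cols = [gene_index[g] for g in subnetwork_genes if g in gene_index]
    sub = expr[:, cols]
    # z-score each member gene across patients so genes are comparable...
    z = (sub - sub.mean(axis=0)) / (sub.std(axis=0) + 1e-9)
    # ...then average over members: one aggregate activity value per patient.
    return z.mean(axis=1)

# Toy example with hypothetical subnetwork members.
rng = np.random.default_rng(0)
expr = rng.normal(size=(4, 3))                  # 4 patients, 3 genes
index = {"ZAP70": 0, "LYN": 1, "SYK": 2}        # hypothetical gene columns
print(subnetwork_activity(expr, index, ["ZAP70", "SYK"]))

Scores like these can then be fed to any standard classifier to separate aggressive from slow-growing cases, which is the role individual gene markers play in conventional expression-based tests.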
The Moores UCSD Cancer Center is one of the nation's 41 National Cancer Institute-designated Comprehensive Cancer Centers, combining research, clinical care and community outreach to advance the prevention, treatment and cure of cancer. For more information, visit www.cancer.ucsd.edu.
VANCOUVER – Researchers at St. Paul's Hospital and Vancouver General Hospital are developing a revolutionary new test to diagnose and facilitate treatment of organ rejection in transplant patients. The $9.1-million Vancouver-based study, called the Better Biomarkers of Acute and Chronic Allograft Rejection Project, led by Drs. Bruce McManus, Paul Keown and Rob McMaster, is jointly funded by Genome Canada, Genome BC, Novartis Pharmaceuticals and IBM. The project is believed to be the largest study of its kind ever performed in Canada and will focus on patients who have undergone liver, heart and kidney transplants. The project leaders will make a plenary presentation about their work at the eighth annual British Columbia Transplant Research Day, to be held Thursday, December 9, 2004 at the Chan Auditorium, Children's and Women's Health Centre of BC. Patients with end-stage vital organ failure depend on transplantation, but the process still presents challenges. Immune cells that normally protect patients can cause rejection and destruction of the very organ intended to save their life. To test for rejection, patients must undergo uncomfortable and invasive biopsies. Patients must also take drugs that inhibit rejection by suppressing the immune response, which can have serious side effects. Project researchers seek to define which biomarkers (for example, substances found in the blood or other body fluids) can be used as diagnostic and prognostic tests for organ rejection and immunosuppressive therapy response. Being able to monitor and predict rejection using a simple blood test will significantly reduce intrusive and expensive diagnostic procedures. "One of the major problems facing clinical caregivers in the management of organ rejection is determining whether a transplanted organ is undergoing rejection," says Dr. Bruce McManus of the James Hogg iCAPTURE Centre, based at St. Paul's Hospital, and co-leader of the project. "Most of the current methods for detecting rejection require tissue biopsies. These procedures may cause emotional and physical discomfort to patients and may result in findings that are inconclusive." Project co-lead Dr. Paul Keown of the Vancouver Coastal Health Research Institute, VGH site, says, "In order to prevent organ rejection, powerful drugs are used to suppress a patient's immune system. Such therapies reduce the probability that the patient's own body will attack the transplanted organ, but impairing the immune system may leave the patient susceptible to infections and malignancies, and may damage the precious transplanted organs." Individual patients vary in their response to immunosuppression therapy. It is this variation that project researchers, using the most advanced genomic (study of genes), proteomic (study of proteins) and bioinformatic (information science) tools available, will seek to better understand. "These new tools are critical in order to produce an affordable, accurate, and widely useful test to determine whether rejection is occurring and how a patient's transplanted organ is faring," says Dr. Rob McMaster, project co-lead, Director of the Immunity and Infection Research Centre at the Vancouver Coastal Health Research Institute, and Director of Transplant Immunology Research for the BC Transplant Society. Understanding the different responses patients have to immunosuppressive therapy will also help physicians balance the necessity of the therapy with its possible side effects.
Personalized therapy could help reduce the enormous economic burden of over-prescribing immunosuppressive drugs. All three co-leaders of the Better Biomarkers of Acute and Chronic Allograft Rejection Project are faculty members at the University of British Columbia. This project is funded for three years by Genome Canada through Genome BC and private sector partners Novartis Pharmaceuticals and IBM. Other partners include Providence Health Care, the Vancouver General Hospital Foundation, St. Paul's Hospital Foundation, UBC, Genome BC, the James Hogg iCAPTURE Centre, BC Transplant Research Institute and Affymetrix. The research team includes international leaders in transplantation immunology, pathology, biochemistry, genomics, proteomics, statistics, information science and clinical care.
In computational ethology, the measurement of optokinetic responses (OKR) is an established method for determining thresholds of the visual system in various animal species. Wide-field movements of the visual environment elicit the typical body, head, and eye movements of optokinetic responses. Experimentally, regular patterns, e.g. black and white stripes, are usually moved continuously, and variation of stimulus parameters such as contrast, spatial frequency, and movement velocity makes it possible to determine visual thresholds. The measurement of eye movements is the most sensitive method of quantifying optokinetic responses, but it typically requires fixation of the head by invasive surgery. Hence the measurement of head movements is often used instead to rapidly measure the behavior of many individuals. While an animal performs these experiments, a human observer decides for each stimulus presentation whether or not a tracking reaction was observed. Since the animals' responses typically are not recorded, off-line analysis and the evaluation of other response characteristics are not possible. We developed a method to automatically quantify OKR behavior based on head movement in small vertebrates. For this purpose, we built a system consisting of a 360° visual panorama stimulus realized by four LCD monitors, and a camera positioned above the animal to record the head movements. A tracking algorithm retrieves the angle of the animal's head. Here, we present a method for automated detection of tracking behavior based on the difference between the angular velocities of head and stimulus movement. Tracking performance is measured as the amount of time the animal performs head movements corresponding to the stimulus movement for more than 1 s. For the optokinetic responses of mice, we show that the tracking time decreases with increasing spatial frequency of a sinusoidal stimulus pattern (Fig 1). While a human observer was not able to detect tracking movements at spatial frequencies > 0.44 cyc/deg, the automated method revealed a certain amount of tracking behavior at higher spatial frequencies as well. Thus, we were able to increase the sensitivity of the non-invasive measurement of optokinetic head movements into a range that formerly required the measurement of eye movements. Figure 1. A: Head movements in response to sinusoidally moving stimuli of two different spatial frequencies. Red: sequences that were automatically identified as tracking behavior. B: Automatically identified tracking behavior at different spatial frequencies (blue: median, N=12) in comparison to random head movements in the absence of a stimulus (red line: median, dashed: standard deviation) and to the threshold detected by a human observer (green). Supported by the German Research Foundation, research unit DFG-FOR701.
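The detection rule described above can be sketched directly from the abstract: head movement counts as tracking wherever the head's angular velocity stays close to the stimulus velocity, and only runs longer than 1 s are scored. In the Python sketch below, the sampling rate and the velocity tolerance are illustrative assumptions, since the abstract does not give the system's exact parameters.

import numpy as np

def tracking_time(head_angle, stim_velocity, fs=25.0, tol=5.0, min_dur=1.0):
    # head_angle: head orientation per video frame (degrees); fs: frame rate (Hz).
    head_velocity = np.gradient(head_angle) * fs      # deg/s
    matching = np.abs(head_velocity - stim_velocity) < tol
    min_samples = int(min_dur * fs)
    total = run = 0
    for m in matching:
        run = run + 1 if m else 0
        if run == min_samples:      # run just reached 1 s: count it all
            total += run
        elif run > min_samples:     # keep extending an already-counted run
            total += 1
    return total / fs               # seconds of tracking behavior

# Toy check: 10 s of perfect tracking of a 10 deg/s stimulus yields ~10 s.
t = np.arange(0, 10, 1 / 25.0)
print(tracking_time(10.0 * t, 10.0))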
Transverse section of a portion of the spleen (spleen pulp labeled at lower right).
The red pulp of the spleen is composed of connective tissue known as the cords of Billroth and many splenic sinuses that are engorged with blood, giving it a red color. Its primary function is to filter the blood of antigens, microorganisms, and defective or worn-out red blood cells. The spleen is made of red pulp and white pulp, separated by the marginal zone; 76–79% of a normal spleen is red pulp. Unlike white pulp, which mainly contains lymphocytes such as T cells, red pulp is made up of several different types of blood cells, including platelets, granulocytes, red blood cells, and plasma. The splenic sinuses, also known as sinusoids, are wide vessels that drain into trabecular veins. Gaps in the endothelium lining the sinusoids mechanically filter blood cells as they enter the spleen. Worn-out or abnormal red cells attempting to squeeze through the narrow intercellular spaces are badly damaged and subsequently devoured by macrophages in the red pulp. In addition to aged red blood cells, the sinusoids also filter out particles that could clutter the bloodstream, such as nuclear remnants, platelets, or denatured hemoglobin.
Cells found in red pulp
Red pulp consists of a dense network of fine reticular fibers, continuous with those of the splenic trabeculae, to which flat, branching cells are applied. The meshes of the reticulum are filled with blood:
- White corpuscles are found in larger proportion than in ordinary blood.
- Large rounded cells, termed splenic cells, are also seen; these are capable of ameboid movement and often contain pigment and red blood corpuscles in their interior.
- The cells of the reticulum each possess a round or oval nucleus, and like the splenic cells, they may contain pigment granules in their cytoplasm; they do not stain deeply with carmine, and in this respect differ from the cells of the Malpighian corpuscles.
- In the young spleen, macrophages may also be found, each containing numerous nuclei or one compound nucleus.
- Nucleated red blood corpuscles have also been found in the spleen of young animals.
In lymphoid leukemia, the white pulp of the spleen hypertrophies and the red pulp shrinks; in some cases the white pulp can swell to 50% of the total volume of the spleen. In myeloid leukemia, the white pulp atrophies and the red pulp expands.
Chronic rhinosinusitis (CRS) is an important public health problem and has a major impact on quality of life [1–4]. Studies of CRS have been limited by poor access to tissue, the complexity of sinonasal physiology, a lack of available biomarkers, the absence of useful animal models, a paucity of cohorts with biological samples for analysis, and few well-designed clinical trials or investigations of immune function. Therefore, novel strategies for identifying the biological mechanisms underlying this disease are much needed. A number of studies have shown abnormalities in immune responses in CRS, and recent studies have highlighted the close relationship between the epithelium and the immune system. Indeed, Th1, Th2, Th17, and T regulatory (Treg) cells, neutrophils, and eosinophils, as well as their associated cytokines and mediators, have all been implicated in the resultant cellular and molecular immunopathology [6–13]. Moreover, common to these studies is the conclusion that dysregulation of immune responses contributes to the striking inflammation seen. Factors that affect these responses may be involved in the persistent inflammation of CRS. If we could understand what stimuli generate the hyperinflammatory responses in the sinonasal mucosa, we could better understand what factors affect this process. The aforementioned work has been performed under the prevailing framework of CRS as an inflammatory disease. Early efforts toward identifying the genesis of this inflammatory response focused on bacteria, using culture techniques to assess the presence of organisms. Although studies show that some patients yield positive cultures from sinus samples, it has been difficult to generate support for this idea because a high proportion of CRS cultures fail to grow any organisms or yield bacteria with no obvious relationship to disease. The variability in recovery may result from differences in the methodologies used for collection, processing, transportation, and cultivation, patient heterogeneity, geographical characteristics, and prior therapy, but it also reflects the fact that many organisms do not grow in culture. One must consider that culture-based methods of assessing the presence of bacteria fail to recognize a large set of organisms that are present in the human body. In the gut, for example, cultivation-based technologies limit analysis because more than 80% of the estimated species are not readily cultivated. Cultures also fail to provide information on the community composition and structure, phenotype, function, and gene expression of these organisms in their natural habitats. Therefore, we cannot discount the hypothesis that organisms from the sinuses that do not grow in culture influence the inflammation in CRS, either directly or indirectly, through effects on other organisms involved in disease. Similarly, environmental factors, such as smoking, may affect the composition of these bacteria. Studies on the total genomes of the bacteria inhabiting the body, termed the microbiome, have recently been performed at the National Institutes of Health. To elucidate the normal human microbiome, samples were collected from 242 healthy Americans (129 men and 113 women) [15▪▪]. Samples were collected from 15 and 18 body sites in men and women, respectively. A maximum of three samples were collected from the oral and nasal cavities, skin (the posterior regions of both ears and the medial sides of both elbows), lower intestine (feces), and, in women, the vagina.
Total microbial DNA was purified from each sample and analyzed with a DNA sequencer. To identify bacteria, the variable regions of the bacteria-specific 16S rRNA gene were targeted, and 3.5 trillion base pairs of genomic sequence data were generated. In this way, bacterial DNA sequences could be analyzed without analyzing human genomic DNA sequences. In addition, the metabolic activities encoded by these microbial genes could be analyzed by metagenomic sequencing (determination of the total DNA sequences of the microorganisms). This Human Microbiome Project (HMP) demonstrated the presence of more than 10 000 microbial species in the human body, and it was estimated that 81–99% of the total microbial species inhabiting healthy individuals were identified. Analysis of the microbiome in various diseases has also been progressing. For example, in an analysis of the nasal microbiome of children with fever of unknown origin, which frequently occurs in children, up to five times more viral DNA was detected in nasal cavity samples from children with fever than in those without fever, and DNA from a broad range of viral species was detected [16▪]. Many studies have recently been performed on the function of the intestinal microbiome in Crohn's disease [17▪], ulcerative colitis [18▪], and esophageal cancer; the function of the skin microbiome in psoriasis, atopic dermatitis, and immunodeficiency; the function of the urogenital microbiome in pregnancy [19▪], sexual history, and surgery for phimosis; and its function in many childhood diseases, such as pediatric abdominal pain and enteritis and the serious condition of premature babies with intestinal dysfunction [20▪]. There are still many unclear points with regard to microbiome function in CRS, but allergic diseases and asthma are essential factors that need to be investigated with regard to the development of CRS, and to understand the association between allergic diseases, asthma, and the microbiome, it is very important to elucidate the pathology of CRS. The hygiene hypothesis is very important when considering the development of allergic diseases and asthma. In 1989, Strachan proposed that a lack of exposure to microbial environments in childhood, because of the "clean environment" accompanying advances in hygiene, leads to poor development of immunity and increases the risk of allergic diseases. This hygiene hypothesis is based on the observation that the risk of allergic diseases is low in children with many siblings and in those who have grown up on farms with much contact with domestic animals. However, no direct relationship between infection and allergy was shown at that time, and this was a simplified observation. Later, a lower incidence of allergic and autoimmune diseases in individuals with numerous exposures to parasites was demonstrated by epidemiological and experimental studies. In 1998, it was clarified that changes in the intestinal bacterial composition in childhood caused by changes in lifestyle influence the tolerability of mucosal immunity and induce "distorted" immune reactions, connecting the hygiene hypothesis with the role of microorganisms. Two cross-sectional surveys have recently been performed in a total of 16 000 children, and it was clarified that the development of asthma was inhibited in children with a more marked exposure to bacteria and fungi [24▪▪].
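As an aside on method, 16S rRNA surveys of the kind described above are usually summarized with community-level statistics; the Shannon diversity index is a common choice. The Python snippet below computes it from per-taxon read counts; the sample data are hypothetical, and the use of this particular index here is an assumption, not a detail reported by the cited studies.

import math
from collections import Counter

def shannon_diversity(taxon_counts):
    # H' = -sum(p_i * ln p_i) over taxa, from 16S read counts per taxon.
    total = sum(taxon_counts.values())
    props = (c / total for c in taxon_counts.values() if c > 0)
    return -sum(p * math.log(p) for p in props)

# Hypothetical genus-level read counts from one nasal sample.
sample = Counter({"Staphylococcus": 420, "Corynebacterium": 310,
                  "Propionibacterium": 180, "Moraxella": 90})
print(round(shannon_diversity(sample), 2))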
ALLERGIC DISEASES/ASTHMA AND INTESTINAL BACTERIA
Put simply, immune tolerance represents the biological capacity to identify antigens and to regulate defensive actions against them. Collapse of this tolerance is closely involved in the pathogenesis of various diseases. Acquired immunity plays an important role in the differentiation of self and nonself, but intestinal bacteria contain abundant nonself antigens, including those derived from food, and constantly present antigens to the acquired immune system. Symbiotic microorganisms in the intestine develop a mechanism for inhibiting unnecessary inflammation by inducing immune tolerance through coordination with the innate immune system or regulation of the acquired immune system. Allergic diseases and asthma are considered to be induced by excess reactions of Th2 cells. Th2 cells are characterized by IL-4, IL-5, IL-9, and IL-13 production, and these cytokines form and modulate the pathology of allergic inflammation. Not only Th2 cells but also Th1 cells are involved in the pathogenesis of asthma. Studies have shown an increasing role for Th17 and Th9 cells in asthma [27–29]. Tregs are important for the regulation of immune tolerance and play an important role in the modulation of inflammatory reactions [30–32]. Toll-like receptors and nucleotide-binding oligomerization domain-like receptors are expressed in the intestinal epithelium, and dendritic cells are activated through these receptors and regulate immune tolerance. It has been shown that intestinal bacteria, such as Lactobacillus and Bacteroides, promote Treg expression in the body and enhance the secretion of IL-10 and TGF-β [33,34]. Furthermore, it has been clarified that the oral ingestion of bacteria corrects the Th1/Th2 balance and promotes differentiation into Th17 cells, suggesting that Th17 cells are involved in defense against pathogens reaching the intestine. The close involvement of these immunocytes in changes in the intestinal bacteria has been shown, indicating that the intestinal bacterial balance is strongly associated with the development of allergic diseases and asthma.
EFFECTS OF THE MICROBIOME ON CHRONIC RHINOSINUSITIS
It has been clarified that various factors are involved in the pathogenesis of CRS [37▪]. For example, the relationship between CRS with nasal polyps (CRSwNP) and IgE against Staphylococcus aureus superantigen, as well as a Th1/Th2 imbalance due to a decrease in Tregs, is widely known. In addition, CRSwNP is closely associated with the development of asthma, from which it is easy to imagine that the intestinal bacterial balance is involved in the development of CRS. However, there has been no report on the association between CRS and the intestinal microbiome. The association between CRS and bacteria in the nasal cavity and paranasal sinuses has been investigated in many studies, but gene analysis of these bacteria, that is, the microbiome, has been performed in only a few studies [38–40,41▪▪]. The first such study was performed in 2003, in which a bacteria-specific gene, 16S rDNA, was amplified from the mucosa and maxillary sinus lavage of 11 patients with maxillary sinusitis. Bacterial genes were amplified in four patients and identified as S. aureus, gram-positive organisms, gram-negative organisms, and anaerobes. However, no fungus was detected. Later, two articles were published in 2010. In one article, bacterial gene analysis was performed on mucosal samples. Bacterial genes were amplified in all 18 patients; S. aureus and coagulase-negative staphylococci were detected in many samples, but even more anaerobes were detected.
The involvement of anaerobes in the development of CRS has been suggested, and this study demonstrated it at the gene level. In the other article, maxillary sinus lavage fluid was analyzed, and a total of 142 bacterial genes were amplified, including many genes of bacteria indigenous to the oral cavity. In a recently reported study, cotton swabs from 15 CRS patients were analyzed, and more than 50 000 bacterial gene sequences were detected in total. The incidence of asthma and the diversity of S. aureus genes increased as the diversity of bacterial genes in the samples increased. On the basis of these observations, a relationship between S. aureus superantigen, which is considered a cause of CRS, and the development of asthma can also be assumed.
In the present review, we have outlined existing information on CRS, allergic diseases, asthma, and the microbiome. There is still much to do to improve our understanding of the role the microbiome plays in the development of CRS. We need to better understand the relationship of CRS not only with the intestinal microbiome but also with the microbiome of the nasal cavity and paranasal sinuses; this is difficult because of the challenge of collecting samples from the paranasal sinuses without contamination and the inconsistency of analytical methods, as no standard sampling methods have been established. As recent progress in analytical methods has facilitated the investigation of microbiomes, further studies investigating the relationship between CRS and asthma with regard to the microbiome are warranted.
Conflicts of interest
There are no conflicts of interest.
REFERENCES AND RECOMMENDED READING
Papers of particular interest, published within the annual period of review, have been highlighted as: ▪ of special interest; ▪▪ of outstanding interest.
1. Benninger MS, Ferguson BJ, Hadley JA, et al. Adult chronic rhinosinusitis: definitions, diagnosis, epidemiology, and pathophysiology. Otolaryngol Head Neck Surg 2003; 129:S1–S32.
2. Anand VK. Epidemiology and economic impact of rhinosinusitis. Ann Otol Rhinol Laryngol Suppl 2004; 193:3–5.
3. Senior BA, Glaze C, Benninger MS. Use of the Rhinosinusitis Disability Index (RSDI) in rhinologic disease. Am J Rhinol 2001; 15:15–20.
4. Lund VJ. Impact of chronic rhinosinusitis on quality of life and healthcare expenditure. Clin Allergy Immunol 2007; 20:15–24.
5. Kern RC, Conley DB, Walsh W, et al. Perspectives on the etiology of chronic rhinosinusitis: an immune barrier hypothesis. Am J Rhinol 2008; 22:549–559.
6. Van Cauwenberge P, Van Hoecke H, Bachert C. Pathogenesis of chronic rhinosinusitis. Curr Allergy Asthma Rep 2006; 6:487–494.
7. Van Bruaene N, Perez-Novo CA, Basinski TM, et al. T-cell regulation in chronic paranasal sinus disease. J Allergy Clin Immunol 2008; 121:1435–1441.
8. Lane AP, Truong-Tran QA, Schleimer RP. Altered expression of genes associated with innate immunity and inflammation in recalcitrant rhinosinusitis with polyps. Am J Rhinol 2006; 20:138–144.
9. Ramanathan M Jr, Lee WK, Spannhake EW, et al. Th2 cytokines associated with chronic rhinosinusitis with polyps down-regulate the antimicrobial immune function of human sinonasal epithelial cells. Am J Rhinol 2008; 22:115–121.
10. Ramanathan M Jr, Spannhake EW, Lane AP. Chronic rhinosinusitis with nasal polyps is associated with decreased expression of mucosal interleukin 22 receptor. Laryngoscope 2007; 117:1839–1843.
11. Schleimer RP, Kato A, Kern R, et al. Epithelium: at the interface of innate and adaptive immune responses. J Allergy Clin Immunol 2007; 120:1279–1284.
12. Schleimer RP, Lane AP, Kim J. Innate and acquired immunity and epithelial cell function in chronic rhinosinusitis. Clin Allergy Immunol 2007; 20:51–78.
13. Fokkens W, Lund V, Mullol J. EP3OS 2007: European position paper on rhinosinusitis and nasal polyps 2007: a summary for otorhinolaryngologists. Rhinology 2007; 45:97–101.
14. Eckburg PB, Bik EM, Bernstein CN, et al. Diversity of the human intestinal microbial flora. Science 2005; 308:1635–1638.
15▪▪. The Human Microbiome Project Consortium. A framework for human microbiome research. Nature 2012; 486:215–221. The newest findings of the HMP project: from 242 adults, 5177 microbial taxonomic profiles from 16S ribosomal RNA genes and over 3.5 terabases of metagenomic sequence have been generated.
16▪. Wylie KM, Mihindukulasuriya KA, Sodergren E, et al. Sequence analysis of the human virome in febrile and afebrile children. PLoS ONE 2012; 7:e27735. On average, nasopharynx and plasma samples from febrile children contained 1.5-fold to 5-fold more viral sequences, respectively, than samples from afebrile children.
17▪. Wu GD, Chen J, Hoffmann C, et al. Linking long-term dietary patterns with gut microbial enterotypes. Science 2011; 334:105–108. Fecal communities clustered into enterotypes distinguished primarily by levels of Bacteroides and Prevotella. Enterotypes were strongly associated with long-term diets.
18▪. Zella GC, Hait EJ, Glavan T, et al. Distinct microbiome in pouchitis compared to healthy pouches in ulcerative colitis and familial adenomatous polyposis. Inflamm Bowel Dis 2011; 17:1092–1100. The pouch microbial environment appears to be distinctly different in the settings of ulcerative colitis-associated pouchitis, healthy ulcerative colitis, and familial adenomatous polyposis.
19▪. Ravel J, Gajer P, Abdo Z, et al. Vaginal microbiome of reproductive-age women. Proc Natl Acad Sci USA 2011; 108 (Suppl 1):4680–4687. The proportions of each community group (Lactobacillus iners, L. crispatus, L. gasseri, or L. jensenii) varied among the four ethnic groups, and these differences were statistically significant.
20▪. Mai V, Young CM, Ukhanova M, et al. Fecal microbiota in premature infants prior to necrotizing enterocolitis. PLoS One 2011; 6:e20647. Abnormal patterns of microbiota, and potentially a novel pathogen, contribute to the cause of necrotizing enterocolitis.
21. Strachan DP. Hay fever, hygiene, and household size. BMJ 1989; 299:1259–1260.
22. Maizels RM. Exploring the immunology of parasitism – from surface antigens to the hygiene hypothesis. Parasitology 2009; 136:1549–1564.
23. Wold AE. The hygiene hypothesis revised: is the rising frequency of allergy due to changes in the intestinal flora? Allergy 1998; 53:20–25.
24▪▪. Ege MJ, Mayer M, Normand AC, et al. Exposure to environmental microorganisms and childhood asthma. N Engl J Med 2011; 364:701–709. It was clarified that the development of asthma was inhibited in children with more marked exposure to bacteria and fungi.
25. Lee YK, Mazmanian SK. Has the microbiota played a critical role in the evolution of the adaptive immune system? Science 2010; 330:1768–1773.
26. Kero J, Gissler M, Hemminki E, et al. Could TH1 and TH2 diseases coexist? Evaluation of asthma incidence in children with coeliac disease, type 1 diabetes, or rheumatoid arthritis: a register study. J Allergy Clin Immunol 2001; 108:781–783.
27. Molet S, Hamid Q, Davoine F, et al. IL-17 is increased in asthmatic airways and induces human bronchial fibroblasts to produce cytokines. J Allergy Clin Immunol 2001; 108:430–438.
28. Cheng G, Arima M, Honda K, et al. Anti-interleukin-9 antibody treatment inhibits airway inflammation and hyperreactivity in a mouse asthma model. Am J Respir Crit Care Med 2002; 166:409–416.
29. Wang YH, Voo KS, Liu B, et al. A novel subset of CD4(+) T(H)2 memory/effector cells that produce inflammatory IL-17 cytokine and promote the exacerbation of chronic allergic asthma. J Exp Med 2010; 207:2479–2491.
30. Karlsson MR, Rugtveit J, Brandtzaeg P. Allergen-responsive CD4+CD25+ regulatory T cells in children who have outgrown cow's milk allergy. J Exp Med 2004; 199:1679–1688.
31. Provoost S, Maes T, van Durme YM, et al. Decreased FOXP3 protein expression in patients with asthma. Allergy 2009; 64:1539–1546.
32. Wei W, Liu Y, Wang Y, et al. Induction of CD4+CD25+Foxp3+IL-10+ T cells in HDM-allergic asthmatic children with or without SIT. Int Arch Allergy Immunol 2010; 153:19–26.
33. Round JL, Mazmanian SK. Inducible Foxp3+ regulatory T-cell development by a commensal bacterium of the intestinal microbiota. Proc Natl Acad Sci USA 2010; 107:12204–12209.
34. Ly NP, Ruiz-Perez B, Onderdonk AB, et al. Mode of delivery and cord blood cytokines: a birth cohort study. Clin Mol Allergy 2006; 4:13.
35. Ghadimi D, Folster-Holst R, de Vrese M, et al. Effects of probiotic bacteria and their genomic DNA on TH1/TH2-cytokine production by peripheral blood mononuclear cells (PBMCs) of healthy and allergic subjects. Immunobiology 2008; 213:677–692.
36. Ivanov II, Atarashi K, Manel N, et al. Induction of intestinal Th17 cells by segmented filamentous bacteria. Cell 2009; 139:485–498.
37▪. Van Crombruggen K, Zhang N, Gevaert P, et al. Pathogenesis of chronic rhinosinusitis: inflammation. J Allergy Clin Immunol 2011; 128:728–732. This review focuses on recent evidence that sheds new light on our current knowledge regarding the inflammatory process of CRS, to further unravel its pathogenesis.
38. Paju S, Bernstein JM, Haase EM, et al. Molecular analysis of bacterial flora associated with chronically inflamed maxillary sinuses. J Med Microbiol 2003; 52:591–597.
39. Stephenson MF, Mfuna L, Dowd SE, et al. Molecular characterization of the polymicrobial flora in chronic rhinosinusitis. J Otolaryngol Head Neck Surg 2010; 39:182–187.
40. Lewenza S, Charron-Mazenod L, Cho JJ, et al. Identification of bacterial contaminants in sinus irrigation bottles from chronic rhinosinusitis patients. J Otolaryngol Head Neck Surg 2010; 39:458–463.
41▪▪. Feazel LM, Robertson CE, Ramakrishnan VR, et al. Microbiome complexity and Staphylococcus aureus in chronic rhinosinusitis. Laryngoscope 2012; 122:467–472. This is the first report in which bacterial DNA in the nose was analyzed by pyrosequencing. Greater abundance of S. aureus may characterize the disease state of CRS.
Cardiomyopathy is a serious disease in which the heart muscle becomes inflamed and doesn't work as it should. The term "cardiomyopathy" literally means "sick heart muscle." There are two types of cardiomyopathy in dogs: Hypertrophic Cardiomyopathy and Dilated Cardiomyopathy. In Hypertrophic Cardiomyopathy the walls of the chambers of the heart thicken, leading to a decrease in pumping efficiency. This form of cardiac failure is quite rare in dogs. In Dilated Cardiomyopathy (DCM) the chambers of the heart increase in size and the muscles that form the walls of the heart stretch thinner. Canine DCM is one of the causes of Congestive Heart Failure (CHF) and is the more common type of cardiomyopathy in dogs. DCM usually affects the left side of the heart. DCM of the right side of the heart can occur but is much rarer, and in some dogs DCM affects both sides of the heart. Large and giant breeds are at greater risk of developing DCM, including Doberman Pinschers, Irish Wolfhounds, Scottish Deerhounds, Boxers, Afghan Hounds, Old English Sheepdogs, Great Danes, Dalmatians, Newfoundlands, and Saint Bernards. English and American Cocker Spaniels and Portuguese Water Dogs also develop DCM. The average age of onset is 4 to 10 years, although Portuguese Water Dogs can acquire the disease at a very young age. DCM is very serious, and the mortality rate, even of treated cases, is very high. The vast majority of cases of DCM are idiopathic (having no known cause), but certain breeds appear to have an inherited predisposition; other possible causes have also been proposed. There is no way to prevent DCM; however, early screening of dogs of breeds that have a high incidence of DCM may help identify important changes prior to the onset of signs. Affected dogs should not be bred. Early in the disease process there may be no clinical signs. However, there are signs you can look for, such as difficulty breathing, increased coughing, lethargy, or a sudden inability to use one or more limbs; any of these symptoms may mean your dog needs emergency care. A cardiac exam by a veterinarian can detect abnormal heart sounds (when present) and many signs of heart failure. Diagnostic tests are needed to recognize Dilated Cardiomyopathy and to exclude other diseases, and your veterinarian may recommend additional diagnostic tests to exclude or diagnose other conditions. In some cases, a heart murmur (usually soft), other abnormal heart sounds, and/or an irregular heart rhythm may be heard on examination; this is more likely as the disease progresses. There is no cure for DCM. Treatment is aimed at controlling symptoms and delaying the onset of heart failure. Dogs may be treated at home with a combination of medications. Dogs that present to the veterinarian in heart failure are treated with oxygen therapy in addition to furosemide. Therapy is always tailored to the needs of the dog. Since this disease is not reversible and heart failure tends to be progressive, the intensity of therapy (for example, the number of medicines and the dosages used) usually must be increased over time. Administer prescribed medications as directed by your veterinarian. Watch for difficulty in breathing, an increase in coughing, lethargy, or a sudden inability to use one or more limbs. Observe the breathing rate when your pet is relaxing. Schedule regular veterinary visits to monitor your dog's condition.
The AKC Canine Health Foundation has funded 13 grants to study dilated cardiomyopathy. These grants are meant to increase current knowledge about the inheritance and genetic causes of dilated cardiomyopathy and to develop novel treatment methods.
Operating room fires are sentinel events that present a real danger to surgical patients and occur at least as frequently as wrong-sided surgery. For fire to occur, the 3 points of the fire triad must be present: an oxidizer, an ignition source, and a fuel source. The electrosurgical unit (ESU) pencil triggers most operating room fires. Carbon dioxide (CO2) is a gas that prevents ignition and suppresses fire by displacing oxygen. We hypothesize that a device can be created to reduce operating room fires by generating a cone of CO2 around the ESU pencil tip. One such device was created by fabricating a divergent nozzle and connecting it to a CO2 source. This device was then placed over the ESU pencil, allowing the tip to be encased in a cone of CO2 gas. The device was then tested in 21%, 50%, and 100% oxygen environments. The ESU was activated in 50 W cut mode while the ESU pencil tip was placed on a laparotomy sponge resting on an aluminum test plate, for up to 30 seconds or until the sponge ignited. High-speed videography was used to identify the time of ignition. Each test was performed in each oxygen environment 5 times with the device activated (CO2 flow 8 L/min) and 5 times with the device deactivated (no CO2 flow; control). In addition, 3-dimensional spatial mapping of CO2 concentrations was performed with a CO2 sampling device. The median ± SD [range] ignition time of the control group in 21% oxygen was 2.9 s ± 0.44 [2.3–3.0], in 50% oxygen 0.58 s ± 0.12 [0.47–0.73], and in 100% oxygen 0.48 s ± 0.50 [0.03–1.27]. Fires ignited in every control trial (15/15); no fires ignited when the device was used (0/15, P < 0.0001). The CO2 concentration at the end of the ESU pencil tip was 95%, while the average CO2 concentration 1 to 1.4 cm away from the pencil tip on the bottom plane was 64%. In conclusion, an operating room fire prevention device can be created by using a divergent nozzle design through which CO2 passes, creating a cone of fire suppressant. This device, as demonstrated in a flammability model, effectively reduced the risk of fire. CO2 3-dimensional spatial mapping suggests effective fire reduction at least 1 cm away from the tip of the ESU pencil at 8 L/min CO2 flow. Future testing should determine optimum CO2 flow rates and ideal nozzle shapes. Use of this device may substantially reduce the risk of patient injury due to operating room fires.
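The reported P < 0.0001 for the 15/15 versus 0/15 ignition comparison is consistent with a simple two-by-two contingency analysis. The Python snippet below reproduces that style of calculation with Fisher's exact test; the choice of test is an assumption, as the abstract does not name the statistical method used.

from scipy.stats import fisher_exact

# Rows: control vs. CO2 device; columns: ignition vs. no ignition.
table = [[15, 0],    # control trials: all 15 ignited
         [0, 15]]    # device trials: none ignited

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(p_value)  # ~1.3e-08, i.e. P < 0.0001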
October 19, 2011 Next-generation Database of Genomic Variants launches Version 2 of the Database of Genomic Variants (DGV) launches this week. DGV – also known as “the Toronto Database” – is a public resource that facilitates the translation of genomic information into new diagnostic, prognostic and therapeutic tools for improving health. DGV was initially created in 2004, as an initiative of The Centre for Applied Genomics (TCAG) at The Hospital for Sick Children (SickKids) and the University of Toronto’s McLaughlin Centre. DGV is the most comprehensive international repository that houses human genomic copy number and structural variants. DGV provides significant support for thousands of clinical diagnostic centres around the world. The new and improved database will expand functions of DGV. The new database is found at http://dgvbeta.tcag.ca/dgv/app/home. "DGV continues to grow in popularity and impact. We polled our users to guide DGV’s expanded scope, while maintaining the simplicity of the original database,” says Dr. Stephen Scherer, Director of TCAG at SickKids and Director of the McLaughlin Centre at the University of Toronto. Sequencing of the human genome has resulted in discoveries about the differences in the DNA of individuals and their relationship to the uniqueness of the human species. Only a few years ago, an international team of scientists that included Scherer's lab discovered that certain genes can be present in aberrant copy numbers, with others being structurally altered, in some individuals but not in others. These copy number variations (CNVs) and structural alterations have been shown to influence susceptibility to disease and response to treatments. The new DGV will expand its content to include genomic variants from genome sequencing experiments through a unique partnership with the European Bioinformatics Institute (EBI) and National Center for Biotechnology Information (NCBI). DGV has also implemented new interactive query tools and interfaces for viewing complex data originating from genome scanning experiments. "Rapid advances in DNA sequencing technologies to identify genetic variations in important genes are impacting all clinical disciplines. DGV is already widely used by clinical and laboratory geneticists to distinguish benign from pathogenic structural variation. We are poised at the brink of an era of genomic medicine and the new DGV will enable access to all of this important data supporting thousands of clinical diagnoses around the world," says Dr. Bridget Fernandez, clinical geneticist and President of the Canadian College of Medical Geneticists. DGV is supported by the McLaughlin Centre, Genome Canada, the Ontario Genomics Institute, the Canadian Institutes of Health Research (CIHR) and SickKids Foundation. Scherer is a Fellow of the Canadian Institute for Advanced Research (CIFAR) and holds the GlaxoSmithKline/CIHR Pathfinder Chair in Genome Sciences at SickKids and the University of Toronto. About The Hospital for Sick Children The Hospital for Sick Children (SickKids) is recognized as one of the world’s foremost paediatric health-care institutions and is Canada’s leading centre dedicated to advancing children’s health through the integration of patient care, research and education. Founded in 1875 and affiliated with the University of Toronto, SickKids is one of Canada’s most research-intensive hospitals and has generated discoveries that have helped children globally. 
Its mission is to provide the best in complex and specialized family-centred care; pioneer scientific and clinical advancements; share expertise; foster an academic environment that nurtures health-care professionals; and champion an accessible, comprehensive and sustainable child health system. SickKids is proud of its vision for Healthier Children. A Better World. For more information, please visit www.sickkids.ca.
About SickKids Research & Learning Tower
SickKids Research & Learning Tower will bring together researchers from different scientific disciplines and a variety of clinical perspectives, to accelerate discoveries, new knowledge and their application to child health — a different concept from traditional research building designs. The Tower will physically connect SickKids science, discovery and learning activities to its clinical operations. Designed by award-winning architects Diamond + Schmitt Inc. and HDR Inc. with a goal to achieve LEED® Gold Certification for sustainable design, the Tower will create an architectural landmark as the eastern gateway to Toronto's Discovery District. SickKids Research & Learning Tower is funded by a grant from the Canada Foundation for Innovation and community support for the ongoing fundraising campaign. For more information, please visit www.buildsickkids.com.
About the University of Toronto
Established in 1827, the University of Toronto has assembled one of the strongest research and teaching faculties in North America, presenting top students at all levels with an intellectual environment unmatched in breadth and depth on any other Canadian campus. U of T faculty co-author more research articles than their colleagues at any university in the US or Canada other than Harvard. As a measure of impact, U of T consistently ranks alongside the top five U.S. universities whose discoveries are most often cited by other researchers around the world. The U of T faculty are also widely recognized for their teaching strengths and commitment to graduate supervision. www.utoronto.ca
Media contact: The Hospital for Sick Children, phone 416-813-7654 ext. 2059
An adrenergic agonist is a drug that stimulates a response from the adrenergic receptors. The five main categories of adrenergic receptors are α1, α2, β1, β2, and β3, although there are further subtypes; agonists vary in their specificity between these receptors and may be classified accordingly. However, there are also other mechanisms of adrenergic agonism. Epinephrine and norepinephrine are endogenous and broad-spectrum. More selective agonists are more useful in pharmacology. An adrenergic agent is a drug, or other substance, that has effects similar to, or the same as, epinephrine (adrenaline); thus, it is a kind of sympathomimetic agent. Alternatively, the term may refer to something that is susceptible to epinephrine or similar substances, such as a biological receptor (specifically, the adrenergic receptors). Directly acting adrenergic agonists act on adrenergic receptors. All adrenergic receptors are G-protein coupled, activating signal transduction pathways. The G-protein-coupled receptor can affect the function of adenylate cyclase or phospholipase C, and an agonist of the receptor will upregulate the effects on the downstream pathway (it will not necessarily upregulate the pathway itself). The receptors are broadly grouped into α and β receptors. There are two subclasses of α receptor, α1 and α2, which are further subdivided into α1A, α1B, α1D, α2A, α2B, and α2C. The α2C receptor has been reclassified from α1C, owing to its greater homology with the α2 class, giving rise to the somewhat confusing nomenclature. The β receptors are divided into β1, β2, and β3. The receptors are classed physiologically, though pharmacological selectivity for receptor subtypes exists and is important in the clinical application of adrenergic agonists (and, indeed, antagonists). From an overall perspective: α1 receptors activate phospholipase C (via Gq), increasing the activity of protein kinase C (PKC); α2 receptors inhibit adenylate cyclase (via Gi), decreasing the activity of protein kinase A (PKA); and β receptors activate adenylate cyclase (via Gs), thus increasing the activity of PKA. Agonists of each class of receptor elicit these downstream responses.
Uptake and storage
Indirectly acting adrenergic agonists affect the uptake and storage mechanisms involved in adrenergic signalling. Two uptake mechanisms exist for terminating the action of adrenergic catecholamines: uptake 1 and uptake 2. Uptake 1 occurs at the presynaptic nerve terminal to remove the neurotransmitter from the synapse. Uptake 2 occurs at postsynaptic and peripheral cells to prevent the neurotransmitter from diffusing laterally. There is also enzymatic degradation of the catecholamines by two main enzymes: monoamine oxidase and catechol-O-methyltransferase. Respectively, these enzymes oxidize monoamines (including catecholamines) and methylate the hydroxyl groups of the phenyl moiety of catecholamines. These enzymes can be targeted pharmacologically. Inhibitors of these enzymes act as indirect agonists of adrenergic receptors, as they prolong the action of catecholamines at the receptors. In general, a primary or secondary aliphatic amine separated by two carbons from a substituted benzene ring is minimally required for high agonist activity. A great number of drugs are available that can affect adrenergic receptors. Each drug has its own receptor specificity, giving it a unique pharmacological effect. Other drugs affect the uptake and storage mechanisms of adrenergic catecholamines, prolonging their action.
The following headings provide some useful examples to illustrate the various ways in which drugs can enhance the effects of adrenergic receptors. These drugs act directly on one or more adrenergic receptors. According to receptor selectivity, they are of two types, with a summary table given after this list:
- Non-selective: drugs that act on several receptor types. These include adrenaline (almost all adrenergic receptors), noradrenaline (acts on α1, α2, and β1), isoprenaline (acts on β1, β2, and β3), and dopamine (acts on α1, α2, β1, D1, and D2).
- Selective: drugs that act on a single receptor only; these are further classified into α-selective and β-selective agents.
α1 selective: phenylephrine, methoxamine, midodrine, oxymetazoline
α2 selective: α-methyldopa, clonidine
β1 selective: dobutamine
β2 selective: salbutamol (albuterol), terbutaline, salmeterol, formoterol, pirbuterol
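The receptor-to-pathway relationships summarized above lend themselves to a small lookup table. The Python sketch below encodes them as a data structure, with example agonists drawn from the lists above; the structure and function names are illustrative only.

ADRENERGIC_RECEPTORS = {
    # receptor: (G protein, downstream effect, example selective agonists)
    "alpha1": ("Gq", "activates phospholipase C, increasing PKC activity",
               ["phenylephrine", "methoxamine", "midodrine", "oxymetazoline"]),
    "alpha2": ("Gi", "inhibits adenylate cyclase, decreasing PKA activity",
               ["alpha-methyldopa", "clonidine"]),
    "beta1":  ("Gs", "activates adenylate cyclase, increasing PKA activity",
               ["dobutamine"]),
    "beta2":  ("Gs", "activates adenylate cyclase, increasing PKA activity",
               ["salbutamol", "terbutaline", "salmeterol", "formoterol"]),
}

def describe(receptor):
    g_protein, effect, agonists = ADRENERGIC_RECEPTORS[receptor]
    return f"{receptor}: couples to {g_protein}; {effect}; e.g. {', '.join(agonists)}"

print(describe("beta2"))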
When alcohol is ingested, it is absorbed directly from the proximal small intestine and distributed throughout the entire fluid space of the body. After equilibrium is reached, alcohol will be found in all tissues of the body in proportion to their water content. Urine is the most practical specimen for alcohol testing in the workplace when the purpose of testing is to demonstrate that alcohol consumption has occurred. Peak urine alcohol levels are reached 45 to 60 minutes after alcohol ingestion. At this time, urine alcohol levels are typically about 1.3 times greater than the corresponding blood alcohol concentration. This ratio is only valid during the elimination phase, which occurs after the blood alcohol level has peaked and is decreasing. Alcohol may be detected in the urine for 1 to 2 hours longer than it is detected in blood. The presence of alcohol in the urine indicates recent prior use but may not correlate with the degree of intoxication observed at the time the specimen is provided. Increments of urine continuously pool in the bladder, and each contains a different amount of ethanol. The ethanol level from such a sample relates only to the average blood alcohol concentration during the time needed for the voided urine sample to accumulate in the bladder, not to the blood alcohol concentration at the time of collection. False-negative results may be caused by the volatility of alcohol: urine alcohol concentrations may decrease 10 to 25% during each hour that a urine sample remains uncapped prior to testing. Diabetic patients who are spilling glucose into their urine and have a urinary tract infection with a fermenting organism, like Candida albicans, may have a positive test even though they did not consume alcohol. Estimation of blood alcohol concentration from urine alcohol measurements is more reliable if two urine samples are collected about 30 minutes apart. The first urine is usually discarded and the second one is used to estimate blood alcohol concentration. The limitation of a single first-void urine sample is that one does not know over what time period the urine has collected in the bladder. However, subjects who have been drinking usually do not retain urine in their bladders for an extended period because of the diuretic effect of alcohol. At least 35 states authorize urine alcohol measurements for driving-related offenses. Some regulatory and legal agencies use a 1.5:1 urine-to-blood ratio, instead of 1.3:1, to be conservative and give the benefit of any doubt to the subject. The reference value is "not detected." The specimen requirement is a freshly voided random urine sample of 25 mL. A Drug Screen Kit should be used for chain-of-custody testing.
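The 1.3:1 urine-to-blood ratio described above implies a simple elimination-phase conversion, shown in the Python sketch below together with the more conservative 1.5:1 ratio some agencies use. This illustrates the arithmetic only and is not a validated forensic calculation; the example measurement is hypothetical.

def estimate_bac_from_urine(urine_alcohol_g_dl, ratio=1.3):
    # Valid only in the post-peak elimination phase, where
    # urine alcohol is approximately ratio * blood alcohol.
    return urine_alcohol_g_dl / ratio

urine = 0.13  # g/dL, hypothetical second-void measurement
print(round(estimate_bac_from_urine(urine), 3))       # typical 1.3:1 ratio
print(round(estimate_bac_from_urine(urine, 1.5), 3))  # conservative 1.5:1 ratio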
The most common cause of death from breast cancer is not the primary tumor but metastatic disease, which occurs when the cancer travels and takes root elsewhere, such as in the brain. About 1 in 5 women with metastatic breast cancer will develop a brain lesion, and median survival for those patients is less than a year after diagnosis. Yet physicians currently have few tests to predict which breast tumors will eventually involve the brain and which will not. As it becomes more accepted that no two patients’ cancers are alike, physicians recognize that they need more “biomarkers” that can both reliably predict how the disease will progress and suggest the best method of treatment.

Just as successfully treating cancer often requires the cooperation of different disciplines, finding sufficiently predictive cancer biomarkers needs to be a collaborative effort. An ongoing University of Chicago Medicine search for a factor that can help physicians calculate the risk of brain metastasis in breast cancer patients has united researchers from neurosurgery, oncology, pathology, and Health Studies. The first fruit of that large collaboration, published late last year in the journal Cancer, is a promising biomarker with an innocuous name: KISS1.

The interest in brain metastases started in the laboratory of Maciej Lesniak, professor of surgery and neurology and director of neurological oncology. Lesniak, who often treats patients with these types of brain tumors, said that there is a gap in knowledge about what predisposes some women to this serious complication of breast cancer. “If you have breast cancer, does this automatically mean that you will develop a brain metastasis? We don’t know,” Lesniak said. “Are there any risk factors or biological phenomena behind this form of the disease? That was the question that we set out to answer.”

Fortunately, the means to test that question were available through the Specialized Program of Research Excellence (SPORE) in Breast Cancer at the University of Chicago Comprehensive Cancer Center, led by medical oncologist and Walter L. Palmer Distinguished Service Professor Olufunmilayo Olopade. The Breast Cancer SPORE maintains a bank of tissue and tumor samples that researchers could use to look for potential biomarkers. Working with Peter Pytel, assistant professor of pathology, the research team developed an assay to test levels of target proteins in tissue from metastatic and non-metastatic breast cancer patients. For the first potential biomarker, the research team led by Ilya Ulasov chose KISS1, levels of which had previously been associated with the progression of bladder, ovarian, and other cancer types.

Using antibody staining techniques, the researchers measured KISS1 levels in breast tissue from patients with cancer, non-cancerous breast tissue, and brain lesions from metastatic cancer patients. The comparison found lower levels of KISS1 protein in the brain metastases relative to breast tumors, suggesting that a reduction of this protein is associated with increased spread of cancer to the brain. Another analysis correlated KISS1 levels in the patients’ tissue samples with their clinical outcome, finding that those with higher levels of KISS1 expression exhibited slower disease progression and a reduced chance of developing brain metastases. Interestingly, the relationship between brain metastasis and KISS1 expression was not correlated with previously established breast cancer subtypes that use the estrogen receptor, progesterone receptor, and HER2 gene as biomarkers.
“KISS1 is an interesting protein that seems to at least play a role [in determining] which subset of patients go on to develop brain metastases from breast cancer,” Lesniak said. “The beauty of this paper is that it carries across different subtypes of tumors.”

However promising the data, the authors caution that their study is only the first step toward establishing KISS1 as a valid biomarker for predicting the course of metastatic breast cancer. Until the biological link between KISS1 expression and cancer progression can be determined, the relationship can’t be considered more than a correlation. But if a mechanism is discovered, Lesniak speculated that KISS1 may hold clues to a way to stop or slow brain metastases from occurring.

“The question is how can you modulate KISS1 expression for the benefit of patients,” Lesniak said. “One approach would be to restore KISS1 expression in patients with advanced metastatic breast cancer, and see whether it makes the tumor less aggressive or less prone to metastatic disease. It’s an interesting thought, but it’s probably too premature to know whether that would hold true.”

Regardless, the search for breast cancer biomarkers won’t settle for just one factor, be it KISS1, HER2, or other cellular proteins. The hope is that more and more reliable predictive biomarkers will be discovered, until a patient’s cancer can be tested and diagnosed in detail, pointing the way to effective and personalized treatment.

“I think this shows what kind of studies we have to do to get better at predicting this process,” Pytel said. “At the moment, it’s only one marker, which is not where we want it to be. But it offers hope for a future where we could come up with [a] panel of markers that would be helpful in predicting details about the progression of cancer in a patient.”
There are many different types of breast cancer. The doctor who determines what type of breast cancer you have is called a "pathologist." The type of cancer you have is called your "diagnosis." It is very important that the pathologist gives you an accurate diagnosis. It is also important that you understand your diagnosis. Your treatment will be different depending on your diagnosis.

Here's how doctors make a breast cancer diagnosis. A doctor takes a sample of your breast tissue. (This is called a "biopsy.") Then, the pathologist looks at the tissue sample. (The tissue samples that pathologists look at are also called "breast tissue slides.") The pathologist describes your cancer in a report. The report tells your "specific disease characteristics." Your disease characteristics tell what type of breast cancer you have, and they help your doctors decide what treatment to recommend. Learning about your diagnosis helps you make informed care choices. The challenging part is that your diagnosis can be hard to understand. The good news is that all of your doctors should have the important information about your disease, but it may have to be gathered from a number of places.

What You Can Do: Ask if there is more than one name for your diagnosis. For example, "breast cancer," "invasive ductal carcinoma," and "infiltrating ductal carcinoma" can all mean the same thing.

Is this the first time you have ever had breast cancer? If so, ask your doctor about each of your disease characteristics. The answers will help you understand your diagnosis. Be sure to ask what each disease characteristic means for you. You need this information to make informed treatment choices.

Have you been diagnosed with breast cancer a second time? If so, ask your doctor the same questions about the new diagnosis. Once again, the answers will help you figure out your treatment choices. The more information you have about your specific diagnosis, the more informed your treatment choice will be.

Disease characteristics are important because doctors decide which treatments to recommend based on your diagnosis. Each woman with breast cancer has a different set of disease characteristics. These characteristics help doctors predict which women will most likely benefit from each treatment. And there are some drugs that only help women with one specific characteristic. Several different disease characteristics will be listed on your pathology report. The following characteristics are the ones used most often by doctors to recommend treatment. You can learn more about these and other disease characteristics by reading Dr. Susan Love's Breast Book or visiting her web site.

Lymph Node Status: Lymph nodes are small oval glands that help your body fight infection. They also help filter the fluid that circulates throughout the body, trapping bacteria, cancer cells, and other harmful substances. If a woman's breast cancer has spread to any of the lymph nodes near her breast or under her arm, her breast cancer is considered node-positive. If a woman's breast cancer has not spread to the lymph nodes, her breast cancer is considered node-negative. Women with node-negative breast cancer have a better chance of survival than women with node-positive breast cancer, so doctors often offer more aggressive treatments to women with node-positive breast cancer.
For example, some doctors recommend stronger types of chemotherapy drugs to women with node-positive breast cancer than to women with node-negative breast cancer. Sometimes treatment recommendations are based on the number of lymph nodes that have been invaded by the cancer. For example, doctors will more likely recommend radiation therapy after mastectomy for women with a greater number of positive lymph nodes.

Herceptin® (trastuzumab) is a drug that can block HER2/neu, and it is FDA-approved to treat HER2/neu-positive node-negative or node-positive breast cancer. It is an effective treatment in many women with HER2/neu-positive breast cancer, but the drug has little effect on women with HER2/neu-negative breast cancer. Heralded as the first biologic for breast cancer and a major advance in targeted cancer therapies when first introduced, the drug has been included in breast cancer treatment in the U.S. since receiving approval for use by the FDA in 1998.

Remember: sometimes the more specific your diagnosis is, the more specific your treatment can be. It is important to use drugs that have been shown to help your type of breast cancer. And it is important not to use drugs that have not been shown to help your type of breast cancer, unless you are taking part in a clinical trial of the drug. That's because all cancer drugs have side effects, so you may be hurting your body more than helping it. It's important to learn about the risks and benefits of each treatment before making any decision about your care.

Your pathology report has important information about your cancer. Ask your doctor if a breast pathologist wrote your pathology report. If not, you might want to ask if a breast pathologist is available to look at your breast tissue or if your breast tissue slides can be reviewed at a hospital where there is a breast pathologist. Your pathology report helps your oncologist and others understand what type of cancer you have. It also helps them predict what the cancer tumor will do. And it helps your doctors and you understand what treatments may help you. Ask your doctor to explain your specific disease characteristics to you. Dr. Susan Love's Breast Book has a helpful section called "How to Interpret a Biopsy Report." She also has this information on her web site.

There are two kinds of second opinions that can help you, and you should get both kinds. Get a pathology second opinion before getting a treatment second opinion. A pathology second opinion can help you be sure that your diagnosis and disease characteristics are correct. This is very important, because doctors base their treatment advice on your pathology report. If your pathology report is wrong, you might get the wrong care. Every so often, it is difficult for pathologists to give a clear-cut diagnosis, so you may get conflicting pathology reports. In this case, it is especially important to learn as much as you can about your specific diagnosis. To get a pathology second opinion, you must have your breast tissue slides sent to a second breast pathologist. You can arrange to have this done on your own: you do not need your doctor's OK to have a pathology second opinion, but you may have to pay for it yourself.

Right now, researchers are looking for specific ways to identify different subtypes of breast cancer. They are also trying to find more targeted ways to treat specific types of breast cancer. This is a promising area of research. It holds the future of breast cancer treatment.
Your breast tumor gives important information about your disease. This information may be important to your future care. It might help you later as new treatments and drugs come out. Your tissue also contains information that can help breast cancer researchers. This is why we think it is important that you ask that your breast tissue be stored properly and that you have access to it in the future. Ask your doctors how your tissue will be stored and how you can get access to it later.

Notes:
10. The information presented in this box is adapted from the National Cancer Institute's (NCI) Physician Data Query (PDQ) database.
11. The Breast International Group (BIG) 1-98 Collaborative Group. A comparison of letrozole and tamoxifen in postmenopausal women with early breast cancer. N Engl J Med 2005 Dec 29; 353(26): 2747-57. ATAC Trialists' Group. Results of the ATAC (Arimidex, Tamoxifen, Alone or in Combination) trial after completion of 5 years' adjuvant treatment for breast cancer. Lancet 2005 Jan 1-7; 365(9453): 60-62.
12. Pauletti G, Dandekar S, Rong H, et al. Assessment of methods for tissue-based detection of the HER-2/neu alteration in human breast cancer: a direct comparison of fluorescence in situ hybridization and immunohistochemistry. J Clin Oncol 2000 Nov 1; 18(21): 3651-64. Yaziji H, Goldstein LC, Barry TS, et al. HER-2 testing in breast cancer using parallel tissue-based methods. JAMA 2004 Apr 28; 291(16): 1972-7. Chorn N. Accurate identification of HER2-positive patients is essential for superior outcomes with trastuzumab therapy. Oncol Nurs Forum 2006 Mar; 33(2): 265-72.
13. Piccart-Gebhart MJ, Procter M, Leyland-Jones B, et al. Trastuzumab after adjuvant chemotherapy in HER2-positive breast cancer. N Engl J Med 2005 Oct 20; 353(16): 1659-72. Romond EH, Perez EA, Bryant J, et al. Trastuzumab plus adjuvant chemotherapy for operable HER2-positive breast cancer. N Engl J Med 2005 Oct 20; 353(16): 1673-84. Joensuu H, Kellokumpu-Lehtinen PL, Bono P, et al. Adjuvant docetaxel or vinorelbine with or without trastuzumab for breast cancer. N Engl J Med 2006 Feb 23; 354(8): 809-20.
14. See NBCC for analyses of the two articles on Herceptin use among women with early breast cancer by Romond, et al., and Joensuu, et al. See also a fact sheet on the early stopping of clinical trials.
Muscles and joints act in a complex fashion in order to apply the forces required for everyday movement and activity. In athletic endeavors, even greater forces are applied for rapid movement, pushing, pulling, twisting, throwing and many other applications of the muscles, tendons and ligaments around the joints. Understanding the joints and how they are involved in weight training is important to understanding elements of good form in the gym and other applications of training the body for strength, muscle, and fitness.

There are approximately 206 primary bones in the human body; the exact count depends on extra bones and on which minor bones are included. The trunk, including skull, face, spine, sternum and ribs, has 80 bones, and the extremities of the shoulder, hands, arms and feet have 126 bones. Here are the main joints of the body.

The Neck, Backbone and Vertebrae. The vertebrae form the spinal column. Each part of the spine and vertebrae has a distinct name. From the top: the cervical spine, the thoracic spine, and the lumbar spine. The backbone is naturally curved; its inward curves are called 'lordosis.' At the base of the spine is the sacrum, which connects with the pelvis, including the hips.

The Shoulder. The main bones of the shoulder are the scapulae -- the large bones on either side of the upper back, better known as the shoulder blades -- and the clavicle, the collar bone that runs across either shoulder. The shoulder joint is the most complex joint, being capable of rotation as well as flexion and extension. Injuries to the shoulder joint are common, even in non-active people.

The Pelvis and Hip. The pelvis includes the pelvic bone with the bony protuberances we generally call the hip bones (iliac crests). The sacrum at the bottom of the spine joins the pelvic bones on either side via the sacroiliac joints. These joints can be the source of poor back function and pain.

The Elbow. The elbow joint allows the arm to flex -- as in 'flex your biceps.' The humerus bone at the top of the arm joins with the radius and the ulna of the forearm at the elbow joint. Tennis elbow, in which a tendon becomes inflamed, is a common overuse injury.

The Wrist. The wrist is also susceptible to overuse injury. The bones of the forearm, the radius and ulna, join with a series of small bones, forming the wrist. These small bones are called the 'carpal bones.' An overuse injury called tenosynovitis is a common wrist injury, and this can occur in weight trainers who overtrain the wrist.

The Knee. Knee injuries are very common in athletes and recreational exercisers who compete in sports that require twisting and turning -- team sports like basketball, football and hockey. Knee reconstructions can require a person to be absent from their activity for up to a year. The knee joint has a complex arrangement of articulating bones, ligaments and cartilage. Comparatively, weight training does not elicit a high number of serious knee injuries.

The Ankle. Like the knee, the ankle joint is susceptible to twisting ligament sprains, sometimes involving broken bones such as the fibula. Also like the knee joint, twisting and turning activities at speed make the joint susceptible to injury.

Weight Training and Joint Protection. Compared to many other physical activities and sports, weight training is not especially hard on the joints of the body. The shoulder is probably the joint most injured, and this is likely to be a result of the complexity of the joint and the various loads and movement paths required in weight training for the upper body.
The golden rules for joint injury protection in weight training -- or any activity, for that matter -- are:
- Practise good form.
- Start light and work up to heavier weights progressively.
- Know your limits and don't overload.
- Know your limits and don't overwork.
- Rest and recover from minor injuries.
- Seek medical diagnosis and assistance for chronic or serious acute injuries.
When patients undergo surgery, it’s inevitable that they will lose some blood, so surgical teams strive to replenish patients’ fluids over the course of an operation. But the most common technique for tracking blood volume, a catheter inserted through the heart that provides a readout on a monitor, is invasive and not particularly accurate.

According to Kirk H. Shelley, M.D., Ph.D., associate professor of anesthesiology, the flaws of catheter-based monitoring often engage operating room personnel in a delicate clinical balancing act with very high stakes. “Too little fluid can put a tremendous amount of stress on the kidneys, the cardiovascular system and the central nervous system. Organs need a certain amount of blood, and you’re risking a patient going into shock,” Shelley says. “But if you give too much fluid for the heart to pump, it backs up, causing bloating and pulmonary edema. Every day in the operating room, we try to find the right balance between these two extremes.”

Now Shelley, who as chief of ambulatory surgery takes part in about 8,000 surgeries a year at Yale-New Haven Hospital, has found a possible solution to this daily surgical dilemma that’s already very close at hand (or, more precisely, clipped to patients’ fingers) in hospitals around the world. By combining a clinical insight from the 1870s with data provided by the modern pulse oximeter, a clothespin-like clip placed on a fingertip, ear or toe to measure the oxygen level in the blood, Shelley has discovered a noninvasive, precisely quantified method to monitor blood loss and guide difficult decisions in the operating room.

The pulse oximeter has become a common sight in hospitals since it was first introduced in the 1980s. The clips contain light-emitting diodes that shine both visible red and infrared light through the skin. Because deoxygenated hemoglobin allows infrared light to pass but absorbs red light, while oxygenated hemoglobin allows red light to pass and absorbs infrared, the oximeter can detect changes in the blood’s oxygen saturation by calculating the relative absorption of red and infrared light.

Shelley, who changed specialties from internal medicine to anesthesiology in the late 1980s, began a residency in his new field just as pulse oximeters appeared on the scene. In those early days, Shelley discovered that oximetry clips generated exceedingly complex waveforms that were “cleaned up” by oximeter manufacturers in favor of clear, simple signals. But Shelley’s curiosity about the wealth of information produced by early oximeters (“One man’s artifact is another man’s signal,” he says) prompted him to devise software to sift through the raw oximetry signal for potentially valuable clinical information.

In 1873 an observant German physician, Adolf Kussmaul, coined the term “pulsus paradoxus” for a phenomenon in which blood flow drops slightly after a deep breath, a dip caused when blood remains in the lungs and doesn’t reach the heart. Shelley discovered that pulsus paradoxus produced by the mechanical ventilation that accompanies general anesthesia could be detected in the raw oximetry waveform, and that this information could be combined with other data in the waveform to precisely manage fluid replacement in surgical patients. L. Alan Carr, Ph.D., a senior licensing associate in Yale’s Office of Cooperative Research who shepherded the discovery through a patent application, says that Shelley found treasure where others saw trash.
“There’s all sorts of wild, raw data that comes off the pulse oximeter that companies have worked hard to eliminate, because it has been seen as just noise,” Carr says. “What’s ironic is that the background data actually had useful information in it.”

As a member of an active research group headed by Professor of Anesthesiology David G. Silverman, M.D., which is devoted to noninvasive monitoring, Shelley is now adapting his method for use in nonventilated patients suffering from blood loss, such as trauma patients arriving at emergency departments. He plans to mine the pulse oximeter signal for more clinical riches, explaining that his affinity for noninvasive medical gadgetry stems from watching Star Trek’s Dr. McCoy in action.

“McCoy would pass his devices over the patient and would know exactly what to do with the patient,” Shelley says. “I really think the newer generations of the pulse oximeter and the new information we’re going to get out of them are going to be like that. We’re going to continue stepwise, evolving this.”
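To make the basic red/infrared principle concrete, the conventional saturation computation can be sketched in a few lines. This is not Shelley's waveform-mining method; it is the standard "ratio of ratios" of pulsatile (AC) to baseline (DC) absorption at the two wavelengths, mapped to saturation through an empirical calibration. The linear calibration below is a commonly cited textbook approximation, not the calibration of any specific device, and the signal values are synthetic:

```python
import numpy as np

def spo2_from_waveforms(red: np.ndarray, ir: np.ndarray) -> float:
    """Estimate oxygen saturation (%) from raw red and infrared signals.

    AC component: pulsatile amplitude (peak-to-trough).
    DC component: baseline (mean) absorption.
    """
    ac_red, dc_red = red.max() - red.min(), red.mean()
    ac_ir, dc_ir = ir.max() - ir.min(), ir.mean()
    r = (ac_red / dc_red) / (ac_ir / dc_ir)   # ratio of ratios
    return 110.0 - 25.0 * r                    # empirical linear calibration

# Synthetic one-heartbeat example (values invented for illustration):
t = np.linspace(0, 1, 250)
red = 1.00 + 0.02 * np.sin(2 * np.pi * t)   # small pulsatile swing at red
ir  = 1.00 + 0.04 * np.sin(2 * np.pi * t)   # larger swing at infrared
print(round(spo2_from_waveforms(red, ir), 1))  # ~97.5
```

Shelley's insight was essentially that the beat-to-beat variation of these same AC amplitudes with mechanical ventilation, which is discarded before this calculation in commercial devices, carries the fluid-status information.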
In 2000, the Centers for Disease Control and Prevention began funding health departments to implement integrated electronic systems for disease surveillance.

Objective: Determine the impact of discontinuing provider reporting for chronic hepatitis B and C, hepatitis A, and select enteric diseases.

Methods: Laboratory and provider surveillance reports of chronic hepatitis B and C and enteric infections (Shiga toxin–producing Escherichia coli, Campylobacter, Listeria, noncholera Vibrio [eg, Vibrio parahaemolyticus], Salmonella, Shigella, and hepatitis A) diagnosed from January 1, 2007, through December 31, 2010, were compared for completeness and timeliness. The numbers of cases submitted by laboratories, providers, or both were assessed.

Results: From 2007 to 2010, the proportion of cases reported only by providers differed by enteric disease, ranging from 4% (Shiga toxin–producing E coli) to 20% (noncholera Vibrio). For chronic hepatitis C, less than 1% of cases were reported by providers only. The proportion of complete laboratory reports increased over the period from 80% to 95% for chronic hepatitis and from 92% to 94% for enteric infections. Laboratory reports had higher completeness for date of birth, sex, and zip code; provider reports had less than 60% completeness for race/ethnicity versus 20% for laboratory reports. Laboratories were faster than providers at reporting chronic hepatitis B (median 4 vs 21 days), chronic hepatitis C (4 vs 18 days), Campylobacter (6 vs 10 days), noncholera Vibrio (11 vs 12 days), Salmonella (6 vs 7 days), Shigella (6 vs 13 days), and hepatitis A (3 vs 8 days); providers were faster than laboratories at reporting Shiga toxin–producing E coli (4 vs 7 days) and Listeria (5 vs 6 days).

Conclusions: Laboratories reported more cases, and their reports were timelier and more complete for all categories except race/ethnicity, for chronic hepatitis, Campylobacter, noncholera Vibrio, Salmonella, Shigella, and hepatitis A. For chronic hepatitis, provider reporting could be eliminated in New York City with no adverse effects on disease surveillance. For enteric infections, more work is needed before discontinuing provider reporting.
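The timeliness metric above is just the median of (report date minus diagnosis date) by reporting source. A toy sketch follows; the records and field layout are invented for illustration and merely reproduce two of the medians quoted in the results:

```python
from datetime import date
from statistics import median

# (disease, source, diagnosis_date, report_date) -- hypothetical records
reports = [
    ("hepatitis C, chronic", "lab",      date(2008, 3, 1), date(2008, 3, 5)),
    ("hepatitis C, chronic", "provider", date(2008, 3, 1), date(2008, 3, 19)),
    ("Salmonella",           "lab",      date(2009, 6, 2), date(2009, 6, 8)),
    ("Salmonella",           "provider", date(2009, 6, 2), date(2009, 6, 9)),
]

def median_delay(rows, disease, source):
    """Median days from diagnosis to report for one disease and source."""
    delays = [(rep - dx).days
              for d, s, dx, rep in rows
              if d == disease and s == source]
    return median(delays) if delays else None

print(median_delay(reports, "hepatitis C, chronic", "lab"))       # 4
print(median_delay(reports, "hepatitis C, chronic", "provider"))  # 18
```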
Scientists have identified a gene in mice that plays a central role in the proper development of one of the nerve cells that goes bad in amyotrophic lateral sclerosis, or Lou Gehrig's disease, and some other diseases that affect our motor neurons. The study is the result of a collaboration by scientists at the University of Rochester Medical Center who normally focus on the eye, working together with a developmental neuroscientist at Harvard who focuses on the cerebral cortex. The work appears in the Oct. 23 issue of the journal Neuron.

The work centers on corticospinal neurons, crucial nerve cells that connect the brain to the spinal cord. These neurons degenerate in patients with ALS, and their injury can play a central role in spinal cord injury as well. These are the longest nerves in the central nervous system – nerves sometimes several feet long that run from the brain to the spinal cord. As the ends of the nerves degenerate, patients lose the ability to control their muscles.

The team led by Lin Gan, Ph.D., of Rochester and Jeffrey D. Macklis, M.D., D.HST, of Harvard showed that a protein known as Bhlhb5 is central to how the brain's progenitor cells ultimately become corticospinal motor neurons, one type of neuron that deteriorates in ALS. The same group of neurons also degenerates in patients with a rare neurological disease known as hereditary spastic paraplegia.

The work by the Harvard and Rochester scientists marks an important step in scientists' understanding of how stem cells in the brain eventually grow into the extraordinary network of circuits that make up the human nervous system. Understanding how the body determines the destiny of stem and progenitor cells is crucial if physicians are to ultimately use the cells to create new treatments for motor neuron diseases like ALS and HSP, as well as other conditions such as Parkinson's and Huntington's diseases and spinal cord injury.

Macklis' team is a world leader in discovering how the brain determines the destiny of its cells. The process is a bit like what happens on a construction site, where a foreman taps the expertise of a variety of workers – carpenters, plumbers, bricklayers, and so on – as needed to build a given structure. In the brain, teams of signaling molecules are brought together to create nerve cells out of raw material where and when needed. Hundreds of such signaling molecules are brought together instantly and continually to allow the brain to create the nerve cells it needs for growth and development.

"How does the brain take a broad class of neurons and decide which ones to send to the spinal cord, or which will connect to our visual centers?" said Macklis, who is director of the Center for Nervous System Repair at Massachusetts General Hospital and at Harvard. "We're looking at how the most sophisticated portion of the brain, the neocortex, creates the right kind of neurons where and when they're needed. Understanding how our brain circuits are initially built is the first step to repairing or reversing many diseases of the nervous system," added Macklis.

The team showed that the molecular interactions that help control the destiny of the brain's progenitor cells can take place a bit later than some scientists have considered. The team found that Bhlhb5 plays an important role in determining the fate of progenitor cells that have already exited the cell cycle and are well on their way to being refined into more precise types of cells.
The team showed that when Bhlhb5 is knocked out in mice, cells that normally develop into neurons which connect the brain to the spinal cord don't do so. Those mice share many traits with people with hereditary spastic paraplegia, also known as familial spastic paralysis.

Doctors estimate that approximately 10,000 to 20,000 Americans have some form of HSP. Symptoms vary widely, but generally patients have weakness or stiffness in their legs that often results in use of a walker or wheelchair. Most patients live full lives, but many experience a range of other difficulties, including blindness, skin problems, nerve damage in the fingers and toes, and deafness. Some patients are completely disabled, while others have little difficulty. No cure or treatment currently exists.

A next step, Gan said, would be to analyze the function of the counterpart to the Bhlhb5 gene in patients. Scientists reported recently that the gene itself is not mutated in patients with HSP, but it's possible that the effect of the gene is somehow changed, perhaps by a different genetic mutation, in some patients with HSP. Already, more than 20 gene mutations are known to cause various forms of HSP, offering an array of targets to try to treat or cure the disease.

"This is a perfect example illustrating why we study genetics in the mouse," said Gan, who is associate professor in the Department of Ophthalmology at the University of Rochester Eye Institute. "We've been able to pinpoint a gene that may play a role in a disease affecting thousands of people, and the work would have been impossible to do directly in people. We did the research in mice, and now we can go back to take a closer look in patients."

Last year, the Rochester team showed that Bhlhb5 plays a role in determining what types of neurons are created in the eye. The eye is the usual focal point for Gan, who is director of the De Stephano Laboratory for Retinal Genomics at the University of Rochester Medical Center. His team studies genes that play a role in creating the eye and keeping it healthy, and that might play a role in blinding eye diseases such as retinitis pigmentosa, macular degeneration, and glaucoma.

The work includes two corresponding authors: Macklis, who is director of the Center for Nervous System Repair of Massachusetts General Hospital and Harvard Medical School, and Gan, who is also a researcher in the Department of Neurobiology and Anatomy, the Center for Neural Development and Disease, and the Center for Visual Science at Rochester. The first author is Pushkar S. Joshi, Ph.D., Gan's former graduate student at Rochester, who is now a researcher at Stanford. Other authors include Bradley J. Molyneaux, M.D., Ph.D., formerly Macklis' graduate student at Harvard, who is now a neurology resident there; former Rochester researcher Liang Feng, Ph.D., now at Northwestern; and Rochester technician Xiaoling Xie. The work was supported by the National Eye Institute, the National Institute of Neurological Disorders and Stroke, Research to Prevent Blindness, Harvard Stem Cell Institute, the Spastic Paraplegia Foundation, and the ALS Association.
In a development that sheds new light on the pathology of Alzheimer's disease (AD), a team of Whitehead Institute scientists has identified connections between genetic risk factors for the disease and the effects of a peptide toxic to nerve cells in the brains of AD patients. The scientists, working in, and in collaboration with, the lab of Whitehead Member Susan Lindquist, established these previously unknown links in an unexpected way. They used a very simple cell type -- yeast cells -- to investigate the harmful effects of amyloid beta (Aβ), a peptide whose accumulation in amyloid plaques is a hallmark of AD. This new yeast model of Aβ toxicity, which they further validated in the worm C. elegans and in rat neurons, enables researchers to identify and test potential genetic modifiers of this toxicity.

"As we tackle other diseases and extend our lifetimes, Alzheimer's and related diseases will be the most devastating personal challenge for our families and one [of] the most crushing burdens on our economy," says Lindquist, who is also a professor of biology at Massachusetts Institute of Technology and an investigator of the Howard Hughes Medical Institute. "We have to try new approaches and find out-of-the-box solutions."

In a multi-step process, reported in the journal Science, the researchers were able to introduce the form of Aβ most closely associated with AD into yeast in a manner that mimics its presence in human cells. The resulting toxicity in yeast reflects aspects of the mechanism by which this protein damages neurons. This became clear when a screen of the yeast genome for genes that affect Aβ toxicity identified a dozen genes that have clear human homologs, including several that have previously been linked to AD risk by genome-wide association studies (GWAS) but with no known mechanistic connection.

With these genetic candidates in hand, the team set out to answer two key questions: Would the genes identified in yeast actually affect Aβ toxicity in neurons? And if so, how? To address the first issue, in a collaboration with Guy Caldwell's lab at the University of Alabama, researchers created lines of C. elegans worms expressing the toxic form of Aβ specifically in a subset of neurons particularly vulnerable in AD. This resulted in an age-dependent loss of these neurons. Introducing the genes identified in the yeast that suppressed Aβ toxicity into the worms counteracted this toxicity. One of these modifiers is the homolog of PICALM, one of the most highly validated human AD risk factors. To address whether PICALM could also suppress Aβ toxicity in mammalian neurons, the group exposed cultured rat neurons to toxic Aβ species. Expressing PICALM in these neurons increased their survival.

The question of how these AD risk genes were actually impacting Aβ toxicity in neurons remained. The researchers had noted that many of the genes were associated with a key cellular protein-trafficking process known as endocytosis. This is the pathway that nerve cells use to move around the vital signaling molecules with which they connect circuits in the brain. They theorized that perhaps Aβ was doing its damage by disrupting this process. Returning to yeast, they discovered that, in fact, the trafficking of signaling molecules in yeast was adversely affected by Aβ. Here again, introducing genes identified as suppressors of Aβ toxicity helped restore proper functioning.
Much remains to be learned, but the work provides a new and promising avenue to explore the mechanisms of genes identified in studies of disease susceptibility. "We now have the sequencing power to detect all these important disease risk alleles, but that doesn't tell us what they're actually doing, how they lead to disease," says Sebastian Treusch, a former graduate student in the Lindquist lab and now a postdoctoral research associate at Princeton University.

Jessica Goodman, a postdoctoral fellow in the Lindquist lab, says the yeast model provides a link between genetic data and efforts to understand AD from the biochemical and neurological perspectives. "Our yeast model bridges the gap between these two fields," Goodman adds. "It enables us to figure out the mechanisms of these risk factors which were previously unknown."

Members of the Lindquist lab intend to fully exploit the yeast model, using it to identify novel AD risk genes, perhaps in a first step to determining if identified genes have mutations in AD patient samples. The work will undoubtedly take the lab into uncharted territory. Notes staff scientist Kent Matlack: "We know that Aβ is toxic, and so far, the majority of efforts in the area of Aβ have been focused on ways to prevent it from forming in the first place. But we need to look at everything, including ways to reduce or prevent its toxicity. That's the focus of the model. Any genes that we find that we can connect to humans will go into an area of research that has been less explored so far."

This work was supported by an HHMI Collaborative Innovation Award, an NRSA fellowship, the Cure Alzheimer's Fund, the National Institutes of Health, the Kempe Foundation, and Alzheimerfonden.

Reference: Sebastian Treusch, Shusei Hamamichi, Jessica L. Goodman, Kent E. S. Matlack, Chee Yeun Chung, Valeriya Baru, Joshua M. Shulman, Antonio Parrado, Brooke J. Bevis, Julie S. Valastyan, Haesun Han, Malin Lindhagen-Persson, Eric M. Reiman, Denis A. Evans, David A. Bennett, Anders Olofsson, Philip L. Dejager, Rudolph E. Tanzi, Kim A. Caldwell, Guy A. Caldwell, Susan Lindquist. Functional Links Between Aβ Toxicity, Endocytic Trafficking, and Alzheimer's Disease Risk Factors in Yeast. Science, 2011; DOI: 10.1126/science.1213210
Exploratory Study of Effects of Radiation Therapy in Pediatric Patients With Central Nervous System Tumors

This study will analyze the effects of radiation given to children who have tumors of the central nervous system (CNS). Researchers want to learn more about changes in the quality of life that patients may experience as a result of radiation.

Patients ages 21 and younger who have a primary CNS tumor and who have not received radiation previously may be eligible for this study. They will have a medical history and physical examination. Collection of blood (about 2-1/2 tablespoons) and urine will be done, as well as a pregnancy test. Patients will complete neuropsychological tests, which provide information about their changes in functioning over time. An expert in psychology will give a number of tests, and the patient's parents or guardian will be asked to complete a questionnaire about the patient's behavior. Patients will also be given a quality-of-life questionnaire to complete, as well as vision and hearing tests. The radiation itself is prescribed by patients' doctors and is not part of this study.

Magnetic resonance imaging (MRI) will give researchers information about the tumor and brain through several scanning sequences. MRI uses a strong magnetic field and radio waves to obtain images of body organs and tissues. Patients will lie on a table that slides into the enclosed tunnel of the scanner. They will need to lie still, and medication may be given to help them do that. They may be in the scanner for up to 2 hours. As the scanner takes pictures, patients will hear knocking or beeping sounds, and they will wear earplugs to reduce the noise. A contrast agent will be administered to allow images to be seen more clearly.

Blood and urine tests will be conducted after the first dose of radiation. MRI scans will be done 2 weeks after patients finish radiation therapy and again at 6 to 8 weeks, 6 months, 12 months, and yearly. Also at those follow-up periods, patients will undergo similar procedures as previously, including blood and urine tests and neuropsychological testing. Patients can remain in this study for 5 years.

Condition: Diffuse Intrinsic Pontine Glioma
Study Design: Time Perspective: Prospective
Official Title: An Exploratory Study of Biologic and Pathophysiologic Effects of Radiation Therapy in Pediatric Patients With Central Nervous System Tumors

Outcome measures:
- Measure changes in angiogenesis, blood-brain barrier permeability, and neurotoxicity related to radiation of the CNS [Time Frame: before and up to 8 years after] [Designated as safety issue: No]
- Describe changes in imaging and endocrine function after radiation therapy to the brain [Time Frame: before and up to 8 years after] [Designated as safety issue: Yes]
- Monitor changes in serum proteome and germline polymorphisms [Time Frame: before and up to 8 years after] [Designated as safety issue: No]

Study Start Date: July 2006

This exploratory study will be performed in pediatric patients with CNS tumors who are undergoing radiation therapy to investigate pathophysiologic effects of radiation on the CNS. The study includes the analysis of blood, urine, and CSF (if available) to measure biological markers involved with angiogenesis, blood:brain barrier integrity, and neurotoxicity. It also entails comprehensive MR imaging techniques and neuropsychological testing in an effort to correlate changes with biomarker measurements.
Objectives:
- To detect changes in angiogenesis related to radiation of the CNS by measurement of VEGF, bFGF, thrombospondin, TNF-alpha, IL-12, IL-8, and MMP in blood and urine specimens, and by MR perfusion and DEMRI.
- To describe changes in blood:brain barrier permeability associated with radiation of the CNS.
- To characterize neurotoxicity by measuring biomarkers associated with neurotoxicity (including NF-1, NSE, S100-Beta, GFAP, and quinolinic acid) in blood and CSF; documenting changes in neurobehavioral functioning through longitudinal comprehensive assessments; describing changes in quality of life (QOL); assessing changes in memory; defining changes in ophthalmologic studies associated with radiation; and detecting changes in audiometry associated with radiation.

Eligibility:
- Patients must have a primary CNS tumor for which radiation therapy is recommended.
- Patients must be less than or equal to 21 years of age.
- Prior/Concurrent therapy: Patients will be eligible if they have not received prior radiation. Patients who have undergone surgery or received chemotherapy are eligible.
- Performance status: Patients will be eligible regardless of performance score.

This minimally invasive study is designed to explore various biologic effects of radiation on the pediatric CNS in an attempt to 1) obtain information on the pathophysiology of radiation-induced damage, 2) explore the association of neuropsychological deficits with biologic markers and neuroimaging abnormalities, 3) document changes in neurobehavioral functioning through longitudinal comprehensive neuropsychological assessments with comparison of various radiation therapy techniques, 4) describe changes in quality of life in pediatric patients who have received radiation therapy, and 5) attempt to identify children at increased risk of radiation-induced neurotoxicity.

Please refer to this study by its ClinicalTrials.gov identifier: NCT01445288.

Contact: Katherine E Warren, M.D., (301) [email protected]

Locations:
- Children's National Medical Center, Washington, District of Columbia, United States (recruiting)
- National Institutes of Health Clinical Center, 9000 Rockville Pike, Bethesda, Maryland, United States, 20892 (recruiting). For more information at the NIH Clinical Center, contact the National Cancer Institute Referral Office, (888) NCI-1937.
- Gifu University School of Medicine (recruiting)

Principal Investigator: Katherine E Warren, M.D., National Cancer Institute (NCI)
By Travis Jobe, Senior Specialist, Laboratory Systems & Standards - Vaccine-Preventable Diseases, APHL So where do the public health labs enter the story on measles? One of the key components for identifying measles cases is laboratory diagnostics. This can be especially important when immunized persons become sick with measles but don’t present with typical symptoms due to their partial immunity. However, the limited number of assays available to public health laboratories for measles diagnostics limits their abilities to detect the disease. Also, because a laboratory rarely gets a chance to run the test on real cases, it may be hard to be confident in any given assay’s performance. The APHL Recovery Act-funded Vaccine-Preventable Diseases (VPD) project has attempted to confront the question of performance of measles serology assays by offering public health laboratories the opportunity to participate in a pilot proficiency testing-like exercise. Anyone who hasn’t worked in a diagnostic laboratory probably doesn’t know what proficiency testing, or PT, is. But, essentially, PT is a panel of positive and negative samples sent to a laboratory to test and see if they get the correct results. PT is a vital part of the work that public health laboratories perform to assure that they are competent in the results they report – results that impact the treatment of people like you and me. Thirty-five public health laboratories from 30 different states participated in this measles serology PT exercise. As it turns out, the results did point out some variances in the assays public health laboratories use for measles serology testing. It just goes to show that not all assays may work perfectly for laboratory diagnosis, which shouldn’t be news to anyone in the medical or public health field. But as long as the laboratorians themselves know about and understand these limitations, the proper interpretation of laboratory results can be communicated to the epidemiologists and doctors who are confronting the measles cases – real patients – directly. Travelers with measles continue to return to the US this year from Europe and other areas of the world with ongoing measles outbreaks. The laboratories that participated in the PT exercise are already putting to use this increased knowledge of their assays’ performance as they are called upon to perform the testing for these cases. This includes the public health laboratories of Minnesota, California, New York, Pennsylvania, Washington, Texas, Florida, New Mexico, and New Jersey – just to name a few of the states that have seen imported measles this year. For the general public, not only does the public health response prevent further illness by limiting disease transmission, it also helps to save further expenses – real money. In the US, the cost to society is estimated to be well over $100,000 per measles case! This includes not just the cost of treating the sick patient but of protecting the public from further spread of disease. This doesn’t even include the cost to businesses for parents’ time off to attend to sick children and other private-sector expenses. Nor does it include the costs of raising and supporting individuals permanently injured by the disease. By immunizing the population and responding quickly to imported measles cases, further costs – and potentially deadly or debilitating illness – can be prevented. Laboratories are but one link in the chain of the public health response. 
Confidence in the performance of laboratory assays is a key component of that link. At the same time, not all public health laboratories can be expected to perform all tests for measles – or any other disease for that matter. No laboratory has unlimited resources, and public health labs are faced with limitations coming from federal, state, and local budget constraints. So, in the public health laboratory community, we need to work together to support each other where needed. Of course, the US Centers for Disease Control and Prevention (CDC) has historically provided the extra support and reference testing services that public health laboratories rely on. But with current fiscal cutbacks, laboratories cannot always count on that to be the case.

To discuss these issues in more depth, with a specific focus on measles testing included, APHL's VPD project hosted a meeting of experts in March to discuss various national laboratory capacity models that may help confront the challenges of VPD testing. The meeting participants noted that currently many public health laboratories utilize a shared-services model for mutual support; but they also acknowledged the lack of sustainability of this type of model. Having a national laboratory capacity model where each public health laboratory can maintain the baseline testing capability that it desires, while having the ability to access additional non-CDC reference services for complex testing and surge capacity, was seen by the meeting participants to be an acceptable, if not urgently needed, solution. Public health laboratory resource centers providing this type of support (reference testing, additional PT programs, and subject matter expertise) are seen as universal needs for all types of VPD testing. This type of collaborative national effort to confront existing VPD testing challenges can help public health laboratories support each other and maintain a level of preparedness that will allow them to be ready to respond to many emerging public health threats, such as importation of measles.

The World Health Organization has targeted measles for worldwide elimination. For this disease to begin a comeback in the US due to a lack of relatively small investments would be an embarrassment and a national shame. Elimination of measles is not a goal of just some poor countries overseas where the disease is still endemic. We in the public health system in the US are all part of this effort as well, whether we realize it every day or not. Now is not the time to be complacent by cutting back funding for prevention and control efforts, because to do otherwise would risk catastrophic situations and massive inputs of resources. Of course, by resources, I mean money – money that state governments currently cannot afford to spend on preventable situations.

APHL and the VPD project are doing their part within their role to maintain vigilance against measles. I hope that my efforts in this project help make our roles be successful. Now, I don't expect my grandmother from Arkansas, or the street vendors I encountered in Livingstone, Zambia, to understand all the work that I do calculating standard deviations for PT results, reviewing laboratory testing algorithms, or analyzing the factors of various laboratory capacity models for public health laboratory resource centers. But I hope they do understand the value that these efforts have on society as a whole, helping to keep us all healthy – and happily ignorant of this devastating disease.
As I said, I have never seen a case of measles. With the continued great efforts of all my colleagues in the public health field, I hope I never do.
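For readers unfamiliar with PT scoring, the "standard deviations for PT results" mentioned above typically take the form of a z-score: each laboratory's quantitative result is compared with an assigned value in standard-deviation units. The APHL exercise's actual scoring scheme is not described here, so the sketch below is a generic illustration with invented values and thresholds that follow common PT convention:

```python
def z_score(result: float, assigned_value: float, assigned_sd: float) -> float:
    """Deviation of a lab's result from the assigned value, in SD units."""
    return (result - assigned_value) / assigned_sd

results = {"Lab A": 1.10, "Lab B": 1.05, "Lab C": 1.90}  # hypothetical IgM indexes
for lab, x in results.items():
    z = z_score(x, assigned_value=1.00, assigned_sd=0.15)
    verdict = ("acceptable" if abs(z) <= 2        # common PT convention:
               else "warning" if abs(z) < 3       # |z|<=2 ok, 2<|z|<3 warning,
               else "unacceptable")               # |z|>=3 unacceptable
    print(f"{lab}: z = {z:+.2f} ({verdict})")
# Lab A: z = +0.67 (acceptable)
# Lab B: z = +0.33 (acceptable)
# Lab C: z = +6.00 (unacceptable)
```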
How Doctors Think
Jerome Groopman, M.D.
Boston: Houghton Mifflin Company, 2007, 307 pp., $34.95.

Groopman's book examines the complex thinking processes that enter into determining a patient-centered approach to a medical diagnosis. Some diagnoses are easy to reach, while others take significant time and effort to achieve. Although clinical algorithms and statistics can be useful for a run-of-the-mill diagnosis and treatment, they quickly fall apart when a physician needs to think outside of the box, because sometimes a patient's symptoms may be vague, multiple or confusing. In such cases, physicians are discouraged from thinking independently and creatively. Instead of expanding a physician's thinking in complex diagnostic situations, such algorithms and statistics actually constrain it. Despite this, a movement is in place in medical practice to base all treatment decisions on strictly statistically proven data. This is quickly becoming a protocol in many hospitals. The reliance on evidence-based medicine risks having the physician choose care solely by numbers. Physicians must remember that statistics usually embody averages but not individuals with a particular ailment. Errors in thinking can frequently occur when a physician relies totally on statistics to make a diagnosis.

There are ways that a physician can think so that (s)he can reduce the frequency and severity of clinical mistakes in judgment. The goal of the book is to show physicians how they usually think in order to determine how they can improve their thinking skills so that patient misdiagnoses can be reduced. Groopman argues for the importance of physician intuition in the diagnosis of an illness. Clinical intuition is a complex feeling that becomes refined through years of listening to patients' stories, examining thousands of patients and, most importantly, remembering when the physician was wrong in diagnosing a patient. Expertise is acquired not only by sustained practice but also by receiving feedback from the patient so that the physician can understand the technical errors and misguided decisions which occurred.

Sometimes a physician frames the patient's information wrongly by relying on useful shorthands. Physicians must not frame the diagnosis as given. A superior physician is sensitive to the patient's language and emotion while (s)he is disclosing his/her story of illness, since language is the bedrock of clinical practice. Sometimes a physician's emotions may be skewed because of the emotionality that is inherent in a patient's medical situation. This can cause errors in judgment and can lead to serious misdiagnoses if a physician does not exercise extreme care. During these times, the physician must become aware of his/her thinking patterns and recast the information that the patient initially disclosed by asking the patient a few further questions about his/her symptoms. The questions that the physician chooses to ask and how (s)he asks them will shape the patient's answers and guide his/her thinking.

When a physician offers a quick diagnosis without properly reflecting on the patient's unique medical situation, (s)he is prone to cognitive biases, such as anchoring and availability. Anchoring is a shortcut in thinking in which a physician does not consider multiple possibilities but quickly and firmly latches onto a single one, certain that (s)he has thrown the anchor down just where it needs to be.
Availability is the tendency to judge the likelihood of an event by the ease with which relevant examples come to mind. Because the physician is prone to these two cognitive errors, (s)he should pay close attention to how a patient's diagnosis is shaped and determined.

To provide quality care, the physician must strive to think broadly, making judicious decisions with limited data, neither overreacting nor being blasé about a patient's medical situation, and using words with precision and an appreciation of the patient's social context. In other words, quality of care requires that the physician become the patient's gatekeeper by knowing where and when to guide him/her. When a negative diagnosis has to be delivered, the physician should guide the patient, provide balance, raise doubts, and highlight uncertainty (if necessary). In other words, the physician should think with the patient so that a mutual understanding between the patient and physician is achieved about the therapy, its rationale, and the specifics of the treatment.

In conclusion, Groopman's approach is commonsensical and intuitive, highlighting the importance of forming a physician-patient partnership. Humane medical care cannot be achieved without ensuring that the physician communicate openly and honestly without biases. Groopman's book delicately balances the theoretical and practical aspects of medical care in the new millennium, along with its complexities and difficulties. This book is recommended for general practitioners and specialists in all fields of medicine. The book may also be helpful for a lay person who is struggling to form a partnership with his/her physician. Medical care has grown in complexity over the past twenty-five to thirty years due to technological advancements and the physician's constant time constraints. After thirty years of practicing as a physician, Groopman recognizes the benefits of treating the patient as a partner to improve his thinking, and to protect him from cognitive pitfalls. He has also learned to open his mind to understanding his patients' problems and emotional needs. There is no better way to care for those who need to be cared for.

Irene S. Switankowsky, University of Wales, Lampeter
A new finding provides fresh hope for the millions of women worldwide with oestrogen receptor positive breast cancer. Australian scientists have shown that a specific change, which occurs when tumours become resistant to anti-oestrogen therapy, might make the cancers susceptible to treatment with chemotherapy drugs. Seventy percent of breast cancer patients have oestrogen receptor positive cancer, and most patients respond well to anti-oestrogen therapies, for a few years at least. Within 15 years, however, 50% will relapse and eventually die from the disease. Dr Andrew Stone, Professor Susan Clark and Professor Liz Musgrove, from Sydney's Garvan Institute of Medical Research, in collaboration with scientists from Cardiff University, have demonstrated that the BCL-2 gene becomes epigenetically 'silenced' in resistant tumours. This process is potentially detectable in the blood, providing a diagnostic marker. Their findings are now online in the international journal Molecular Cancer Therapeutics. Epigenetics involves biochemical changes in our cells that directly impact our DNA, making some genes active, while silencing others. Epigenetic events include DNA methylation, when a methyl group - one carbon atom and three hydrogen atoms - attaches to a gene, determining the extent to which it is 'switched on' or 'switched off'. Dr Stone and colleagues have shown in human disease, as well as in several different cell models, that BCL-2 is silenced in oestrogen-resistant tumours by DNA methylation. "The main purpose of the BCL-2 gene is to keep cells alive, so when the gene is silenced, cells become more vulnerable to chemotherapy," said Dr Stone. "The next step will be to test our findings in clinical studies. We propose that if the BCL-2 gene is silenced, patients with oestrogen receptor positive breast cancer would benefit from combination therapy. In other words, tamoxifen could be used in combination with a chemotherapy drug, to kill off vulnerable tumour cells." "Excitingly, this is something that could be implemented into clinical practice very quickly, since the technology now exists to profile methylation of BCL-2 in all patients – both oestrogen responsive and oestrogen resistant patients. In addition, the proposed chemotherapy drugs are already in use." "If such a test were to be implemented, we believe it could help patients much earlier – hopefully shutting down tumours at an early stage."
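The diagnostic readout the researchers describe, profiling the methylation state of the BCL-2 promoter, is commonly summarized as a "beta value": the fraction of methylated signal at a site. Below is a minimal sketch of that summary statistic, assuming bisulfite-style methylated/unmethylated counts; the sample figures and the offset constant follow common array-pipeline conventions and are not taken from this study.

# Minimal sketch of a methylation "beta value" (0 = unmethylated, 1 = fully methylated).
# The counts below are hypothetical illustrations, not data from the Garvan study.
def beta_value(methylated: int, unmethylated: int, offset: int = 100) -> float:
    # The offset stabilizes the estimate at low signal, as in array-based pipelines.
    return methylated / (methylated + unmethylated + offset)

sensitive = beta_value(methylated=30, unmethylated=970)    # largely unmethylated promoter
resistant = beta_value(methylated=850, unmethylated=150)   # heavily methylated promoter
print(f"sensitive: {sensitive:.2f}, resistant: {resistant:.2f}")
# A beta value near 1 at the BCL-2 promoter would be consistent with silencing.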
Diet quality is positively associated with 100% fruit juice consumption in children and adults in the United States: NHANES 2003-2006

Equal contributors. Affiliations: 1 School of Human Ecology, Louisiana State University Agricultural Center, 261 Knapp Hall, Baton Rouge, LA 70803, USA; 2 USDA/ARS Children's Nutrition Research Center, Department of Pediatrics, Baylor College of Medicine, 1100 Bates Avenue, Houston, TX 77030, USA; 3 Nutrition Impact, LLC, 9725 D Drive North, Battle Creek, MI 49014, USA

Nutrition Journal 2011, 10:17. doi:10.1186/1475-2891-10-17. Published: 13 February 2011

One hundred percent fruit juice (100% FJ) has been viewed by some as a sweetened beverage, with concerns about its effect on weight. Little regard has been given to the contribution of 100% FJ to diet quality. In this study, data from the 2003-2006 National Health and Nutrition Examination Survey were used to examine the association of 100% FJ consumption with diet quality in participants 2-5 years of age (y) (n = 1665), 6-12 y (n = 2446), 13-18 y (n = 3139), and 19+ y (n = 8861). Two 24-hour dietary recalls were used to determine usual intake using the National Cancer Institute method. Usual intake, standard errors, and regression analyses (with 100% FJ consumption as the independent variable and Healthy Eating Index-2005 [HEI-2005] component scores as dependent variables), using appropriate covariates, were determined using sample weights. The percentage of participants 2-5 y, 6-12 y, 13-18 y, and 19+ y that consumed 100% FJ was 71%, 57%, 45%, and 62%, respectively. Usual intake of 100% FJ (ounces [oz]/day) among the four age groups was 5.8 ± 0.6, 2.6 ± 0.4, 3.7 ± 0.4, and 2.4 ± 0.2 for those in age groups 2-5 y, 6-12 y, 13-18 y, and 19+ y, respectively. Consumption of 100% FJ was associated with higher energy intake in the 6-12 y, 13-18 y, and 19+ y groups, and with higher total, saturated, and discretionary fats in 13-18 y participants. Consumption of 100% FJ was associated with higher total HEI-2005 scores in all age groups (p < 0.0001). In 100% FJ consumers, total and whole fruit consumption was higher and intake of added sugars was lower in all age groups. Usual intake of 100% FJ exceeded MyPyramid recommendations for children 2-5 y, but was associated with better diet quality in all age groups and should be encouraged in moderation as part of a healthy diet.
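The abstract's regression step can be approximated with an ordinary weighted least-squares fit. The sketch below is illustrative only: the file name and column names are hypothetical, and the actual analysis used the more involved NCI usual-intake method plus the NHANES complex survey design (strata and primary sampling units), which plain WLS does not capture.

import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis file and column names; not the actual NHANES variable names.
df = pd.read_csv("nhanes_usual_intake.csv")

# HEI-2005 total score regressed on 100% fruit juice intake, with example covariates,
# weighting each observation by its survey sample weight.
X = sm.add_constant(df[["fj_oz_per_day", "age", "sex", "income_ratio"]])
fit = sm.WLS(df["hei2005_total"], X, weights=df["sample_weight"]).fit()
print(fit.summary())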
Severe acute respiratory syndrome is a viral infection that causes flu-like symptoms. Severe acute respiratory syndrome (SARS) was first detected in China in late 2002. A worldwide outbreak occurred, resulting in almost 8,500 cases in 29 countries, including Canada and the United States, by mid 2003. As of mid 2006, no cases had been reported worldwide since 2004. Symptoms and Diagnosis Symptoms begin about 2 to 10 days after contact with the virus. The first symptoms resemble those of other more common infections and include fever, headache, chills, and muscle aches. Runny nose and sore throat are unusual. About 3 to 7 days later, a dry cough and difficulty breathing may develop. Most people recover within 1 to 2 weeks. However, about 10 to 20% develop severe difficulty breathing, resulting in insufficient oxygen in the blood. About half of these people need assistance with breathing. However, few people in the United States have had symptoms this severe. About 10% of infected people die. Death is due to extreme difficulty breathing. SARS is suspected only if people who may have been exposed to an infected person have a fever plus a cough or difficulty breathing. If a doctor suspects SARS, a chest x-ray is usually taken. The doctor may take a swab of secretions from the person's nose and throat to try to identify the virus. A sample of sputum may also be examined. Prevention and Treatment Travel advice from the Centers for Disease Control and Prevention (CDC) should be heeded. Wearing a mask is not recommended except for people who are in close contact with someone who may have SARS. People exposed to someone who may have SARS (such as family members, airline personnel, or health care workers) should be alert for symptoms of the infection. If they have no symptoms, they may attend work, school, and other activities as usual. If they develop fever, headache, chills, muscle aches, cough, or difficulty breathing, they should avoid face-to-face contact with other people and see a doctor. If doctors think a person may have SARS, the person is isolated in a room with a ventilation system that limits the spread of microorganisms in the air. Doctors may try treating SARS with antiviral drugs, such as oseltamivir and ribavirin, and corticosteroids. However, there is no evidence that these or any other drugs are effective. The virus eventually disappears. People with mild symptoms need no specific treatment. Those with moderate difficulty breathing may need to be given oxygen through plastic nasal prongs or a face mask. Those with severe difficulty breathing may need mechanical ventilation to aid breathing. Last full review/revision November 2009 by Marguerite A. Urban, MD
Studies converge on ALS

[Figure: motor neuron graphic, cropped from an original image by John Wildgoose, Wellcome Images, under the Creative Commons BY-NC-ND 4.0 license. Caption: Motor neurons deteriorate during the course of ALS. Researchers from the Broad, Harvard Stem Cell Institute, and Boston Children's Hospital used stem cell, RNA-sequencing, and genome-editing technologies to investigate why these particular neurons are selectively vulnerable.]

What: Researchers from the Broad Institute, the Harvard Stem Cell Institute (HSCI), and Boston Children's Hospital (BCH) used an eclectic combination of cutting-edge technologies to determine what's going wrong at the molecular level in the neurodegenerative disease amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease. Their research, published online this week in two separate Cell journals, sheds light on the mechanisms that lead to the disease and highlights potential targets for new treatments. ALS is characterized by the degeneration of motor neurons, leading to a progressive and fatal loss of muscle control. While the disease has previously been linked to various mutations in over two dozen genes, it hasn't been clear how these mutations lead to cellular degeneration, or why motor neurons are selectively affected. One of the current studies, led by Broad associate member Kevin Eggan and first authors Evangelos Kiskinis and Jackson Sandoe, used a combination of stem cell, RNA-sequencing, and genome-editing technologies to investigate this medical mystery. By creating a stem cell line of motor neurons derived from ALS patients with known genetic mutations, they were then able to observe how those cells functioned – both during the course of degeneration and when the mutations were corrected. That comparison allowed them to identify mechanisms and genes that could be targets for potential therapies. Eggan's team also worked on the second paper, along with co-senior author Clifford Woolf and co-first author Brian Wainger of BCH and HSCI. That study used stem cell lines to confirm that motor neurons from ALS patients are “hyperexcitable” – firing too easily or too often – making the neurons vulnerable and prone to cell death. By introducing a drug that blocked the hyperactivity, they were able to counteract motor neuron degeneration in the patient-derived cells. Who: Researchers from the Broad's Stanley Center for Psychiatric Research, the Harvard Stem Cell Institute, and Boston Children's Hospital conducted the study in partnership with scientists from six other institutions. In addition to Eggan, Kiskinis, and Sandoe, other Broad researchers involved include Steve McCarroll, Luis Williams, Rob Moccia, Steve Han, Ole Wiskow, Theodore Peng, Shravani Mikkilineni, Florian Merkle, Brandi Davis-Dusenbery, Michael Ziller, Justin Ichida, Nick Atwater, James Nemesh, and Bob Handsaker. Why: The Broad's Stanley Center for Psychiatric Research conducts research, on its own and in collaboration with partnering institutions, aimed at rooting out the genetic and molecular causes of neurological diseases, including ALS. The center's ultimate goal is to translate those findings into promising new therapeutics. "Emerging technologies are now enabling us to probe the biology of disease and test candidate drugs in a context that effectively mimics the cellular systems of patients," explained Eggan, who is also a principal investigator at HSCI and a Howard Hughes Medical Institute Early Career Scientist.
“By combining these approaches, we were able to identify a series of pathological mechanisms and possible drug targets in ALS – findings that we hope in time will benefit patients.” For more: Visit the Stanley Center for Psychiatric Research on the web to learn about more Broad research on neurological disease.
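The core of the first study's design — comparing patient-derived motor neurons with isogenic, gene-corrected controls by RNA sequencing — boils down to a differential-expression contrast. A minimal sketch is below; the file and sample labels are hypothetical, and a real analysis would use a dedicated tool (e.g., DESeq2 or edgeR) with replicate-aware statistics rather than a bare fold change.

import numpy as np
import pandas as pd

# counts.csv: rows = genes, columns = samples (file and labels are hypothetical).
counts = pd.read_csv("counts.csv", index_col=0)
mutant = [c for c in counts.columns if c.startswith("als_mut")]
corrected = [c for c in counts.columns if c.startswith("als_corr")]

# Library-size normalization (counts per million), then a log2 fold change
# between the patient-derived and gene-corrected lines.
cpm = counts / counts.sum(axis=0) * 1e6
log2fc = np.log2(cpm[mutant].mean(axis=1) + 1) - np.log2(cpm[corrected].mean(axis=1) + 1)
print(log2fc.sort_values().head(10))  # genes most down-regulated in the mutant neurons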
We've heard the sobering statistics: More than 65 percent of American adults are overweight or obese. Some 17 percent of children and adolescents ages 2 through 19 are overweight. And nearly 1 in 5 children ages 6 through 11 struggles with weight issues. The weight epidemic costs countless individuals their health and wellness, resulting in a public-health crisis and spawning a several-billion-dollar weight-loss industry. Clearly, many people need to lose weight to preserve their health. Fortunately, they have a number of options for doing it safely and effectively. This discussion will explain how to set and meet healthy goals for weight that can be maintained over the long term. Research data have shown it time and again: Keeping our weight in check is vital to our overall health and wellness. Overweight people who lose weight—and those who maintain a healthy weight—may reduce their risk of developing many serious medical conditions. A balanced and health-promoting lifestyle, which should include moderate physical activity, a balanced and healthful diet, adequate sleep, and effective stress management, can improve these conditions, lead to weight loss, and improve one's overall quality of life. Because weight figures so prominently in our health, wellness, and overall quality of life, it's critical that people do what it takes to manage it. The good news is that people who are overweight can gain significant health benefits by losing only 5 to 10 percent of their total weight. Because healthy people come in all shapes, sizes, and weights, many weight-management experts now avoid using the term ideal weight—a concept derived from the height-weight tables once used by insurance actuaries—and instead focus on body mass index and waist circumference (or waist-hip ratio). These measurements better represent the degree and/or location of body fat, which is most important in determining health risk. While the number on the scale can provide a general indication of whether a person's weight falls within a range that is healthy for him or her, the body mass index offers a more revealing look at someone's overall risk of disease and is widely accepted by all major health organizations for the classification of overweight and obesity. BMI is typically calculated in one of two ways: by dividing weight in kilograms by the square of height in meters, or by multiplying weight in pounds by 703 and dividing by the square of height in inches. Population data indicate that a BMI between 18.5 and 24.9 is considered healthy for most people, while those with BMIs between 25 and 29.9 are considered overweight and therefore at increased risk for developing weight-related illnesses. In general, the higher the BMI, the greater the risk. People with BMIs of 30 or greater are considered to be obese, a category that is divided into three classes based on BMI: class I (30 to 34.9), class II (35 to 39.9), and class III (40 or higher). Because the accumulation of visceral fat (internal fat that collects around the organs and midsection) is thought to be associated with higher health risks than fat located elsewhere in the body, waist circumference is another good way to assess weight-related health risks. In general, men whose waists measure more than 40 inches and women with waist circumferences larger than 35 inches are at greater risk for many serious medical conditions than people who are smaller around the middle. A healthcare provider, fitness consultant, or dietitian can help you properly determine your BMI and waist circumference, or you can calculate it yourself with the Centers for Disease Control and Prevention's BMI calculator.
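Both calculation routes and the cutoffs quoted above translate directly into a few lines of code. This is a minimal sketch of the standard formulas and categories; it is a screening aid for illustration, not a diagnostic tool.

def bmi_metric(weight_kg: float, height_m: float) -> float:
    # Weight in kilograms divided by the square of height in meters.
    return weight_kg / height_m ** 2

def bmi_imperial(weight_lb: float, height_in: float) -> float:
    # 703 times weight in pounds, divided by the square of height in inches.
    return 703 * weight_lb / height_in ** 2

def classify_bmi(bmi: float) -> str:
    # Cutoffs as described above.
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "healthy"
    if bmi < 30:
        return "overweight"
    return "obese"

print(classify_bmi(bmi_metric(82, 1.75)))   # BMI ~26.8 -> "overweight"
print(classify_bmi(bmi_imperial(150, 64)))  # BMI ~25.7 -> "overweight"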
When we cut through all the noise about weight loss, the bottom line is always the same: To lose weight, we must burn more calories than we take in. To keep it off, we must adopt permanent lifestyle changes. There are no miracle diets, surgeries, gadgets, pills, or potions that can change that physiological fact. The first half of the weight-loss formula—burning calories—requires us to expend energy. Experts recommend an increase in physical activity for anyone trying to reach a healthy weight, including bariatric surgery patients. Physical activity burns calories, helps to sustain weight loss, improves many weight-related health issues, and improves overall well-being. People attempting to lose weight or maintain weight loss should work toward engaging in 30 to 60 minutes of moderate physical activity most days of the week. A brisk walk is often just what the doctor ordered, but there are countless ways to get a comparable workout—no gym membership necessary. The key to avoiding the lose-gain cycle, also known as yo-yo dieting, is to approach weight control not as a "diet" but as a change in lifestyle. Gradually add more physical activity and build to the recommended amounts. Gradually improve nutrition habits by selecting healthful foods, eating a balanced diet, and reducing portions. Think about aspects of your life that tend to trip you up in your quest for better health. Do you manage stress with food? Do you have a good support system? Do you have clear and achievable goals? These are good questions to ask ourselves when embarking on any lifestyle change. If you have fallen victim to fad diets in the past, it is important to note that despite a popular belief to the contrary, there is no scientific evidence that yo-yo dieting causes permanent health or metabolism damage.* Most people's bodies recover quickly when they return to healthful eating and activity levels. Experts offer general suggestions to people striving to manage their weight in two areas: eating and drinking, and the mental component.

* Journal of the American Medical Association, March 12, 2008

Popular weight-loss plans and programs generally fall into several categories. Low-calorie diets (LCDs) generally follow the guidelines established by national public-health entities such as the Department of Agriculture, the American Heart Association, and the American Dietetic Association, which provide food choices that contain healthful amounts of carbohydrates, proteins, and heart-healthy fats. These guidelines enable people to tailor their eating to their individual food preferences. This approach is generally safe and effective for most people and typically results in a 5 to 10 percent reduction in body weight over a six-month period. LCDs include Weight Watchers, the Duke Diet and Fitness Center eating plan, and many medical school-based programs. Because very-low-calorie diets (VLCDs) are usually based on a special medically engineered, total meal-replacement product, people must spend significant time transitioning back to eating a balanced diet of regular foods. Those who transition from a VLCD to an LCD (and are committed to a balanced and healthful lifestyle) maximize the likelihood of maintaining their weight loss over the long term. Interestingly, clinical studies show that after one year, the weight loss associated with LCDs and VLCDs is similar. In other words, people who have followed VLCDs regain more weight, on average, than those who have followed LCDs.
Despite this, VLCDs are sometimes more beneficial for some people. VLCDs include OPTIFAST and Health Management Resources. Experts stress the importance of choosing proteins that contain heart-healthy fats whenever possible, and they don't recommend embarking on a very low-carbohydrate diet without the supervision of a physician trained in weight management. While there is no evidence that low-carb diets are bad for the heart—and some recent studies have shown them to be as effective as other types of diets—organizations including the American Heart and American Diabetes associations continue to recommend a balanced diet for the general public. Low-carb plans include Atkins, South Beach, and Zone. Very-low-fat diets include Dean Ornish and Pritikin. Because meal replacements are portion controlled and nutritionally balanced, they eliminate the need to plan and prepare meals. This can be very helpful when situations arise that interfere with planned meals and can be an excellent strategy for busy people who might otherwise skip meals or opt for fast food. Experts recommend that people considering a meal-replacement strategy for weight loss consult a registered dietitian to ensure that their overall diet is nutritionally balanced and contains the appropriate amount of calories. Substitution plans include Slim-Fast and Health Management Resources' partial meal-replacement option. Because many people report becoming bored with the selections, meal-delivery programs can be difficult to sustain over the long term. Experts recommend consulting a registered dietitian prior to beginning one to discuss nutritional adequacy and sustainable approaches to self-selected meal planning. Packaged-meal plans include Jenny Craig, Nutri-System, Seattle Sutton's Healthy Eating, and Diet-to-Go. Weight-management specialists realize that people have a lot of choices and that deciding on the path to follow can be daunting and confusing. They recommend that people look for a program that takes a sound, evidence-based approach to weight management and considers overall health and wellness from a realistic, patient-centered perspective. People should avoid diets that:

Although the precise answer varies from person to person because of different rates of metabolism (the burning of calories) and other factors, experts offer this rule of thumb: To lose approximately 1 pound per week, reduce the number of calories you currently eat each day by 500 calories and make a conscious effort to move more. In other words, to lose 1 pound, you must burn 3,500 more calories than you take in. Weight-management experts offer the following additional tips: Speak with a physician or dietitian before beginning any weight-loss program, particularly if you are in poor health or have been diagnosed with a medical condition. Getting regular medical checkups is always a good idea, so plan to schedule one before starting a weight loss and exercise plan. This is especially important for:

If you have any questions or concerns about safely managing your weight, speak with a qualified medical professional. When used in conjunction with a balanced and healthful lifestyle, several other interventions have been identified as useful weight-loss tools. Safe only under a doctor's supervision, they include: Weight-loss surgery.
Bariatric (weight-loss) surgery is typically recommended only for people who have been unsuccessful with other reasonable weight-loss efforts and whose BMIs are higher than 40 or those with BMIs higher than 35 in addition to significant weight-related health risks. Studies show that, for those who qualify, bariatric surgery is more effective in the long term than nonsurgical weight-reduction programs and that weight-related health issues (especially diabetes) are also very successfully reduced and frequently eliminated. Several types of bariatric surgeries are available today; the type chosen is dictated by a patient's health status and individual circumstances and must be decided upon by that person and his or her bariatric surgeon. Drug therapies. Medical professionals usually recommend drug therapies only for people with BMIs of at least 30—or greater than 27 in those suffering from significant weight-related health problems. Several types of drugs promote weight loss, including: Appetite suppressants. These drugs typically target the brain's hunger center and may reduce the desire to eat. While their short-term use may be effective, most medical professionals do not typically prescribe appetite suppressants for weight control because of their limited efficacy and potentially addictive nature. There are currently several prescription appetite suppressants approved for short-term use— including mazindol, diethylpropion, and phentermine—and one, Meridia(tm), approved for long-term use in treating obesity. Meridia has demonstrated reasonable efficacy in multiyear trials and works primarily by targeting the brain's satiety mechanisms, making people feel full sooner and therefore reducing their desire to consume more calories. Because its most common side effect is increased blood pressure, this drug is not recommended for people with hypertension. Lipase inhibitors. These drugs target the digestive system, not the brain, and prevent digestive enzymes from breaking down about one third of the dietary fat that is consumed so that the body cannot absorb it. The most common side effects of lipase-inhibiting drugs are "oily leakage" and loose stool, a result of the fat that passes undigested through the body. People who take these drugs must adhere to the manufacturers' specific nutritional recommendations. Xenical(tm) and alli(tm) are two lipase inhibitors. Approved for long-term use in treating obesity, Xenical is a prescription drug that prevents the body from absorbing about one third of consumed calories that come from fat. Alli(tm), the first over-the-counter weight-loss drug approved by the Food and Drug Administration, contains about half the active ingredient found in Xenical and has similar side effects. Studies have found Meridia, Xenical, and alli to be effective in promoting weight loss over a two-year period when combined with exercise and a healthful and balanced diet. Consumers should note that, except for alli, there are no FDA-approved weight-loss aids available without a prescription. This means that the other available products have not been appropriately scientifically tested for safety or efficacy, despite claims of being "clinically tested" or "doctor-approved." Consumers should not use these products. Liposuction. This procedure, which physically removes fat cells from the body, is not an approved method of weight control and is strictly intended for cosmetic enhancement. The use of liposuction as a weight-control method is extremely dangerous. 
People who have lost a significant amount of weight, however, may wish to undergo liposuction or other cosmetic procedures, such as skin removal. These procedures should be performed only by a licensed professional with considerable experience treating people who have lost a lot of weight. Fasting for weight loss should never be done. As for losing weight by other means, consumers are barraged with options in the latest, greatest miracle category, from sauna suits and other unnecessary and expensive equipment to creams claimed to dissolve fat cells and herbal compounds purported to burn calories. At best, these products create an illusion of weight loss, experts say; they are not safe, properly labeled, or medically researched. Consumers should avoid any product that is not FDA-approved. Last reviewed on 1/28/10
Epidemiology of poliomyelitis in the United States one decade after the last reported case of indigenous wild virus-associated disease. Clin Infect Dis. 1992 Feb;14(2):568-79. PMID: 1554844 Division of Immunization, National Center for Prevention Services, Centers for Disease Control, Atlanta, Georgia 30333. Poliomyelitis caused by wild poliovirus has been virtually nonexistent in the United States since 1980, and vaccine-associated paralytic poliomyelitis (VAPP) has emerged as the predominant form of the disease. We reviewed national surveillance data on poliomyelitis for 1960-1989 to assess the changing risks of wild-virus, vaccine-associated, and imported paralytic disease; we also sought to characterize the epidemiology of poliomyelitis for the period 1980-1989. The risk of VAPP has remained exceedingly low but stable since the mid-1960s, with approximately 1 case occurring per 2.5 million doses of oral poliovirus vaccine (OPV) distributed during 1980-1989. Since 1980 no indigenous cases of wild-virus disease, 80 cases of VAPP, and five cases of imported disease have been reported in the United States. Three distinct groups are at risk of vaccine-associated disease: recipients of OPV (usually infants receiving their first dose), persons in contact with OPV recipients (mostly unvaccinated or inadequately vaccinated adults), and immunologically abnormal individuals. Overall, 93% of cases in OPV recipients and 76% of vaccine-associated cases have been related to administration of the first or second dose of OPV. Our findings suggest that adoption of a sequential vaccination schedule (inactivated poliovirus vaccine followed by OPV) would be effective in decreasing the risk of VAPP while retaining the proven public health benefits of OPV.
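The rate reported in the abstract — roughly one VAPP case per 2.5 million OPV doses distributed — makes expected case counts easy to illustrate. The dose total below is hypothetical, chosen only to show how the decade's 80 reported cases are consistent with the stated rate.

DOSES_PER_VAPP_CASE = 2.5e6  # rate reported for 1980-1989

def expected_vapp_cases(doses_distributed: float) -> float:
    # Expected vaccine-associated paralytic poliomyelitis cases at the reported rate.
    return doses_distributed / DOSES_PER_VAPP_CASE

# Illustrative only: about 200 million doses at this rate would imply roughly
# 80 expected cases, in line with the 80 VAPP cases reported since 1980.
print(expected_vapp_cases(200e6))  # 80.0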
The Molecular Biology of Sickle Cell Anemia In Part I we learned that sickle cell anemia was recognized to be the result of a genetic mutation, inherited according to the Mendelian principle of incomplete dominance. Initially, you will recall, it was not clear what the actual defect was that caused sickling. Various experiments, as described at the end of Part I, indirectly narrowed down the site of the defect to the hemoglobin molecule. The most direct evidence that the mutation affected the hemoglobin molecule came from a then-new procedure known as electrophoresis, a method of separating complex mixtures of large molecules by means of an electric current. To view an electrophoresis apparatus in progress, click here. When hemoglobin from people with severe sickle cell anemia, sickle cell trait, and normal red blood cells was subjected to electrophoresis, the following interesting results were obtained. It was clear that the hemoglobin molecules of persons with sickle cell anemia migrated at a different rate, and thus ended up at a different place on the gel, from the hemoglobin of normal persons (diagram, parts a and b). What was even more interesting was the observation that individuals with sickle cell trait had about half normal and half sickle cell hemoglobin, each type making up 50% of the contents of any red blood cell (diagram, part c). To confirm this latter conclusion, the electrophoretic profile of people with sickle cell trait could be duplicated simply by mixing sickle cell and normal hemoglobin together and running the mixture on an electrophoretic gel (diagram, part d). These results fit perfectly with an interpretation of the disease as inherited in a simple Mendelian fashion showing incomplete dominance. Here, then, was the first verified case of a genetic disease that could be localized to a defect in the structure of a specific protein molecule. Sickle cell anemia thus became the first in a long line of what have come to be called molecular diseases. Thousands of such diseases (most of them quite rare), including over 150 mutants of hemoglobin alone, are now known. B. Sickle Cell and Normal Hemoglobin But what was the actual defect in the sickle cell hemoglobin? Although we will investigate this question in more detail in a later case study (Web Page on Protein Structure), for now it will be helpful at least to outline the background of the discovery of just what it was that made sickle cell hemoglobin different from normal hemoglobin. It is the story of one of the first identifications of the molecular basis of a disease. Again, Linus Pauling at Caltech, one of the most productive and imaginative of twentieth-century biological chemists (with co-workers Harvey Itano, a graduate of St. Louis University Medical School, I.C. Wells, and S.J. Singer), turned his attention to determining the actual difference between normal and sickle cell hemoglobin molecules. Breaking the protein molecules down into shorter fragments called peptides, Pauling and co-workers subjected these fragments to another separatory technique called paper chromatography. When this procedure is applied to samples of normal and mutant (sickle) hemoglobin molecules (alpha and beta chains) that had been broken down into specific peptides, all the spots are the same -- except for one crucial spot (shown darkened in the final chromatogram below), which represents the difference between sickle cell and normal hemoglobin.
The fact that the spots migrate to different places on the chromatogram indicates their molecular structures must be somewhat different. Pauling and his colleagues were convinced that the difference might be no more than one or two amino acids, but it was left to biochemist Vernon Ingram at the Medical Research Council in London to demonstrate this directly. Taking the one aberrant peptide and analyzing it one amino acid at a time, Ingram showed that sickle cell hemoglobin differed from normal hemoglobin by a single amino acid, at the number 6 position in the beta chain of hemoglobin. That one small molecular difference made the enormous difference in people's lives between good health and disease. C. Discovering the Difference Between Normal and Sickle-Cell Hemoglobin Royer Jr., W.E. "High-resolution crystallographic analysis of co-operative dimeric hemoglobin," J. Mol. Biol., 235, 657. Oxyhemoglobin PDB coordinates, Brookhaven Protein Data Bank. In overall structure, as we have already learned, a complete hemoglobin molecule consists of four separate polypeptide chains (i.e., each a long string, or polymer, of amino acids joined together end-to-end) of two types, designated the alpha and beta chains. The two alpha chains are alike (meaning they have the exact same sequence of amino acids), while the two beta chains are also alike. You can rotate the molecule around by clicking on it and holding the mouse button. Step 1: Highlight the heme Make sure you can distinguish the four subunits (the two alpha and the two beta chains). Note the relative positions of the alpha and the beta chains to each other. Hemoglobin is called a tetramer because the molecule as a whole is made up of four subunits, or parts. Find the porphyrin-based heme group and note how it is "sheltered" in a kind of groove within each polypeptide chain. Step 2: Remove outer parts of the molecule You can also switch from one to the other of several conventional modes of representing molecular structure: the space-filling, ball-and-stick, wire, and ribbon forms, by holding down the mouse button and choosing Display. As you will learn later, each gives you a different kind of information about the molecule's overall shape and some of its specific structural features. In sickle cell hemoglobin the two alpha chains are normal; the effect of the mutation resides only in the number 6 position in the two beta chains (the mutant beta chains are referred to as "S" chains, as explained in the Terminology Box below). As mentioned above, each alpha and beta polypeptide is folded around and shelters a special ring structure, the heme group, consisting of a porphyrin ring at whose center is an iron atom bound by four coordinate covalent bonds to four nitrogens of the porphyrin. It is this iron to which the oxygen binds. The whole porphyrin structure is called the prosthetic group, a general term in protein chemistry that refers to non-polypeptide portions of the molecule that are usually the functionally active sites. Click here for the heme group bound to a histidine residue. Sickle hemoglobin tutorial by Eric Martz of the University of Massachusetts The chart below summarizes some of the terminology we have encountered in discussing the various kinds of hemoglobins and their clinical manifestations. Study this chart and learn the specific meanings of these terms. They will help you keep clear exactly what aspect of sickle cell anemia, or what component of the genetic or molecular system, is being discussed.
The difference of one amino acid in the beta chains of sickle cell hemoglobin must affect the way the molecules interact with one another. Pauling made a remarkable prediction to this effect in 1949. Many years later it was shown that the amino acid that is substituted at the number 6 position in the beta chain forms a protrusion that quite accidentally fits into a complementary site on the beta chain of other hemoglobin molecules in the cell, thus allowing the molecules to hook together like pieces of the play blocks called Legos. The result is, as Pauling predicted, that instead of remaining in solution, sickle cell hemoglobin molecules lock together (aggregate) and become rigid, precipitating out of solution and causing the red blood cell to collapse. Early electron micrographs taken at the time showed dramatically that in sickle-cell hemoglobin, the molecules line up into long fibers inside the cell (see Fig. 4), forming trapezoidal-shaped crystals that have much the same shape as a sickled cell. Why this happens when oxygen tension is low and the hemoglobin becomes deoxygenated will be discussed later. It is interesting to note that in vitro (using solutions of hemoglobin extracted from red blood cells) studies of deoxygenation and reoxygenation of sickle-cell hemoglobin indicate the process is reversible; that is, as oxygen concentration is lowered, hemoglobin molecules polymerize and form crystals, but as oxygen concentration is increased again, the hemoglobin molecules can depolymerize and return to their soluble state. This can be written as: oxy-HbS (soluble) <=> deoxy-HbS (polymerized fibers). However, when similar in vivo experimental tests are run on sickle-cell hemoglobin in whole red blood cells, the process is only reversible up to a certain duration of exposure time. After several hours, the process can no longer be reversed. The reasons for this relate back to our earlier question of what was the exact effect of the mutation on the red blood cell and its contents. When a long-term sickled cell is broken open and a "ghost" prepared, even with the hemoglobin extracted, the cell retains its sickled shape. In-Text Question 5: What might you hypothesize to be the cause of this phenomenon, and how would it relate to the earlier conclusion that hemoglobin, not other cell components, is the site of the mutation's effect?
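The chemistry behind both the altered electrophoretic mobility (Part B) and the aggregation described here can be illustrated with the first eight residues of the beta chain, where the Glu6-to-Val substitution removes one negative charge and adds a hydrophobic side chain. A minimal sketch follows; the side-chain charges are rough values near neutral pH, for illustration only.

# First eight residues of the beta chain, one-letter amino acid codes.
HBA_BETA = "VHLTPEEK"  # normal (HbA): glutamate (E) at position 6
HBS_BETA = "VHLTPVEK"  # sickle (HbS): Glu6 -> Val (V)

# Approximate side-chain charges near neutral pH (H is only partially protonated).
CHARGE = {"E": -1.0, "D": -1.0, "K": +1.0, "R": +1.0, "H": +0.1}

def net_charge(peptide: str) -> float:
    return sum(CHARGE.get(aa, 0.0) for aa in peptide)

diffs = [(i + 1, a, b) for i, (a, b) in enumerate(zip(HBA_BETA, HBS_BETA)) if a != b]
print(diffs)  # [(6, 'E', 'V')] -- the single position-6 substitution
print(net_charge(HBA_BETA), net_charge(HBS_BETA))
# HbS is one full charge less negative, which shifts its electrophoretic mobility;
# the new valine is also hydrophobic, creating the "sticky" patch that lets
# deoxygenated molecules polymerize into fibers.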
The notion that sickle cell anemia results from a specific amino acid substitution in a polypeptide was given further support by the discovery, around the same time, of other hemoglobin variants with distinct molecular and physiological properties. In the mid-1940s it was found that Hemoglobin F, or fetal hemoglobin, has a different electrophoretic mobility and a different (higher) affinity for oxygen than adult hemoglobin (fetal hemoglobin is produced by the fetus during gestation and is slowly replaced by synthesis of the adult form in the first few months of life; the higher affinity of fetal hemoglobin for oxygen facilitates the transfer of oxygen across the placenta from the mother's blood to that of the fetus). Hemoglobin F was also found to have a different amino acid sequence, indeed producing a distinctive chain, the gamma chain, instead of the beta chain during most of fetal life (for more details see Stryer, p. 154). Then, in the early 1950s, two other hemoglobin-based conditions, designated Hemoglobin C and Hemoglobin D, were discovered by Harvey Itano in two separate families. These hemoglobins were also found to have different electrophoretic mobilities and different amino acid sequences, as well as unique physiological effects (not as severe, however, as sickle cell hemoglobin). To learn more about other hemoglobinopathies, click on the following website http://sickle.bwh.harvard.edu/hemoglobinopathy.html Taken together, these examples all supported the general paradigm that mutations produce alterations in the amino acid sequence of proteins that, in turn, have significant effects on the protein's function. Such a conception, coming as it did at just about the time of the development of the Watson-Crick model of DNA in 1953, helped launch the revolution in molecular biology that we are still experiencing today. We will also explore in a later case study how, at the DNA level, the genetic mutation for sickle cell hemoglobin alters the specific structure of the beta polypeptide chain.
While much progress has been made in Alzheimer's disease (AD) research, the exact cause is unknown. Many scientists believe that a buildup of two abnormal structures in the brain plays an important role. These structures are called amyloid plaques and neurofibrillary tangles. Amyloid plaques are dense, mostly insoluble clumps of protein fragments. They leave a highly damaging substance outside and around the brain's nerve cells. People with AD have a buildup of these plaques in their hippocampus. The hippocampus is involved with memory, including how immediate or short-term memories are stored as long-term memories. Your ability to function in everyday life also can be affected by an unhealthy hippocampus. Everything you do involves your ability to acquire, store, and retrieve memories. This can be anything from remembering if you ate lunch, to recognizing a loved one, to recalling if you turned off the stove. The hippocampus also is essential to both spatial memory and spatial navigation. Spatial memory is how you retain information about your surroundings. Spatial navigation involves how you travel to a destination. Research suggests early hippocampus damage may explain why AD sufferers often wander and get lost. Neurofibrillary tangles are insoluble, twisted fibers that clog the brain from the inside out. Brain nerve cells (called neurons) have a special transport system made of microtubules. It acts like railroad tracks, safely guiding and transporting nutrients, molecules, and information to other cells. An important fiber-like protein called tau is responsible for keeping those microtubules stable. Tau's chemical makeup is altered in people with AD: the threads of tau become tangled and twisted. Thus, the microtubules become unstable and disintegrate, which collapses the entire neuron transport system. This series of events may be related to the first visible sign of AD: memory loss. More research is needed to determine whether amyloid plaques, tangles, and tau are a direct cause of AD. Researchers are certain of a genetic component to AD. In the elderly, the gene most associated with the onset of symptoms is located on chromosome 19. It's called apolipoprotein E (APOE). There are several versions (alleles) of APOE. According to the National Institute on Aging, about 40 percent of people who develop AD later in life have an APOE e4 allele. A blood test can determine if you have it. But it's still not possible to predict who will develop AD. Some people with one or even two APOE e4 alleles never develop the disease. Others who get AD don't have any APOE e4 alleles. Still, an "AD gene" does increase your risk. Researchers have also identified other genes that increase risk; one of these is CD33, which causes the body to eliminate fewer amyloid plaques than it should. Scientists have long believed that amyloid plaques — and more specifically their buildup to toxic levels — likely play a key role in the degradation of brain neurons. Genetic studies of families with a history of early-onset AD have identified mutations in three different genes:
- APP (on chromosome 21)
- PSEN-1 (on chromosome 14)
- PSEN-2 (on chromosome 1)
These genes are thought to be responsible for the rare form of AD that afflicts men and women in their early 30s or 40s. They are believed to help produce amyloid protein, which forms the amyloid plaques that are the hallmarks of AD. These mutated genes do not play a role in the more common late-onset AD.
Approximately 50 percent of people who have a parent with early-onset AD will inherit the genetic mutation and develop the disease. For those young individuals in whom neither parent had early-onset AD, research has found that often a second-degree relative (e.g., an uncle, aunt, and/or grandparent) suffered from the condition.
There are many developed clinical tests and examinations that may help in determining menopause. Most of these tests involve drawing a small amount of blood from a vein. You could feel slight pain and discomfort as a result. In some cases, it could cause excessive bleeding. Infection is possible. Sometimes, though rarely, you could experience dizziness or could lose consciousness for some time. But doctors take all necessary precautions before undertaking any of these tests. Before going to the doctor for clinical tests, you may start with the readily available home self-test kits, which may not be as accurate or scientifically conclusive as tests performed in a medical institution, but which can offer good initial insight into your condition as well. Types of Self-Tests Available A number of self-tests are now being marketed to individual consumers. Most of these tests are inexpensive and easy to perform, and many offer surprisingly accurate results. The FDA has also approved many "at home" menopause test kits. It is important to keep in mind that menopause tests analyze different hormones in your body. Although each of these hormones plays an important role in your reproductive system, you should get a menopause test that you feel will provide you with the results you are most interested in. It is also suggested that tests be repeated on a regular basis in order to ensure the most accurate results possible. Follicle Stimulating Hormone Tests (FSH Tests) FSH levels and menopause are related. FSH tests measure levels of a hormone called follicle stimulating hormone that is present in your body. FSH is responsible for stimulating ovulation during your monthly cycle. FSH rises each month in order to encourage egg follicles to be released from the ovaries and travel through the fallopian tubes for fertilization. As FSH rises, levels of estrogen will drop. Once the egg has been released, your body recognizes the need to either prepare for pregnancy or produce a period, causing estrogen levels to rise and FSH levels to drop. FSH tests can tell you if your FSH levels are particularly high. A high level of FSH may indicate that your body is trying to stimulate ovulation but isn't getting anywhere with it. This is generally one of the initial signs of menopause. Normal FSH levels are typically between 5 and 25 mIU/mL. An FSH test that tells you that your FSH levels are higher than 25 mIU/mL may indicate that you are entering perimenopause, the initial stage of menopause. If your FSH levels are higher than 50 mIU/mL, then you are in menopause. Taking the Test FSH self-tests are available as both urine and saliva tests and can be purchased online or at your pharmacy. Urine tests consist of a stick that you place in your urine stream and allow to process until it produces a result. Chemicals in the test device react with FSH and produce a color. Read the instructions with the test you buy to learn exactly what to look for in this test. Saliva tests involve taking a sample of your saliva and sending it to a lab where it can be processed. Results are then mailed back to you. Urine FSH tests are FDA approved and typically about 90% accurate. Saliva tests are not as accurate, and tend to be influenced by environmental stressors, including cigarette smoke, certain foods, hormone replacement therapy, or oral contraceptives. However, saliva tests can give you an excellent idea of whether or not you should pursue further menopause testing.
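The FSH cutoffs quoted above map naturally onto a small interpretation helper. This sketch only encodes the thresholds given in this article; as noted below, FSH fluctuates from cycle to cycle, so a single reading is never diagnostic on its own.

def interpret_fsh(fsh_miu_per_ml: float) -> str:
    # Thresholds as quoted above; repeat testing and clinical context are essential.
    if fsh_miu_per_ml > 50:
        return "consistent with menopause"
    if fsh_miu_per_ml > 25:
        return "may indicate perimenopause"
    if fsh_miu_per_ml >= 5:
        return "within the typical range"
    return "below the typical range"

print(interpret_fsh(32))  # "may indicate perimenopause"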
Both of these tests should be performed on particular dates of your cycle. It is important to read the directions on the test carefully. If you are no longer menstruating, you can perform the test at any time. A follow-up test should be performed 5 to 7 days after the first test. It is also helpful to conduct a baseline test before your body begins to be menopausal; a test around age 35 should be helpful in establishing your "normal" FSH levels. Progesterone and Estradiol Tests These tests measure levels of individual sex hormones in your body. Both progesterone and estradiol, a type of estrogen, play important roles in triggering reproductive functions. Low progesterone or estrogen levels may indicate the beginning of menopause. These tests are typically saliva tests, though your doctor can also perform them using a blood sample. Hormone tests are available for order online at relatively low cost. Saliva tests may not be as accurate as a blood test, because they can only measure the amount of unbound or "free" estrogen or progesterone in the body. Your body also stores estrogen and progesterone by binding them to certain receptors, but only blood tests can measure levels of these bound hormones. Like the FSH saliva test, this test is sent in to a laboratory and then results are mailed back to you. Normal estrogen levels usually measure between 30 and 400 pg/mL. Estrogen levels lower than 30 pg/mL could indicate the onset of menopause. Things to Remember When taking a self-test for menopause it is important to remember that the results are merely an indication that you might be entering into a stage of menopause. The tests themselves cannot definitively determine if you are actually in menopause; they merely measure levels of certain hormones. An abnormal hormone level may indicate menopause, or it could be a symptom of another complication. All tests should be repeated on a fairly consistent basis, because hormone levels do fluctuate. Do not use menopause tests as a form of birth control. Even if you test positive, you could still be ovulating and can still get pregnant. You should also discuss the results of your test with your health care provider. She may be able to use these test results along with evidence of any signs of menopause in order to make a diagnosis and prescribe an appropriate menopause treatment. Some home menopause tests are identical to the one your doctor uses. However, doctors would not use this test by itself. Your doctor would use your medical history, physical exam, and other laboratory tests to get a more thorough assessment of your condition. Note that an FSH test may produce highly variable results during the time when periods are irregular just before they cease permanently. For example, a woman might skip 3 periods, and then have periods for a few months, and then skip several periods again. During this time of irregular periods the FSH level can fluctuate tremendously. Therefore, no matter what the results are, you should visit your doctor to confirm your menopausal stage and to be observed in case of extreme menopausal symptoms.
From Our 2012 Archives

Sleep Apnea May Be Linked to Nerve Damage in Diabetics

The severity of diabetic nerve damage -- called diabetic peripheral neuropathy -- is linked with the extent of sleep apnea and the degree of low blood oxygen levels that occur while patients sleep, the researchers found. People with obstructive sleep apnea subconsciously awaken many times a night -- even dozens of times an hour -- because their airways close, disrupting their breathing. Those with diabetic peripheral neuropathy may have numbness or tingling in their extremities, or damage to their major organs. The study of 234 adults with type 2 diabetes found that sleep apnea was independently associated with diabetic peripheral neuropathy even after the researchers accounted for a number of other possible factors, including obesity, ethnicity, gender, age at diabetes diagnosis, and the length of time a person had diabetes. The findings were published online ahead of print in the American Journal of Respiratory and Critical Care Medicine. "Obstructive sleep apnea is known to be associated with inflammation and oxidative stress, so we hypothesized that it would be associated with peripheral neuropathy in patients with type 2 diabetes," lead author Dr. Abd Tahrani, a clinical lecturer in endocrinology and diabetes at the University of Birmingham in England, said in a news release from the American Thoracic Society. However, while the study uncovered an association between obstructive sleep apnea and peripheral neuropathy in diabetic patients, it did not prove a cause-and-effect relationship. Further research is needed to determine the role of sleep apnea and low blood oxygen levels in the development and progression of nerve damage in patients with type 2 diabetes, and to assess the potential impact of continuous positive airway pressure treatment on diabetic peripheral neuropathy, the study authors said. Continuous positive airway pressure treatment, or CPAP, keeps obstructive sleep apnea patients' airways open while they sleep. -- Robert Preidt Copyright © 2012 HealthDay. All rights reserved. SOURCE: American Thoracic Society, news release, June 15, 2012
Circulating tumor cells are cells that break off from a cancer tumor and move into the blood stream. Doctors sometimes test for circulating tumor cells to see if breast cancer cells are active in areas of the body beyond the breast. The number of circulating tumor cells also affects a cancer's prognosis. Right now, finding 5 or more circulating breast cancer tumor cells in 7.5 milliliters (ml) of blood suggests a worse prognosis, and finding fewer than 5 circulating tumor cells suggests a better prognosis. A study used a mathematical model to analyze how higher levels of circulating tumor cells affected the prognosis of people diagnosed with breast cancer. The model suggests that the more circulating tumor cells in the blood sample, the greater the person's risk of dying from breast cancer. Some day, doctors may be able to better estimate a person's prognosis based on the specific number of circulating breast cancer tumor cells rather than looking at only whether there are more or less than 5 cells in a blood sample. The link between higher numbers of circulating tumor cells and worse prognosis was found in various types of breast cancer: - hormone-receptor-positive cancer - hormone-receptor-negative cancer - HER2-positive cancer - HER2-negative cancer Still, the link between circulating tumor cells and prognosis was stronger in breast cancers with certain characteristics. The link was the strongest in estrogen-receptor-positive, HER2-positive breast cancers. The link was the weakest in triple-negative breast cancers (estrogen-receptor-negative, progesterone-receptor-negative, and HER2-negative). While these results are interesting, more research is needed before doctors use specific numbers of circulating breast cancer tumor cells to help make treatment decisions. Stay tuned to Breastcancer.org's Research News to learn more about lab research that may lead to better ways to diagnose and treat breast cancer.
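The current clinical rule described above is a simple threshold on a cell count per reference volume. The sketch below encodes that rule, scaling an observed count to the 7.5 ml reference volume; it is illustrative only, not a clinical calculator.

REFERENCE_VOLUME_ML = 7.5
THRESHOLD_CELLS = 5

def ctc_prognostic_group(cell_count: int, sample_volume_ml: float) -> str:
    # Scale the observed count to the 7.5 ml reference volume used clinically.
    scaled = cell_count * REFERENCE_VOLUME_ML / sample_volume_ml
    if scaled >= THRESHOLD_CELLS:
        return "worse-prognosis group (>= 5 CTCs per 7.5 ml)"
    return "better-prognosis group (< 5 CTCs per 7.5 ml)"

print(ctc_prognostic_group(cell_count=3, sample_volume_ml=7.5))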
The symptoms of low testosterone (or "low T") in males can vary depending on the cause of the low level and the age at which it occurs. In male hypogonadism, a condition in which the body is unable to produce normal amounts of testosterone, symptoms may include underdeveloped genitalia, delayed puberty, and a lack of secondary sexual characteristics such as a deeper voice and facial hair. In middle-aged or older men experiencing age-related decreases in testosterone, symptoms may include low energy, depressed mood, low sex drive, and erectile dysfunction (ED or impotence). No matter what the cause, symptoms alone are not enough for a diagnosis of low testosterone: A blood test is needed to confirm that a man's testosterone level is, indeed, low. Symptoms of Hypogonadism Male hypogonadism can occur at any age, as a result of problems affecting the testicles or the pituitary gland. Signs of hypogonadism in infants include: - Ambiguous genitalia - Female genitalia (in a genetically male child) - Underdeveloped male genitalia In boys, hypogonadism is associated with delayed puberty and can cause symptoms such as: - Lack of development of muscle mass - No deepening of the voice or growth of body hair - Slow increase in size of penis or testicles - Arms and legs that grow out of proportion to the rest of the body In adult men, symptoms of hypogonadism include: - Lack of fertility - Low sex drive - Erectile dysfunction - Sparse facial or body hair - Growth of breast tissue Blood testosterone levels in men with hypogonadism are very low and do not fluctuate from day to day, the way they do in healthy men. Symptoms of Age-Related Testosterone Decline Low testosterone in men can also cause nonspecific symptoms such as: - Sleep disturbances Not all men with age-related low testosterone have — or are bothered by — symptoms. In addition, the level at which symptoms occur varies from man to man. Nonspecific signs and symptoms such as fatigue, sleep problems, and low mood can also be caused by other factors such as medication side effects, depression, and excessive alcohol use. Symptoms of Low Testosterone in Women Women also produce testosterone — in the ovaries and adrenal glands — and they also experience a normal drop in testosterone levels in the time leading up to menopause. This drop may be associated with a decrease in libido (sex drive), low energy, persistent fatigue, and depressed mood. Hypogonadism and age-related low testosterone are diagnosed with blood tests that measure the level of testosterone in the body. The Endocrine Society and the American Association of Clinical Endocrinologists recommend testing for suspected low T with a total testosterone test performed in the morning (when testosterone levels tend to be highest in young men, although this isn't necessarily the case in older men). The test is often repeated on another day if the results show a low T level. Sometimes a test for "free" or "bioavailable" testosterone is also performed. The majority of testosterone in the blood is bound to one of two types of protein — either albumin or sex hormone binding globulin (SHBG) — while a small percentage is unbound, or free. The portion of testosterone that is bound to SHBG is biologically inactive but, because the bonds between testosterone and albumin are weaker than those between testosterone and SHBG, the portion bound to albumin is biologically active. Bioavailable testosterone therefore includes free testosterone and albumin-bound testosterone. 
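The partitioning just described — bioavailable testosterone equals free plus albumin-bound testosterone — can be shown with rough illustrative fractions (free testosterone is commonly a few percent of total, with SHBG binding the majority). The numbers below are assumptions for illustration only; the clinical calculation uses measured SHBG and albumin concentrations with published binding constants.

def bioavailable_t(total_ng_dl: float, free_frac: float, shbg_frac: float) -> float:
    # Whatever is neither free nor SHBG-bound is (weakly) albumin-bound,
    # and bioavailable testosterone = free + albumin-bound.
    albumin_frac = 1.0 - free_frac - shbg_frac
    return total_ng_dl * (free_frac + albumin_frac)

# Illustrative fractions only: ~2% free, ~55% SHBG-bound.
print(bioavailable_t(total_ng_dl=500, free_frac=0.02, shbg_frac=0.55))  # 225.0 ng/dL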
A number of medical conditions and medications can raise or lower the amount of SHBG in the blood, consequently altering the amount of bioavailable testosterone. In those situations, measuring the amount of bioavailable testosterone gives a more accurate indication of the amount of biologically active testosterone in a person's system. Last Updated: 2/23/2015
In May of 2010, two communities (Truenococha and Santa Marta) reported to be at risk of vampire bat depredation were surveyed in the Province Datem del Marañón in the Loreto Department of Perú. Risk factors for bat exposure included age less than or equal to 25 years and owning animals that had been bitten by bats. Rabies virus neutralizing antibodies (rVNAs) were detected in 11% (7 of 63) of human sera tested. Rabies virus ribonucleoprotein (RNP) immunoglobulin G (IgG) antibodies were detected in the sera of three individuals, two of whom were also seropositive for rVNA. Rabies virus RNP IgM antibodies were detected in one respondent with no evidence of rVNA or RNP IgG antibodies. Because one respondent with positive rVNA results reported prior vaccination and 86% (six of seven) of rVNA-positive respondents reported being bitten by bats, these data suggest nonfatal exposure of persons to rabies virus, which is likely associated with vampire bat depredation.

Rabies is caused by single-stranded negative-sense RNA viruses in the genus Lyssavirus. Rabies virus (RABV; genotype I) is the most prolific of the 12 viral species classified within the genus, and it is responsible for greater than 55,000 human deaths annually.1 Typically, RABV is transmitted in the saliva after the bite of an infected mammal. In the Americas, bats and carnivores are the major reservoirs of RABV.2 Multiple insectivorous bat species play a role in RABV transmission to humans in the United States.3 In Latin America, RABV is transmitted principally by the common vampire bat (Desmodus rotundus), although several other Neotropical bat species play a role in RABV circulation.4–8 Rabies is the most recognized human health risk from bats in Latin America, with dual impacts for public health and agriculture.9 Wide circulation of RABV among vampire bats throughout their geographic range is shown by extensive reports of vampire bat-associated RABV infections in bats, humans, and cattle throughout Latin America.8,10–12 In Perú, outbreaks of rabies linked to vampire bat bites have been documented among populations living in the Amazon region over the past several decades.4,12–14 Approximately 81% (113 of 139) of the human rabies cases reported in Perú from 1996 to 2010 were associated with vampire bats.12,15 In April of 1996, an outbreak of rabies resulted in at least nine human deaths in two Amazonian villages. Samples from the victims were characterized as RABV associated with D. rotundus.13 Between December of 2006 and February of 2007, an outbreak involving 527 persons bitten by vampire bats claimed at least 23 deaths in southeastern Perú, all implicating D. rotundus.4 In 2009, 19 cases of human rabies transmitted by vampire bats were reported from five outbreaks located in the Amazon region.16–18 From December of 2009 to February of 2011 in the District of Nieva, 14 suspected human rabies deaths were reported, and two of the cases were confirmed by direct fluorescent antibody testing.19 Most recently, from February to July of 2011, at least 20 suspected human rabies cases (18 children and 2 adults) among indigenous persons from the Aguaruna tribe in the District of Imaza were reported.20 Common risk factors for human RABV infection in the Amazon include poor housing conditions, small population groups in remote areas, poor access to health services, and a general lack of awareness or cultural barriers regarding the transmission of rabies by bats.
There is ample evidence of frequent depredation by vampire bats on humans and livestock in the Peruvian Amazon but inadequate investigation of human response to RABV exposures among persons at risk living in this region. Rabies has the highest case fatality rate of any conventional infectious disease, approaching 100%. The likelihood of a productive rabies infection after exposure to a lyssavirus is known to depend on a variety of factors, including but not limited to dose, route of exposure, site of exposure, variant, host genetic makeup, pre- and/or post-exposure prophylaxis (PreEP and PEP, respectively), etc.21 All mammals are susceptible to lyssavirus infection, but species-level variation in susceptibility has long been recognized.2 Among reservoir species, foxes and other canids seem to be quite susceptible to RABV infection, characterized by little to no (e.g., 0–5%) rabies virus neutralizing antibody (rVNA) seroprevalence (i.e., animals developing virus neutralizing antibodies after RABV exposure) among natural populations.22,23 Contrastingly, bat populations seem to be less susceptible to RABV infection and are characterized by relatively high (e.g., 5–50%) rVNA seroprevalence in the wild.24–26 One experiment suggested that non-human primates might be resistant to infection from rabid bats,27 but bat-associated human rabies deaths worldwide show human susceptibility to lyssavirus infection by bat bite.3,28–30 The work by Bell31 argued provocatively that abortive RABV infections were readily observed and reproducible in animals, and thus, they should be considered a possible outcome for humans. Reports of human survival after rabies infection, despite clinical presentation, are quite rare in the literature.32–37 However, a recent case of presumed abortive rabies infection in the United States highlights a rare event of human survival after presentation of clinical symptoms and minimal intervention treatment.38 The objective of this study was to investigate risk factors for bat and RABV exposure in Amazonian communities that were suspected to be at high risk of vampire bat depredation based on their proximity to recent outbreaks in Perú and rural living conditions. A survey questionnaire was used to capture demographic information of the study populations, salient details of any previous bat exposure, and self-reported vaccination history among respondents sampled. Two communities were surveyed in May of 2010 in the Province Datem del Marañón in the Loreto Department of Perú (Figure 1). Samples were collected as part of a survey to evaluate bat–human interactions and rabies risk in the Amazon. The survey protocol was approved by the Centers for Diseases Control and Prevention (United States) and the Hospital Nacional Dos de Mayo (Perú) Institutional Review Boards in compliance with all applicable federal regulations governing the protection of human subjects. Both communities could only access the nearest health post in the town of San Lorenzo by river. The community of Truenococha is in the District of Pastaza, had an estimated population of 111, is of mestizo (i.e., mixed) cultural descent, is approximately 2 hours from the nearest health post by motorized boat (~50 km), and was raising approximately 12 head of cattle at the time of the study. 
The community of Santa Marta is in the District of Cahuapanas, had an estimated population of 205, is of indigenous cultural descent, is located approximately 6–8 hours from the nearest health post by motorized boat (~85 km), and was not raising any cattle at the time of the study. Residents from both communities were enrolled in a knowledge, attitudes, and practices (KAP) survey about prior bat exposure, health-seeking behaviors, knowledge of rabies, and prior rabies PreEP or PEP. A 2- to 5-mL blood sample was collected from consenting respondents by an aseptic technique. Sera were separated by centrifugation, stored in liquid nitrogen until transfer to −80°C, and shipped from Perú to the Centers for Disease Control and Prevention in Atlanta, Georgia.

Sera were screened for rVNA by the rapid fluorescent focus inhibition test (RFFIT) as described.39 Briefly, sera were screened individually at a 1:5 and 1:25 dilution against a constant dose of rabies virus (50 focus forming doses [FFD50]). Sera that exhibited complete neutralization of RABV at a 1:5 dilution were considered rVNA positive using the recommendations of the Advisory Committee on Immunization Practices (ACIP).21 Sera that were rVNA positive were screened at additional dilutions up to 1:390,625 to determine endpoint titers using the Reed–Muench method.40 These data were converted to international units per milliliter (IU mL−1) by comparison with the US Standard Rabies Immune Globulin (SRIG; Laboratory of Standards and Testing, Food and Drug Administration, USA) diluted to 2 IU mL−1. Sera were screened for RABV ribonucleoprotein (RNP) immunoglobulin M (IgM) and IgG antibodies by the indirect fluorescent antibody (IFA) test as described41 using affinity-purified, fluorescein-labeled goat anti-human antibodies (Kirkegaard & Perry Laboratories, Inc., Gaithersburg, MD). Sera were screened individually at dilutions of 1:4 to 1:128 with IgM or IgG preparations, respectively. A positive reaction at any of these dilutions was considered evidence of RABV-specific IgM or IgG antibodies.

Separate analyses were conducted with survey data. The first analysis stratified all respondent data by exposure history to bats (i.e., with exposure defined as a bat bite or scratch or touching a bat with unprotected skin) to evaluate risk factors for bat exposure. The second analysis focused only on individuals from whom a serum sample was obtained and tested, with stratification of respondent data regarding bat exposure by serological status to evaluate risk factors for rabies virus exposure. All statistical analyses were performed using SAS v.9.3 (SAS Institute, Cary, NC). Fisher exact test was used to evaluate associations (α = 0.05) between the response (stratification) variable (i.e., bat exposure or serological status) and factors such as community of residence, age, sex, education level, and self-reported bat exposure history, which includes subcategories of bite, scratch, and/or touching a bat with unprotected skin.
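Two of the quantitative steps just described lend themselves to short illustrations in Python (the study's own analyses were run in SAS; the function names below are hypothetical). First, the endpoint titration: a simplified version of the proportional-distance interpolation at the heart of the Reed–Muench method, followed by conversion to IU mL−1 against the SRIG standard diluted to 2 IU mL−1:

```python
import math

def reed_muench_endpoint(dilutions, pct_neutralized):
    """Estimate the 50% endpoint (reciprocal dilution) by interpolating
    between the two dilutions that bracket 50% neutralization.

    dilutions: reciprocal serum dilutions in increasing order, e.g. [5, 25, 125]
    pct_neutralized: percent neutralization observed at each dilution
    """
    for i in range(len(dilutions) - 1):
        above, below = pct_neutralized[i], pct_neutralized[i + 1]
        if above >= 50 > below:
            # proportional distance between the bracketing dilutions
            pd = (above - 50) / (above - below)
            step = math.log10(dilutions[i + 1] / dilutions[i])
            return 10 ** (math.log10(dilutions[i]) + pd * step)
    raise ValueError("50% endpoint not bracketed by the tested dilutions")

def titer_to_iu_per_ml(sample_titer, srig_titer, srig_iu_per_ml=2.0):
    """Normalize a sample's endpoint titer against the SRIG standard."""
    return srig_iu_per_ml * sample_titer / srig_titer

# Illustrative run: sample and standard titrated over the same dilutions
sample = reed_muench_endpoint([5, 25, 125], [90, 60, 20])
srig = reed_muench_endpoint([5, 25, 125], [95, 50, 10])
print(round(titer_to_iu_per_ml(sample, srig), 1))  # ~3.0 IU/mL
```

Second, the association testing: a Fisher exact test on a 2×2 table. SciPy's fisher_exact returns the odds ratio and p value directly; the counts here are placeholders, not the study's raw data:

```python
from scipy.stats import fisher_exact

# Rows: bat-exposed / not exposed; columns: community A / community B
table = [[60, 13],
         [25, 50]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```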
Among the total community population, 23% (73 of 316) of persons had exposure to bats. Although this was a biased sample, 54% (50 of 92) of persons interviewed reported being bitten by bats previously. Among the surveyed populations, several risk factors for exposure to bats were identified (Table 2). The most significant risk factor for exposure to bats was community of residence, with Santa Marta having a greater proportion of exposed persons (odds ratio [OR] = 8.71, P < 0.001). Other significant factors included age, with persons aged 25 years or younger having a greater risk of bat exposure (OR = 3.59, P = 0.038), and households reporting pets or livestock bitten by bats (OR = 8.83, P < 0.001). Furthermore, households with more than five family members (OR = 3.41, P = 0.03) were at higher risk of bat exposure. Contrastingly, persons who reported living at their house for less than 1 year were at a lower risk for bat exposure compared with other respondents (OR = 0.20, P = 0.02). Among 63 sera obtained from individual respondents (age range = 2–62 years, mean = 29 years; overall male to female ratio was 1.85), 11% (7 of 63) showed an rVNA titer (range = 0.1–2.8 IU mL−1). From the IFA test, RABV RNP IgM and IgG antibodies were detected in 4 of 63 samples (Table 3). The rVNA seroprevalence was lower in Truenococha (5%; 1 of 19) compared with Santa Marta (14%; 6 of 44), contrasting the trend in IFA antibody seroprevalence in Truenococha (11%; 2 of 19) and Santa Marta (5%; 2 of 44). Among seropositive respondents from either community, all (9 of 9) reported bat exposure, which was defined as a bat bite, scratch, or direct contact with unprotected skin. Furthermore, 75% (6 of 8) of unvaccinated seropositive respondents reported a history of a bat bite (Table 3). Only one seropositive respondent reported having received rabies PEP, although vaccination history details could not be elicited from two other seropositive respondents. Seropositive status of an individual was associated with age, with persons aged 29 years or less being at significantly lower risk of being seropositive (OR = 0.08, P = 0.01) (Table 4). Seropositive status was not associated with community of residence, gender, or education level. Although a greater proportion of seropositive persons reported bat exposure (9 of 9; 100%), including bat bite (7 of 9; 78%) or touching a bat (5 of 9; 56%), the differences were not significant compared with the reported bat exposure of seronegative persons (36 of 48; 75%), including bat bite (31 of 48; 65%) or touching a bat (16 of 48; 33%) (Table 4). Despite a wealth of studies documenting natural seroprevalence among wildlife reservoirs, few prior studies have reported natural human seroprevalence to RABV. One study showed rVNA among 7% (2 of 30) of sera from raccoon hunters in Florida, although at low titers (~0.1 IU mL−1).42 Another study, among Canadian Inuit hunters having animal contact but no vaccination history for RABV, also detected rVNA in 29% (9 of 31) of individuals.43 However, titers in that study were also uniformly low (< 0.1 IU mL−1). A later study among fox trappers in Alaska reported rVNA among 12% (3 of 26) of individuals.44 Two of three seropositive trappers had a previous vaccination history.
The single seropositive Alaska fox trapper who had not received rabies vaccine previously had a high rVNA titer (2.3 IU mL−1), perhaps associated with a 47-year history of trapping and skinning foxes (without personal protective equipment) and a cumulative harvest of over 3,000 foxes. During a human rabies outbreak investigation in the Department of Amazonas in Perú in 1990, 17% (8 of 48) of persons in two affected communities were seropositive for rVNA, one of whom later died.14 In the study by Lopez and others,14 the median rVNA titer among the seven surviving persons was 0.18 IU mL−1 (range = 0.14–0.66 IU mL−1), whereas the person who died had a titer of 7.6 IU mL−1 at the time of sampling. Because the study by Lopez and others14 did not detect a statistical relationship between antibody concentration and either the age of the individuals or exposure to bats, all of the positive rVNA titers among the seven survivors were considered to be nonspecific. Despite the potential for low-titer, false-positive neutralizing antibody results caused by nonspecific inhibition of virus growth, a recent review found no evidence of such inhibition at serum dilutions of 1:25 or greater in serological neutralization assays, although those observations were among non-indigenous persons.45 In the current study, a 50% reduction of fluorescing fields at a 1:25 serum dilution would have resulted in a titer of 0.2 IU mL−1, and given that six of seven rVNA titers were greater than 0.2 IU mL−1, these data do not suggest a high potential for nonspecific inhibition. The single respondent with an rVNA titer below 0.2 IU mL−1 (and RNP IgG titer of 1:8) in this study also reported a history of vaccination (Table 1), which is a more parsimonious explanation for her seropositive status. It is also noteworthy that none of the respondents in either community (seropositive or otherwise) reported the preparation or consumption of bats as a food source. The observation of unvaccinated seropositive respondents, in the context of a self-reported history of bat bites in an area endemic for vampire bat rabies, suggests that RABV exposure is not invariably fatal in humans. A genetic basis for susceptibility and immunological response to rabies has been shown previously in mice.46–48 Although it is possible that certain isolated and remote populations in the Amazon region may be genetically and immunologically unique,49,50 two studies have also found signatures of gender-specific genetic admixture in certain Amazon populations and suggest that historical social policies have strongly influenced the migration of persons to and genetic mixing within the Amazon region.49,51 Genetic comparisons of immunological markers and relevant inducible responses (e.g., humoral and cellular response to rabies vaccination)52 from populations in urban areas and throughout the Amazon region of Perú may shed additional light on whether certain indigenous populations show evidence of natural selection for enhanced nonspecific or specific immunological responses and genetic resistance to rabies infection. Individual immune response to natural infection with RABV may include virus-specific binding and neutralizing antibodies depending on factors such as viral dose, degree of replication in the periphery, and successful entry and replication in the central nervous system (CNS).
Antibodies to both the RABV glycoprotein and the RNP have a proven role in the immune response after vaccination.53,54 Based on patient histories in the United States, RABV RNP binding antibodies are typically detected first by IFA in response to clinical infection, and rVNA may or may not be induced.38 These observations suggest an early response of antibodies to RNP relative to the induction of rVNA during CNS infection. Depending on the degree of peripheral replication, there may be infected cells budding intact virions, or apoptosis of infected cells presenting RNP. Although the data presented in this study cannot conclusively differentiate between scenarios of abortive peripheral viral infection or clearance of a small viral dose insufficient to establish infection, the seropositive responses show exposure to RABV in the absence of vaccination. The presence of rVNA in unvaccinated subjects implies prior viral exposure but not necessarily viral replication, which can be shown by the induction of rVNA responses to even a single dose of inactivated rabies vaccine.55 However, given that rabies vaccination is accomplished with large doses of purified inactivated RABV virions, it remains unclear whether replication is a prerequisite for induction of humoral or cellular responses to natural exposures involving smaller doses of street RABV. In an experimental infection of bats with varying doses of RABV, low-dose RABV exposures did not lead to productive CNS infection, and apparently, they were cleared by an immune response in the periphery.56 Previous studies have shown that RABV-specific antibodies are not uniformly induced in the serum or cerebrospinal fluid (CSF) of clinical human rabies cases who do not receive rabies vaccine or immune globulin treatment, with greater probabilities of serological detection in patients with longer morbidity periods (i.e., days alive after onset of clinical symptoms).57–59 This report identifies a higher risk for bat exposure among young persons, despite finding a greater risk of rabies virus exposure (i.e., seropositive status) among older persons. It is plausible that multiple low-dose RABV exposures are needed to induce the rVNA responses observed in this study, consistent with the observed correlation of seropositive status with age. Evidence of RABV-specific antibodies in serum and CSF of subjects who did not receive rabies vaccine or immune globulin has been interpreted as evidence of viral replication and an abortive infection.33,38 The data in this study are inconclusive with regard to abortive infection in the seropositive respondents, because CSF samples were not collected, thus precluding evidence of RABV invasion into the CNS. Responses to interview questions about prior or current illness (and associated symptoms) did not support a history of CNS infection among respondents in this study. Innate immunity is typically another important component in combating viral infections.
Prior studies have suggested that street RABVs tend to evade induction of the host innate immune response, particularly interferon and inflammatory pathways.60 This finding is consistent with the observations of an inverse relationship between RABV virulence and the degree of viral replication, with highly pathogenic RABVs showing limited levels of replication, G protein accumulation, and apoptotic signaling in infected neurons.61,62 Although one study showed that a bat (street) RABV replicates efficiently in nonneuronal cells,63 it has been suggested that limited replication in the periphery may be an adaptation of highly neuroinvasive street RABVs to minimize a peripheral host immune response in vivo, thus facilitating entry into the CNS.61 Minimal immune induction in the periphery may be expected under scenarios of successful street RABV infection (i.e., CNS invasion); however, none of the respondents reported symptoms suggestive of CNS involvement. Rather than invoking peripheral viral replication as a requisite to the induction of rabies-specific serum antibody, one could also consider a "dirty bite" hypothesis. Little is known about (1) the population of RABV particles transmitted in the saliva during an animal bite and (2) what other substances or organisms may also be present. It is unrealistic to assume that homogeneous populations of completely intact RABV virions are passed in the saliva, particularly given reports of defective interfering (DI) particles.64 Furthermore, it cannot be ruled out that there are other properties or normal flora organisms associated with saliva from an animal bite that contribute to induction of a nonspecific innate and inflammatory immune response to the wound in the absence of peripheral RABV replication. These data highlight important complexities concerning the interpretation of serology, which is currently the only diagnostic tool that has been successful in antemortem diagnosis of nonfatal cases of human rabies infection. Prior vaccination history could confound the interpretation of the serological data in this study. Human rabies cell culture and nerve tissue vaccines are inactivated and do not replicate in recipients,65 but they induce robust rVNA responses.66,67 Equivocal evidence has been published regarding induction of non-neutralizing RNP antibody after rabies vaccination.59,68,69 It is notable that suckling mouse brain vaccine (SMB) is used in Perú for rabies PreEP and PEP, although PreEP is typically restricted to persons at occupational risk of infection. Only one seropositive respondent reported receiving rabies PEP. Data were unclear regarding self-reported prophylaxis among two other respondents. Given the remote location of these villages, our collaboration with personnel from the nearest health post that would have administered PEP during an intervention, and the vaccination history reporting among other respondents, it is unlikely that the other eight seropositive respondents received rabies PreEP or PEP.
Furthermore, persons living in this region often do not understand the real significance of being bitten by a vampire bat and are unlikely to seek medical assistance after a bite or may actively avoid modern medical care because of traditional beliefs.70 Prior reports of human rabies outbreaks in the Amazon, including 11 cases in the Department of Loreto in Perú in 1995, the results of this study, and nearby recent vampire bat-associated outbreaks suggest a high rabies risk in the Peruvian Amazon (Figure 1).12,15 Vampire bats principally feed on cattle, turning to other mammals (including humans) when livestock is not widely available.71,72 Seasonal transmission of RABV from vampire bats to humans and cattle purportedly occurs shortly after the onset of the rainy season.14,73,74 However, outbreaks during the dry season have also been reported.75,76 Regardless of season, several reports indicate stronger coincidence of vampire bat depredation on humans after the elimination of livestock, such as pigs or cattle.14,74,77 In the current study, despite the observation that nearly equal proportions of exposed and non-exposed respondents reported owning pets or livestock (Table 2), exposed persons were more likely to report that their pets or livestock had been bitten by bats, presumably with greater risk when the bitten animals are confined close to one's residence. Interestingly, Santa Marta respondents exhibited nearly a ninefold greater risk for bat exposure, which may be influenced, though not exclusively, by other risk factors identified in the survey, including a greater proportion of younger persons and a greater proportion of pets or livestock bitten in Santa Marta (Table 1). However, other factors not captured in the survey may also contribute to this observation, such as the absence of cattle in Santa Marta, asymmetry in the proximity of bat roosts to these communities, or some other unique ecological feature, although it is relevant to note that a greater proportion of Truenococha respondents reported entering a bat refuge (Table 1). Greater household size also contributed to increased risk for bat exposure, and it is possible that larger families have a greater proportion of young children, leading to greater risk of bat exposure. It is clear that there are a variety of factors that can influence individual and household risk of bat exposure in this region. Greater replication and geographic representation of communities in the Amazon would help identify the most robust factors that contribute to geospatial variation in risk for bat and RABV exposure. Through evidence presented in this study and one earlier report,14 it is evident that a substantial fraction of the human population living in remote areas of the Peruvian Amazon experiences regular depredation by vampire bats and likely exposure to RABV. Seasonal patterns of vampire bat depredation in this region of the Amazon have not been well-characterized, and the majority of persons interviewed did not identify any apparent seasonality. Regardless, it is plausible that some individuals experience nonfatal exposure to RABV by vampire bat bites, with subsequent exposures leading to an immunological boost or anamnestic response. In any case, all persons living in these communities should be considered for rabies prophylaxis as part of any subsequent intervention. New paradigms, such as rabies PreEP for Amazon populations at risk, may be necessary to prevent and control rabies in such unique ecological circumstances.
In closing, it is relevant to recognize that the number of recognized lyssaviruses has increased substantially in recent decades. Pre-1980, traditional nomenclature recognized just four Lyssavirus genotypes (i.e., RABV, Lagos bat virus [LBV], Duvenhage virus [DUVV], and Mokola virus [MOKV]), whereas there are now 12 recognized species within the genus, 11 of which are presumed to have bats as the primary reservoir host.78 Although earlier studies questioned the pathogenicity of certain subsets of lyssaviruses, namely the phylogroup 2 viruses (e.g., LBV and MOKV),79 experimental studies have shown that phylogroups 1 and 2 lyssaviruses are pathogenic in animal models, including bats,80–83 and human infections with phylogroups 1 and 2 lyssaviruses are reviewed in the work by Banyard and others.84 The absence of human infections linked to certain lyssaviruses should not be misinterpreted as variation in the pathogenicity of those lyssaviruses, for three reasons: (1) a near absence of any systematic surveillance system for reporting and detecting human rabies infections in many parts of the world where these viruses are endemic,85 (2) the overwhelming burden of canine-associated human RABV infections in many of these same places (i.e., throughout Africa and Central and Southeast Asia), which could obscure less frequent bat-associated infections,1 and (3) the excellent sensitivity of the current gold standard fluorescent antibody test to detect any lyssavirus infection but the inability of this test to type the specific lyssavirus implicated in infection. Although it is likely that there are undiscovered lyssaviruses in the Old World (figure 1b in the work by Rupprecht and others78), RABV is the only lyssavirus known to be present in the New World. Despite the advent and transfer of technologies such as monoclonal antibody typing and nucleic acid detection and sequencing methods throughout the Americas, along with regional campaigns for canine rabies elimination and the increased characterization of human rabies infections undertaken to achieve this goal, no new lyssaviruses have been discovered in the Americas.5,11,86,87 For these reasons, the hypothesis that the results in this study reflect cross-reactivity to an undiscovered lyssavirus lacks supporting evidence, although it cannot be ruled out. However, hypotheses that undiscovered, non-pathogenic lyssaviruses could explain the non-lethal rabies exposures shown in this study seem unsubstantiated, given the evidence of lyssavirus pathogenicity in humans and animals worldwide.

We are grateful for the participation of all respondents in the study. The authors thank Ivan Vargas, Jose Peña, Juan Ramon Meza, and the San Lorenzo Ministry of Health post for technical assistance in the field. They also thank Carolina Guevara from the Virology Department of Naval Medical Research Unit-6 for technical assistance. The authors thank James Ellison for technical assistance in the laboratory and Xianfu Wu for insightful discussion. The authors thank two anonymous reviewers for constructive comments that improved this manuscript. Disclaimer: Use of trade names and commercial sources is for identification only and does not imply endorsement by the US Department of Health and Human Services.
The views expressed in this article are the views of the authors and do not necessarily reflect the official policy or position of the Ministry of Health of Perú, the Centers for Disease Control and Prevention, the Department of the Navy, the Department of Defense, or the US Government. None of the authors have a financial or personal conflict of interest related to this study. The corresponding author had full access to all data in the study and final responsibility for the decision to submit this publication. Some of the authors are employees of the US government. This work was prepared as part of their official duties. Title 17 USC §105 states that "copyright protection under this title is not available for any work of the United States Government." Title 17 USC §101 defines a US Government work as a work prepared by a military service member or employee of the US Government as part of that person's official duties. Financial support: Funding for the study was provided by a collaborative Centers for Disease Control and Prevention–University of Georgia Seed Award. Authors' addresses: Amy Gilbert, Brett Petersen, Sergio Recuenco, Michael Niezgoda, and Charles Rupprecht, National Center for Emerging and Zoonotic Infectious Diseases, Centers for Disease Control and Prevention, Atlanta, GA, E-mails: fcj6@cdc.gov, ige3@cdc.gov, fni9@cdc.gov, man6@cdc.gov, and cyr5@cdc.gov. Jorge Gómez, Dirección General de Epidemiología, Ministerio de Salud, Lima, Perú, E-mail: jgomez@dge.gob.pe. V. Alberto Laguna-Torres, Virology Department, US Naval Medical Research Unit 6, Lima, Perú, E-mail: alberto.laguna@med.navy.mil.
Volume 10, Number 12—December 2004 Risk Factors for Alveolar Echinococcosis in Humans We conducted a case-control study to investigate risk factors for acquiring autochthonous alveolar echinococcosis in Germany. Forty cases and 120 controls matched by age and residence were interviewed. Patients were more likely than controls to have owned dogs that killed game (odds ratio [OR] = 18.0), lived in a farmhouse (OR = 6.4), owned dogs that roamed outdoors unattended (OR = 6.1), collected wood (OR = 4.7), been farmers (OR = 4.7), chewed grass (OR = 4.4), lived in a dwelling close to fields (OR = 3.0), gone into forests for vocational reasons (OR = 2.8), grown leaf or root vegetables (OR = 2.5), owned cats that roamed outdoors unattended (OR = 2.3), and eaten unwashed strawberries (OR = 2.2). Sixty-five percent of cases were attributable to farming. Measures that prevent accidental swallowing of possibly contaminated material during farming or adequate deworming of pet animals might reduce the risk for alveolar echinococcosis. Human alveolar echinococcosis is caused by the larval stage (metacestode) of the fox tapeworm Echinococcus multilocularis, which usually develops in the liver of infected persons. Slow larval growth results in an asymptomatic phase of several years before diagnosis. When left untreated, the condition is lethal (1). Although modern treatments have considerably improved survival, complete cure is rare (2–4). E. multilocularis occurs widely in many arctic and temperate zones of the northern hemisphere (5). Its life cycle is predominantly sylvatic; several carnivorous species such as the fox, wolf, and coyote serve as definitive hosts that excrete the eggs in their feces. Several small rodent species, such as the vole, lemming, and muskrat, serve as intermediate hosts; they become infected by oral intake of the eggs, and the larvae develop in their liver. In Europe, the red fox (Vulpes vulpes) is the main definitive host. In Germany, the parasite is endemic in many regions; the prevalence in red fox populations ranges from <1% to >60% (6). Dogs and cats can also become infected as definitive hosts, but their infection rates are low (7). Human infections follow accidental ingestion of infective eggs. From 1982 to 2000, a total of 126 alveolar echinococcosis patients with autochthonous infections reportedly received treatment in German clinics (8). In spite of these low case numbers, the disease is an important public health problem because of the high frequency of infections in specific geographic clusters (8), the severity of organ damage in cases of infiltrative parasitic growth or hematogenous spread, and the necessity for costly long-term treatment and follow-up (3). Current hypotheses of possible routes of transmission of the eggs to humans include infection by hands contaminated from the fur of infected animals (foxes, dogs, or cats) or from soil while gardening or during field work; eating contaminated uncooked food from fields or gardens; drinking contaminated spring water; or inhaling dust containing tapeworm eggs, possibly during field work. Only three published case-control studies have assessed risk factors for human infections. In Alaska, dog ownership, living in houses directly built on the tundra, and keeping dogs tethered near the house were identified as important risks (9). In Austria, cat ownership and hunting were associated with alveolar echinococcosis, while farming and dog ownership were not (10).
In Japan, persons with a clinical diagnosis of alveolar echinococcosis or a positive serologic result were more likely than controls to have reared cattle or pigs or to have used well water (11). We investigated possible risk factors for acquiring human alveolar echinococcosis in Germany. We conducted a matched case-control study. Cases were selected from the European Echinococcosis Registry at the University of Ulm (8). An eligible case-patient was defined as a person 1) with positive histopathology of alveolar echinococcosis, or with positive morphologic findings by imaging techniques (ultrasound, computed tomography, magnetic resonance imaging) compatible with alveolar echinococcosis with or without serologic findings for the disease, 2) who was first diagnosed from 1990 to 2000, and 3) who lived in Germany and was still alive. The time frame of diagnosis was restricted to reduce possible recall bias, and only live patients were included to avoid possible information bias introduced by interviewing the relatives of cases. Fifty-three patients were eligible; of these, 40 participated in the study, 11 refused participation, and 2 gave consent after the study was completed. Controls were individually matched to case-patients by age and place of residence. Matched residences were those in which the patients had lived during the 10 years before diagnosis. If the patients had moved during this time, the residence where they had lived for the longest period was chosen. Potential controls were contacted by random-digit telephone dialing. For every residence (5-digit postal code), 100 randomly selected numbers from the electronic version of the telephone directory were provided by ZUMA (Centre for Survey Research and Methodology, Mannheim, Germany). Eligible controls were persons who had lived in the municipality during the same period as the patients for at least 1 year, and who were of the same age (±5 years). Three controls for each of the 40 cases were chosen to detect an odds ratio of 3.0, assuming a frequency of a single exposure of 20% among controls and 43% among patients, with a power of 80% and a two-sided significance level of 5%. Exposure information was obtained with a standardized questionnaire administered by telephone from February to August 2000. Specific behavior and activities during the 10 years preceding the diagnosis of an individual case were assessed. Persons were considered to be dog or cat owners or to have farmed if the duration exceeded 1 year. Deworming of dogs and cats was rated as an effective prophylactic anthelmintic measure only when performed at monthly intervals. For dwellings, gardens, and meadows, close vicinity to possibly contaminated areas was defined as being <100 m from meadows, forests, fields, or rivers. Eligible patients were asked for their written informed consent; controls were asked for their oral informed consent before the interview. All data were processed without personal identifiers. The ethical committee of the University of Ulm approved the study protocol. Statistical data were analyzed with SAS Version 8.2 (SAS Institute Inc., Cary, NC). For variables that might influence the occurrence of alveolar echinococcosis, the crude odds ratio (OR), the 95% confidence interval (CI) and the p value were calculated from simple conditional logistic regression (12). For each risk factor with a p value <0.05, the attributable risk was calculated by multiplying the proportion of exposed among the cases by (OR–1)/OR.
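That attributable-risk formula is a one-liner; the Python sketch below uses rounded numbers from the farming example reported later in the paper, so the output is approximate:

```python
def attributable_risk(prop_exposed_cases, odds_ratio):
    """Attributable risk as defined in the text:
    (proportion of cases exposed) * (OR - 1) / OR."""
    return prop_exposed_cases * (odds_ratio - 1) / odds_ratio

# Farming: roughly 4 of 5 patients farmed, OR = 4.7
print(round(attributable_risk(0.8, 4.7), 2))  # ~0.63, about two thirds of cases
```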
All exposure factors strongly associated with the disease (p values <0.05) and independent of each other (Cramér's V <0.5) were combined in a specific risk score. Of factors with high interdependencies, only one was chosen according to its contextual relevance as compared to the other variables. The score was computed for each participant by adding 1 point for each specific exposure when this factor was present. The distribution of the score points in cases and controls was described by a boxplot. A stratified analysis for farmers and nonfarmers was performed; the regression models, including interaction terms of farming with the 10 other exposure factors of the risk score, showed that none of the factors had clearly different effects between the two groups (results not shown). Forty cases and 120 controls took part in the study. The gender distribution differed between patients and controls. Only 22% of the patients were <50 years of age, 73% were 50–79, and 5% were >79 (range 15–82 years). The educational status was similar among patients and controls. Most study participants lived in small villages (Table), and most lived in southern Germany; only 10% lived in central and northern Germany. Of the patients, 36 had lived in these places for >20 years; 4 had moved during the possible exposure time. Simple conditional logistic regression analyses indicated 22 possible risk factors that were more common among patients than controls (p values <0.05) (Appendix Table 1). Patients were more likely than controls to have owned dogs (OR = 4.2), and several characteristics, such as leaving the dog in the garden unattended (OR = 6.1) or killing game (OR = 18.0), were more common among dogs belonging to patients (Appendix Table 1). Patients were also more likely to have dewormed their dogs at infrequent intervals (OR = 5.6). Six persons in the study population reported hunting; one patient and two controls had hunted foxes, all for long periods (18–45 years). Owning cats that roamed outdoors unattended (OR = 2.3) and cats that ate mice (OR = 2.3) were more common factors for patients than controls. Patients were more likely to be farmers (OR = 4.7); attributable risk calculations suggested that farming could account for almost two thirds of the infections. Specific farming activities were more common among patients than controls (Appendix Table 1). Of all garden-related activities, only growing leaf or root vegetables was more common among patients (OR = 2.5). The location of the garden showed no remarkable influence. Patients were also more likely to enter forests for vocational reasons than were controls (OR = 2.8) and were more likely to have collected wood (OR = 4.7). Eating unwashed or uncooked vegetables, salads, herbs, berries, or mushrooms did not appear to be an important risk factor for alveolar echinococcosis; only eating unwashed strawberries or chewing grass was more common among patients than controls (OR = 2.2 and 4.4, respectively), and attributable risk calculations suggested these exposures could at most account for only a quarter of the overall risk for alveolar echinococcosis (Appendix Table 1). Drinking water from natural sources had no identifiable association with the disease. To describe simply the persons at risk among the study population, a specific risk score was derived from the 22 factors with p values <0.05; we chose only those factors with low interdependencies.
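Mechanically, the score is just a sum of binary exposure indicators, 1 point per factor present. A minimal Python sketch, with hypothetical field names standing in for the 11 component variables whose selection is explained in the next paragraph:

```python
RISK_FACTORS = [
    "dog_killed_game", "lived_in_farmhouse", "dog_roamed_unattended",
    "collected_wood", "was_farmer", "chewed_grass",
    "dwelling_close_to_fields", "entered_forest_for_work",
    "grew_leaf_or_root_vegetables", "cat_roamed_unattended",
    "ate_unwashed_strawberries",
]

def risk_score(respondent):
    """Unweighted exposure score (0-11): 1 point per reported exposure."""
    return sum(1 for factor in RISK_FACTORS if respondent.get(factor))

# A farmer who collected wood and chewed grass scores 3
print(risk_score({"was_farmer": True, "collected_wood": True, "chewed_grass": True}))
```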
Eight of the 22 factors were not strongly associated with any of the other variables (Appendix Table 2). The remaining variables with high interdependencies were selected as follows: living in a farmhouse was chosen instead of haymaking since including a three-level variable would have required weighting this factor; leaving the dog in the garden unattended was favored instead of six other dog-related factors (dog ownership, allowing the dog into the house, playing with the dog, walking the dog without a leash, having a dog that ate mice, infrequent deworming of the dog) since it was a more relevant risk than dog ownership alone and was more reliably observed by the owners than the other factors. Cats left outdoors unattended was chosen as a risk factor instead of cats eating mice for the same reasons; being a farmer was chosen since it best represents the factors with which it was correlated (working in fields, pastures, grain fields). Thus, the score was composed of 11 variables (marked in Appendix Table 1): owning dogs that kill game, living in a farmhouse, owning dogs that roam outdoors unattended, collecting wood, being a farmer, chewing grass, living in a dwelling close to fields, going into forests for vocational reasons, growing leaf or root vegetables, owning cats that roam outdoors unattended, and eating unwashed strawberries. The score (range 0–11 points) was computed for 141 participants (37 patients, 104 controls); 19 participants had missing values in at least 1 exposure factor. The distribution among patients had a median of 6 score points (range 2–10); the distribution among controls had a median of 3 score points (range 0–9) (Figure). Of the patients, 81% had score values >4, but only 39% of the controls had score values >4. This study identified several possible important risk factors for acquiring alveolar echinococcosis. Farming was perhaps the most important risk factor identified; more than three quarters of patients were farmers, and attributable risk calculations suggested that almost two thirds of the cases could be accounted for by farming. The apparent risk with farming supports the view that substantial environmental contamination can be expected in open areas. The parasite's eggs can survive and remain infective for months under favorable conditions (high humidity, low temperatures) (13); thus, soil-related exposures are plausible. The finding that haymaking in meadows adjacent to streams or rivers bears a higher risk than haymaking in other areas is consistent with the observation that more infected foxes are found close to water than in other habitats (14). Although farming was an important risk factor, having a garden was not. An explanation may be that gardens usually cover a small area, and working in a garden requires less time, thus reducing exposure. Growing leaf or root vegetables was the only garden-related risk factor for alveolar echinococcosis. The risk potential of growing specific garden produce may be interpreted in light of the greater amount of care and activity required for annual plants (leaf or root vegetables, salad vegetables), the fact that they are usually grown on larger patches than perennial herbs and strawberries, and the intense soil contact that occurs during harvesting. Pet animals might pose a risk because of their close contact with humans and their contamination of soil around houses and in gardens.
We found an association of dog ownership with acquisition of alveolar echinococcosis, and a lower but still relevant relationship with owning cats that roam freely outdoors or eat mice. The factor with the strongest association with the disease was "dogs that killed game," which is a rare disobedient behavior of individual dogs. Therefore, the attributable risk was lower than for the other variables related to dog ownership. Several other studies have indicated that dogs and cats are important risk factors for alveolar echinococcosis, although findings have been inconsistent (9,10). In China, an extensive inquiry with >2,500 participants, including 86 patients with alveolar echinococcosis, found that the number of dogs owned over time and the degree of dog contact were the most important risk factors (15). Our results attach greater importance to ownership of dogs than of cats, particularly when the dog had activities possibly resulting in increased contact with soil or game. This finding is supported by experimental infection studies in which dogs proved to be susceptible to E. multilocularis eggs to the same high extent as foxes, and high worm loads developed; by contrast, cats had lower susceptibility and a slower maturation of the parasite (6,16). In light of these findings, dogs and cats likely become a risk factor mainly by being infected themselves, in addition to transferring the eggs from fox feces or soil in their fur. Natural infections of pets have rarely been investigated systematically. The largest study on live cats and dogs from disease-endemic areas found coproantigen rates of 0.8% for both species (17). In Austria, a strong association was found between hunting and the risk of acquiring alveolar echinococcosis (OR = 7.8) (10); however, a similar association was not shown in our study. Only 1 of 40 patients reported hunting. In Alaska, where hunting activities are more frequent than in Germany, no association between hunting and alveolar echinococcosis was observed (9). In China, no association was found between fox hunting and the disease (15). Of all activities in the woods, only collecting wood was a likely important risk factor for alveolar echinococcosis, as indicated by the high OR and attributable risk calculations. Possibly, collecting wood posed a risk through contact with contaminated soil when a person picked the wood up from the ground, or the wood itself became contaminated when stacked in places accessible to roaming animals (clearings, forest perimeters, exterior parts of walls, open barns). Chewing grass and eating unwashed strawberries were the only two variables of food consumption associated with alveolar echinococcosis. This risk may be attributable to ingestion of eggs from contaminated plant parts or from soil-contaminated hands. Other garden produce and mushrooms from fields and meadows were only rarely consumed raw and unwashed. Berries from the woods were more frequently consumed raw and unwashed than strawberries. Possible reasons why only strawberries constituted a risk include that forest areas may be less likely to be egg-contaminated or that strawberries are eaten in larger quantities. The two case-control studies of Alaska and Austria found no association of alveolar echinococcosis with picking and eating raw produce from gardens, or berries and mushrooms from fields and forests (9,10). This study had several important limitations.
First, the long latent period for alveolar echinococcosis precluded determining the exact period relevant for an exposure. We restricted the assessment of most variables to the 10 years preceding the diagnosis of a case; we also restricted eligibility to diagnoses since 1990, which had the advantage that diagnoses were probably ascertained "early" after the patients' infection owing to improved diagnostic technology and greater awareness over time. The case-control studies on alveolar echinococcosis published previously included cases irrespective of diagnosis dates. Furthermore, in Austria, the observation period spanned the 20 years preceding diagnosis, and the study included data about deceased patients (10). In Alaska, the time frame encompassed the whole lifetime of the participants (9). Second, many possible risk factors were correlated with each other, and eliminating possible confounding factors was not possible. In our analyses, we omitted multiple logistic regression because of the multicollinearity of the factors. In such a situation, variable selection procedures in multiple logistic regression might lead to the arbitrary removal of important factors from the final model. In our opinion, interpreting such a reduced risk model might be misleading, especially if recommendations for preventive measures were derived from these models alone. Instead, we considered different degrees of exposure between cases and controls. We constructed an unweighted risk score from high-risk variables that were not strongly dependent on each other. Patients were more likely to have been exposed to a greater variety of potential risks during the defined exposure time, which suggests a possible cumulative effect of potentially hazardous activities. Third, the matching of case-patients and controls by location could have selected for similar behavior among them, and thus falsely reduced the observed strength of associations of possible risk factors. We conclude that farmers, compared to persons in other occupations, are at high risk for alveolar echinococcosis in endemic areas in Germany. The disease should be strongly suspected in farmers living in these areas who have symptoms suggestive of it. Since no single farming-related activity alone likely accounts for this risk, general measures to reduce possible exposure during farming (e.g., wearing gloves when handling soil, plants, or wood; washing hands before taking meals after farming) might best reduce this risk. The risk observed with haymaking suggests a need to evaluate a possible role of inhalation; although evidence is lacking, wearing protective masks in very dusty conditions during such work may minimize risk. Our data also suggest that dogs and cats may pose a risk and that an adequate anthelmintic prophylaxis (praziquantel at monthly intervals) may possibly reduce this risk. Finally, our data suggest that cleaning produce from fields or gardens may help to reduce the risk for this disease. Until the early 1980s, human alveolar echinococcosis was known to occur in four countries of western and central Europe: Austria, France, Germany, and Switzerland (5). Since the 1990s, sporadic cases have been found in Belgium, Poland, and Greece (8); a first case report from Slovakia dates from 2000 (18). These cases suggest that the disease is spreading. Since eliminating the parasite is unfeasible, the population in the disease-endemic areas should be advised to adhere to personal cautionary measures to prevent new infections. Dr.
Kern is a research assistant at the Department of Biometry and Medical Documentation at the University of Ulm, Germany. She is responsible for the data collection of human cases of alveolar echinococcosis, data control, and analysis in the European Echinococcosis Registry. We thank the patients and controls who voluntarily participated in the study. The work of the European Echinococcosis Registry is financially supported by the University of Ulm, the Paul-Ehrlich-Gesellschaft e.V., and GlaxoSmithKline GmbH&Co. KG, Munich.
- Ammann RW, Eckert J. Cestodes, Echinococcus. Gastroenterol Clin North Am. 1996;25:655–89.
- Bresson-Hadni S, Vuitton DA, Bartholomot B, Heyd B, Godart D, Meyer JP, A twenty-year history of alveolar echinococcosis: analysis of a series of 117 patients from eastern France. Eur J Gastroenterol Hepatol. 2000;12:327–36.
- Reuter S, Jensen B, Buttenschoen K, Kratzer W, Kern P. Benzimidazoles in the treatment of alveolar echinococcosis: a comparative study and review of the literature. J Antimicrob Chemother. 2000;46:451–6.
- Ammann RW, Hirsbrunner R, Steiger U, Jacquier P, Eckert J. Recurrence rate after discontinuation of long-term mebendazole therapy in alveolar echinococcosis. Am J Trop Med Hyg. 1990;43:506–15.
- Eckert J, Schantz PM, Gasser RB, Torgerson PR, Bessonov AS, Movsessian SO, Geographic distribution and prevalence. In: Eckert J, Gemmell MA, Meslin FX, Pawlowski ZS, editors. WHO/OIE manual on echinococcosis in humans and animals: a public health problem of global concern. Paris: The World Health Organization; 2001. p. 100–42.
- Eckert J, Rausch RL, Gemmell MA, Giraudoux P, Kamiya M, Liu FJ, Epidemiology of Echinococcus multilocularis, Echinococcus vogeli and Echinococcus oligarthrus. In: Eckert J, Gemmell MA, Meslin FX, Pawlowski ZS, editors. WHO/OIE manual on echinococcosis in humans and animals: a public health problem of global concern. Paris: The World Health Organization; 2001. p. 164–82.
- EurEchinoReg European Network for concerted surveillance of Alveolar Echinococcosis. Final report to the European Commission - DGV (SOC 97 20239805F01). Université de Franche-Comté: European Commission, Unité de Recherche; 1999.
- Kern P, Bardonnet K, Renner E, Auer H, Pawlowski Z, Ammann RW, European Echinococcosis Registry: human alveolar echinococcosis, Europe, 1982–2000. Emerg Infect Dis. 2003;9:343–9.
- Stehr-Green JK, Stehr-Green PA, Schantz PM, Wilson JF, Lanier A. Risk factors for infection with Echinococcus multilocularis in Alaska. Am J Trop Med Hyg. 1988;38:380–5.
- Kreidl P, Allersberger F, Judmaier G, Auer H, Aspöck H, Hall AJ. Domestic pets as risk factor for alveolar hydatid disease in Austria. Am J Epidemiol. 1998;147:978–81.
- Yamamoto N, Kishi R, Katakura Y, Miyake H. Risk factors for human alveolar echinococcosis: a case-control study in Hokkaido, Japan. Ann Trop Med Parasitol. 2001;95:689–96.
- Breslow NE, Day NE. Statistical methods in cancer research. Vol 1: The analysis of case-control studies. IARC scientific publications No. 32. Lyon, France: International Agency for Research on Cancer; 1980.
- Veit P, Bilger B, Schad V, Schäfer J, Frank W, Lucius R. Influence of environmental factors on the infectivity of Echinococcus multilocularis eggs. Parasitology. 1995;110:79–86.
- Staubach C, Thulke HH, Tackmann K, Hugh-Jones M, Conraths FJ. Geographic information system-aided analysis of factors associated with the spatial distribution of Echinococcus multilocularis infection of foxes. Am J Trop Med Hyg. 2001;65:943–8.
- Craig PS, Giraudoux P, Shi D, Bartholomot B, Barnish G, Delattre P, An epidemiological and ecological study of human alveolar echinococcosis transmission in south Gansu, China. Acta Trop. 2000;77:167–77.
- Thompson RCA, Deplazes P, Eckert J. Observations on the development of Echinococcus multilocularis in cats. J Parasitol. 2003;89:1086–8.
- Deplazes P, Alther P, Tanner I, Thompson RCA, Eckert J. Echinococcus multilocularis coproantigen detection by enzyme linked immunosorbent assay in fox, dog, and cat populations. J Parasitol. 1999;85:115–21.
- Kincekova J, Auer H, Reiterova K, Dubinsky P, Szilagyiova M, Lauko L, The first case of autochthonous human alveolar echinococcosis in the Slovak Republic (case report). Mitt Osterr Ges Tropenmed Parasitol. 2001;23:33–8.
Suggested citation for this article: Kern K, Ammon A, Kron M, Sinn G, Sander S, Petersen LR, et al. Risk factors for alveolar echinococcosis in humans. Emerg Infect Dis [serial online] 2004 Dec [date cited]. Available from http://wwwnc.cdc.gov/eid/article/10/12/03-0773
Tapeworms are large, flat parasitic worms that live in the intestinal tracts of some animals. They are passed to humans who consume food or water contaminated with tapeworm eggs or larvae. Six types of tapeworms are known to infect humans, usually identified by their source of infestation: beef, pork, dog, rodent, fish, and dwarf (named because it is small). There are often no symptoms as tapeworms grow in humans. Untreated cases can be life-threatening or lead to permanent tissue damage, but tapeworm infections confined to the intestines can easily be treated with medication.

Tapeworms enter the human body with contaminated food or water and remain in the intestines. Tapeworm infection in people usually results from eating undercooked foods from infected animals. Pigs or cattle, for example, become infected when grazing in pastures or drinking contaminated water. The parasites mature in the animal's intestines into pea-shaped larvae and are transmitted to people who eat the pork or beef. Tapeworms can also be passed through hand-to-mouth contact, if you touch a contaminated surface and then touch your mouth.

The following factors increase your chances of developing tapeworm infection. If you have any of these risk factors, tell your doctor:
- Eating raw or undercooked meats.
- Poor hygiene. Not washing often can increase the risk of transferring the tapeworm parasite from hand to mouth.
- Exposure to livestock, particularly in areas where human and animal feces are not properly disposed of.
- Travel to underdeveloped countries with poor sanitary conditions.

If you experience any of the following symptoms, do not assume it is because you have a tapeworm; these symptoms may be caused by other, less serious health conditions. If you experience any one of them, see your physician:
- Hunger or loss of appetite

You may be able to self-diagnose tapeworm infection by checking your stool for signs of tapeworms. More likely, though, if you suspect infection you will see your doctor, who will ask about your symptoms and medical history and perform a physical exam. Tests may include the following:
- A stool sample that will be sent to a laboratory for analysis. Sometimes several samples are needed over a designated period, since tapeworm eggs and segments may be released irregularly in human stool.
- A blood test to indicate the presence of antibodies produced to fight tapeworm infection.
- A CT or MRI scan—imaging tests that use computers (CT) or magnetic waves (MRI) to make pictures of structures inside the body. These scans may be needed for serious cases in which the parasite might have infected areas of the body beside the digestive tract.

Tapeworm infection is treated with oral medication. These medications work by dissolving or attacking the adult tapeworm, but they may not target eggs, so proper hygiene is essential to avoid re-infection; always wash your hands before eating and after going to the bathroom. Your doctor will check stool samples at one and three months after you've finished taking your medication. The success rate is greater than 95% in patients who receive appropriate treatment.

To help reduce your chances of getting a tapeworm infection, take the following steps:
- Wash your hands with soap and hot water before eating or handling food and after using the toilet.
- Freeze meat or fish; freezing for varying lengths of time can kill tapeworm eggs or larvae.
- Thoroughly cook meat at temperatures of at least 150 degrees F, and avoid raw fish.
- When traveling in undeveloped countries, wash and cook all fruits and vegetables with safe water before eating.
- Get prompt treatment for pets infected with tapeworm.

Please be aware that this information is provided to supplement the care provided by your physician. It is neither intended nor implied to be a substitute for professional medical advice. CALL YOUR HEALTHCARE PROVIDER IMMEDIATELY IF YOU THINK YOU MAY HAVE A MEDICAL EMERGENCY. Always seek the advice of your physician or other qualified health provider prior to starting any new treatment or with any questions you may have regarding a medical condition.
Meyers Primary Care Institute; Department of Medicine, Division of Rheumatology
Medical Subject Headings: Gout; Patient Education
Health Services Research | Musculoskeletal Diseases | Rheumatology

BACKGROUND: For patients to effectively manage gout, they need to be aware of the impact of diet, alcohol use, and medications on their condition. We sought to examine patients' knowledge and beliefs concerning gout and its treatment in order to identify barriers to optimal patient self-management.

METHODS: We identified patients (≥18 years of age) cared for in the setting of a multispecialty group practice with documentation of at least one health care encounter associated with a gout diagnosis during the period 2008-2009 (n = 1346). Patients were sent a questionnaire assessing knowledge with regard to gout, beliefs about prescription medications used to treat gout, and trust in the physician. Administrative electronic health records were used to identify prescription drug use and health care utilization.

RESULTS: Two hundred and forty patients returned surveys out of the 500 contacted for participation. Most were male (80%), white (94%), and aged 65 and older (66%). Only 14 (6%) patients were treated by a rheumatologist. Only a minority of patients were aware of common foods known to trigger gout (e.g., seafood [23%], beef [22%], pork [7%], and beer [43%]). Of those receiving a urate-lowering medication, only 12% were aware of the short-term risks of worsening gout with initiation. These deficits were more common in those with active as compared to inactive gout.

CONCLUSION: Knowledge deficits about dietary triggers and chronic medications were common, but worse in those with active gout. More attention is needed to patient education on gout and self-management training.
A Window Into Alzheimer's Disease

Researchers at the University College London's (UCL) Institute of Ophthalmology have developed a technique that makes it possible to directly and noninvasively monitor the death of single retinal nerve cells in the eyes of live animals in real time. The potential of this technique reaches far beyond that of retinal health—whether it provides a diagnostic "canary in the coal mine" by signaling early death of nerve cells in the brain or offers a tool for evaluating the efficacy of neuroprotectants for Alzheimer's disease.

The UCL research team previously added to the growing body of data linking a wide range of neurodegenerative diseases to common triggers and a convergence of final pathways by providing evidence that the protein beta amyloid—a major component of Alzheimer's brain plaques—is responsible for harm to the optic nerve. These and other connections prompted the team, led by M. Francesca Cordeiro, MD, PhD, and Stephen E. Moss, PhD, to broaden the focus of their current study beyond glaucoma.

"An increasing number of studies have come out showing that the retina is affected by Alzheimer's disease," said Dr. Cordeiro, professor of glaucoma and neurodegeneration studies at University College London and an attending physician at the Western Eye Hospital in London. "But these studies all were confined to postmortem eyes. Ours was the first showing retinal activity in vivo."

Reporting in Cell Death & Disease, the researchers delineated a series of steps to observe retinal cells and monitor the stage and type of cell death.1 This involved the use of fluorescent cell-death markers that bind to specific cells on the retina of transgenic mice with aspects of Alzheimer's disease. To track individual live cells over hours, days, weeks and months, the researchers used a customized confocal scanning laser ophthalmoscope—not far afield from those used clinically—to detect emission wavelengths from three fluorescent labels: annexin V positive only (to visualize early apoptosis), propidium iodide only (to visualize necrosis) and both annexin V and PI positive (to visualize late-phase apoptosis).

"We combined the two markers [annexin V and PI positive] to give us an idea of the severity or activity of cell death and to differentiate between the different phases—whether complete death or an early stage with the capacity for reversal," said Dr. Cordeiro. One important finding was that necrosis plays a key role in neurodegeneration, she said, meaning apoptosis is not an exclusive player in this process, as previously thought.

By making it possible to stage neurodegeneration in real time with a simple eye test, these studies open the possibility for Alzheimer's treatment during the narrow window of early apoptosis. They also help make the case for broadening the role of the general ophthalmologist, said Dr. Cordeiro. "Eventually, I think this will knock on the door of the neurologist a bit," she said.

Dr. Cordeiro and collaborators hope to extend these techniques to clinical trials in glaucoma patients later this year.

1 Cordeiro, M. F. et al. Cell Death Dis. Published online Jan. 14, 2010.
___________________________
Dr. Cordeiro is a named inventor on a patent application covering the technology disclosed in the Cell Death & Disease report.
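The three-label readout described above lends itself to a simple decision rule. The sketch below is a hedged illustration only: it classifies cells from per-cell fluorescence intensities following the scheme in the article (annexin V only for early apoptosis, propidium iodide only for necrosis, both for late-phase apoptosis), but the threshold value and function names are hypothetical and not taken from the study.

```python
# Hypothetical sketch: classify retinal cells by cell-death stage from
# per-cell fluorescence of two markers, following the three-way scheme
# described in the article. The threshold is illustrative, not from the study.

def classify_cell(annexin_v: float, propidium_iodide: float,
                  threshold: float = 100.0) -> str:
    """Return the cell-death stage implied by marker positivity."""
    annexin_pos = annexin_v >= threshold
    pi_pos = propidium_iodide >= threshold
    if annexin_pos and pi_pos:
        return "late-phase apoptosis"   # both markers positive
    if annexin_pos:
        return "early apoptosis"        # annexin V positive only
    if pi_pos:
        return "necrosis"               # propidium iodide positive only
    return "healthy"                    # neither marker detected

# Example: tally stages across a field of imaged cells.
cells = [(250.0, 30.0), (180.0, 210.0), (20.0, 300.0), (15.0, 12.0)]
counts = {}
for av, pi in cells:
    stage = classify_cell(av, pi)
    counts[stage] = counts.get(stage, 0) + 1
print(counts)
```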
Donor Cell Density Can't Predict PK Success

Cornea transplant success after penetrating keratoplasty (PK) cannot be predicted by donor endothelial cell density, but it can be predicted by cell density six months postoperatively,1 according to the Cornea Donor Study Investigator Group, which previously found that donor age does not affect graft success.

In the prospective cohort study, both the grafts that failed and those that remained clear at five years started out with similar median cell counts, 2,670 cells/mm2 and 2,687 cells/mm2, respectively. This finding should ease surgeons' concerns about obtaining corneas with the highest number of cells, said lead investigator Jonathan H. Lass, MD, professor and chairman of ophthalmology and visual sciences at Case Western Reserve University and director, University Hospitals Eye Institute in Cleveland, Ohio. "Minimum count at most eye banks is about 2,000 cells/mm2 or above," he said. "What this study is saying is, as long as you're within the minimum, the baseline count didn't make a difference."

Flash-forward to six months after surgery: Now cell count matters. The six-month endothelial cell density, and the change from baseline, were predictive of graft failure at five years of follow-up. By six months, the median endothelial cell density in the failed group was 1,774 cells/mm2. In the cases that did not fail, the median endothelial cell density was 2,514 cells/mm2.

These findings have clear clinical implications. "If you have a low count at six months, watch those patients because they have a higher risk for failure," Dr. Lass said. Monitor them more frequently, and if there's a noticeable increase in corneal thickness between visits, get a cell count, he said.

Interestingly, the study found that a graft can remain clear with an endothelial cell density below 500 cells/mm2. Success appeared related to the trajectory of cell loss, Dr. Lass said. "If your cornea has a stable population of cells that are functioning at a relatively low cell count, that graft can do well."

Now the NIH-funded study group is looking at factors that influence cell loss, such as length of time the cornea was in storage prior to surgery as well as recipient diagnosis. "Our main goal is to try to change practice patterns in how surgeons approach use of donor corneas," Dr. Lass said.

1 Lass, J. H. et al. Arch Ophthalmol

HISTORY: Egyptian Cosmetics

Makeup may have been a key to eye health in ancient Egypt. Electrochemical data obtained from 52 makeup samples taken from ancient containers housed at the Louvre museum revealed that two lead chlorides were used in the manufacture of eye makeups and lotions. Considering our current understanding of lead toxicity, these findings are somewhat surprising. However, it is presumed that the application of these compounds led to the production of nitrogen monoxide molecules—a catalyst in the immune response, which could have offered the Egyptians some protection from bacterial eye diseases and inflammations common to the areas surrounding the marshy Nile River.

Contact Lens News

Antimicrobial Contact Lenses in the Pipeline

As early as this month, an Australian research group could begin a 250-patient trial of one of the leading passive strategies for protecting contact lens wearers from bacteria on their silicone hydrogel lenses: coating the lenses with selenium.
Unlike some other antibacterial add-ons being considered, the selenium does its work without being released into the eye, said microbiologist Mark Willcox, PhD, professor of optometry and vision science at the University of New South Wales in Sydney, Australia. Instead, the selenium is incorporated into an organic compound and covalently bonded to the lens. There, it causes the localized generation of superoxide free radicals that injure any bacteria present, preventing them from growing and adhering to the lens in a persistent biofilm.1

"The selenium acts on a local, microscopic scale in a way similar to that of hydrogen peroxide," said Dr. Willcox, chief scientific officer at the Institute for Eye Research, where the selenium-coated contact lenses will be tested. "The smaller the cell, the nearer it can get to the surface of the lens and the more it is affected by the free radicals. Human cells of the cornea and conjunctiva aren't affected."

So far, the research findings on selenium coating of contact lenses include:
- In the lab, silicone hydrogel lenses that were covalently coated with selenium grew 1,000 times less Staphylococcus aureus than uncoated versions did. They also resisted colonization by Pseudomonas aeruginosa.
- After 48 days of continuous wear by rabbits, eyes that wore the test lens showed no significant differences compared with control eyes in clinical signs, epithelial and corneal total thickness, corneal morphology and corneal histology.
- In a controlled, randomized, contralateral human trial, 20 subjects wore selenium-coated and uncoated silicone hydrogel contact lenses for 24 hours. The researchers found no differences between the eyes in the patients' subjective evaluations, bulbar and limbal redness, and corneal and conjunctival staining.2
- Despite 24 hours of continuous wear, the coated lenses retained their antibacterial properties.

Dr. Willcox said that lab tests indicate that selenium might also make silicone hydrogel contact lenses resist colonization by the fungus Fusarium, one of the organisms behind the 2004 to 2007 outbreaks of microbial keratitis among contact lens wearers. It has not been tested yet against Acanthamoeba, another common cause of outbreaks.

Researchers at the Sydney institute and elsewhere around the world have also been exploring other molecules that might be added to contact lenses to inhibit microbes. These include silver; polymeric quaternary ammonium compounds, to act as disinfectants; polymeric pyridinium compounds, to break bacterial cell walls; quorum-sensing compounds that interfere with bacterial signaling systems; nitric-oxide releasing polymers, which produce free radicals; and natural or synthetic peptides that cause microbial cell walls to leak.

Although the goal is to prevent infectious keratitis, the initial trial of selenium-coated contact lenses will not give a definitive answer on this issue. The study's size will limit researchers to tracking incidence of adverse events associated with infection risk, such as conjunctival inflammation and infiltrative keratitis. Nonetheless, as the first large clinical trial of antibacterial contact lenses, the trial will be an important milestone in an ongoing effort.

Institute for Eye Research scientists began hunting for ways to make antimicrobial contact lenses in 1999, Dr. Willcox noted. "It's taken us a long time to understand what causes these adverse events and what you can do to prevent them. So this is a big step for us," he said.

1 Mathews, S. M. et al.
Cornea.
2 Ozkan, J. et al. Poster #5632/D941, Efficacy and clinical performance of selenium antibacterial silicone hydrogel contact lenses. Presented at ARVO, Thursday, May 7, 2009. Abstract available online at www.arvo.org. Choose "Meetings & Abstracts," then "Search 2009 Annual Meeting Abstracts." Search under "Program" for 5632.
___________________________
Dr. Willcox has consulted for and received research and travel grants from several ophthalmic companies, including Abbott Medical Optics, Alcon, Allergan and Ciba Vision.

EyeNet thanks Christopher J. Rapuano, MD, for his help with this issue's News in Review.
Urodynamic testing is typically recommended for people with symptoms such as:
- frequent urination
- sudden, strong urges to urinate but nothing comes out
- problems starting a urine stream
- painful urination
- problems emptying the bladder completely
- recurrent urinary tract infections

Urodynamic tests are usually performed in urology, gynecology, OB/GYN, internal medicine, and primary care offices. Urodynamics provides the physician with the information necessary to diagnose the cause and nature of a patient's incontinence, thus giving the best treatment options available. Urodynamics is typically conducted by urologists, urogynecologists, or specialist urology nurses.

Purpose of testing

The tests are most often arranged for men with enlarged prostate glands and for women with incontinence that has either failed conservative treatment or requires surgery. Probably the most important group in whom these tests are performed are those with a neuropathy such as spinal injury. In some of these patients (depending on the level of the lesion), the micturition reflex can be essentially out of control, and the detrusor pressures generated can be life threatening.

Symptoms reported by the patient are often an unreliable guide to the underlying dysfunction of the lower urinary tract. The purpose of urodynamics is to provide objective confirmation of the pathology that a patient's symptoms would suggest. For example, a patient complaining of urinary urgency (or rushing to the toilet) with increased frequency of urination may have overactive bladder syndrome. The cause of this might be detrusor overactivity, in which the bladder muscle (the detrusor) contracts unexpectedly during bladder filling. Urodynamics can be used to confirm the presence of detrusor overactivity, which may help guide treatment. An overactive detrusor can be associated with urge incontinence.

These tests may be as simple as urinating behind a curtain while a doctor or nurse listens, but are usually more extensive in Western medicine. A typical urodynamic test takes about 30 minutes to perform. It involves the use of a small catheter used to fill the bladder and record measurements. What is done depends on the presenting problem, but some of the common tests conducted are:
- Post-void residual volume: Most tests begin with the insertion of a urinary catheter/transducer following complete bladder emptying by the patient. The urine volume is measured (this shows how efficiently the bladder empties). High volumes (>180 ml) may be associated with urinary tract infections. A volume of greater than 50 ml in children has been described as constituting post-void residual urine. High levels can be associated with overflow incontinence.
- The urine is often sent for microscopy and culture to check for infection.
- Uroflowmetry: Free uroflowmetry measures how fast the patient can empty his/her bladder. Pressure uroflowmetry again measures the rate of voiding, but with simultaneous assessment of bladder and rectal pressures. It helps demonstrate the reasons for difficulty in voiding, for example bladder muscle weakness or obstruction of the bladder outflow. (A minimal computational sketch of these flow metrics appears at the end of this article.)
- Multichannel cystometry: measures the pressure in the rectum and in the bladder, using two pressure catheters, to deduce the presence of contractions of the bladder wall during bladder filling or during other provocative manoeuvres. The strength of the urethra can also be tested during this phase, using a cough or Valsalva manoeuvre, to confirm genuine stress incontinence.
- Urethral pressure profilometry: measures the strength of sphincter contraction.
- Assessing the "tightness" along the length of the urethra.
- Fluoroscopy (moving video x-rays) of the bladder and bladder neck during voiding.

Notes on standardization

Voiding position influences the results in males with benign prostatic hyperplasia: in the sitting position, the post-void residual (PVR), maximum flow rate (Qmax), and voiding time (TQ) were shown to improve. Apart from the implications for managing this condition, this shows that urodynamic measurements should be performed in a standardized position; otherwise false-positive or false-negative findings may result.
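Uroflowmetry readouts such as Qmax can be derived from a recorded flow trace with a short calculation. The following is a minimal sketch, assuming a flow-rate signal sampled at fixed intervals; the smoothing window, function names, and example numbers are illustrative and not part of any clinical standard.

```python
# Minimal sketch: derive basic uroflowmetry metrics from a flow trace.
# Assumes flow_ml_per_s is sampled at a fixed rate; all values illustrative.

def uroflow_metrics(flow_ml_per_s, sample_interval_s=0.1, smooth_n=5):
    """Return Qmax, voided volume, and voiding time from a flow trace."""
    # Moving-average smoothing so a single noisy spike is not read as Qmax.
    smoothed = [
        sum(flow_ml_per_s[max(0, i - smooth_n + 1):i + 1])
        / len(flow_ml_per_s[max(0, i - smooth_n + 1):i + 1])
        for i in range(len(flow_ml_per_s))
    ]
    qmax = max(smoothed)                                # peak flow rate (ml/s)
    volume = sum(flow_ml_per_s) * sample_interval_s     # area under the curve (ml)
    voiding_time = sum(1 for f in flow_ml_per_s if f > 0) * sample_interval_s
    return qmax, volume, voiding_time

# Example with a short synthetic trace.
trace = [0, 2, 8, 15, 22, 25, 24, 18, 10, 4, 1, 0]
qmax, volume, t_void = uroflow_metrics(trace)
print(f"Qmax={qmax:.1f} ml/s, volume={volume:.1f} ml, TQ={t_void:.1f} s")
```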
Cancer is a varied group of diseases that have in common the uncontrolled proliferation and spread of abnormal cells, which may invade other tissues of the body. The different types of cancer vary not only in their site or tissue of origin, but also in other factors including cell morphology, rate of growth, and method of spread. Through July 26, 2006, the Indiana State Cancer Registry had recorded 152,823 cases of cancer that were newly diagnosed from January 1, 1999 through December 31, 2003. (Note: basal and squamous cell carcinomas of skin are not collected.) The incidence tables include 144,049 cancers, including 2,639 cases of in situ bladder cancer. All other in situ cancers (n = 8,774) are excluded from the incidence tables. The exclusion of in situ cases allows comparisons with national data and is based on the major differences in prognosis and treatment between in situ and invasive cancers. Interpreting the pathologist's description of invasion for urinary bladder tumors has proven difficult for coders and, since patients generally receive the same treatment for in situ and microinvasive tumors, in situ bladder tumors have traditionally been included in incidence rates. During the 5-year period (1999 - 2003) there were 63,936 cancer deaths of Indiana residents.

Cancer incidence is a more accurate measure of the occurrence of cancer than cancer mortality because of the increasing survival and even cure for many cancer sites. Cancer incidence also is of greater value in the investigation of external risk factors for cancer, because the date of diagnosis occurs closer in time to the exposure which may have initiated or promoted the development of cancer. Cancer mortality is of greater importance in identifying potential disparities in screening and treatment.

Cancer incidence increases with age, though cancer can occur in infants as well as the elderly. Section 3 presents age-specific incidence rates (Tables 3.1-5 to 3.6-5) and age-specific mortality rates (Tables 3.1-6 to 3.6-6) for all invasive cancers (all sites) and for female breast, cervical, colon, lung, and prostate cancers. During 1999 - 2003, there were 715 new cases of invasive cancer in children less than 10 and an additional 752 cases in those ages 10 - 19. Only 1.0% of cancers occurred in children, whereas 57.6% occurred in those ages 65 and over. Those 70-74 years of age had the largest number of new diagnoses (21,223). The lowest incidence rate* for invasive cancer was in children ages 5-9 (11.0 per 100,000). The incidence rates increased steadily with age to a high of 2,414.7 for those 80-84 and then declined slightly for the population 85 and older.

Cancer mortality also increases with age. During the 5-year period from 1999 - 2003, there were 107 deaths from cancer in children under age 10, and 121 cancer deaths in children ages 10 - 19. Only 0.4% of the total cancer deaths were of children (0 - 19 years of age). In contrast, 70.1% of the cancer deaths were of adults age 65 or older. Adults ages 75-79 had the largest number of deaths (10,432) for the 5-year period. The lowest cancer mortality rate was for 10-14 year-old children (2.1 per 100,000). Mortality rates increased with age to 1,783.2 per 100,000 for those ages 85 and above.

Nationwide, African-Americans have a higher risk of cancer than does the white population. For 1999 - 2003, the all-sites cancer incidence rate for Indiana's black population was higher than the rate for the white population (508.2 vs.
468.9), a difference that is statistically significant. The lung cancer incidence rate was significantly higher in African-Americans than in whites (89.7 vs. 79.8), and the incidence rate for prostate cancer was 61.0% higher for black males in Indiana than for white males (214.8 vs. 133.5). In addition, blacks are more likely to be diagnosed at a later stage than are whites, leading to a prostate cancer mortality rate for blacks (67.6) that is 140.6% higher than the rate for whites (28.1).

In general, men are more likely to be diagnosed with cancer than are women. The Indiana 1999 - 2003 cancer incidence rate was 31.9% higher for men than for women (556.0 vs. 421.7). Women have a longer lifespan than men; hence there are more elderly women than men in the population. Since cancer incidence increases with age, more women than men may be diagnosed with cancer. This was the case in Indiana for 1995 - 1999 and 1996 - 2000. However, in Indiana male incidence rates have increased to a greater extent than have female incidence rates, so that during 1997 - 2001 and 1999 - 2003 there were slightly more men than women diagnosed with cancer (72,869 vs. 71,175). There are a few cancers that occur more frequently in women. For example, the rate of thyroid cancer in Indiana females is 2.6 times that of males (9.8 vs. 3.8).

The disparity between the 1999 - 2003 Indiana male cancer mortality rate (266.0) and the female mortality rate (175.1) was 51.9%. Despite the larger population of elderly women, more men died of cancer (33,180) than did women (30,753). A small portion of this disparity might be explained by men being slightly more likely to be diagnosed at a later stage of cancer than are women (Section 3.1, Table 3.1-7). In addition, men are more likely to be current or former smokers than are women (56.2% of men vs. 43.9% of women, Indiana Health Behavior Risk Factor Report 2005). The increased risk of cancer for smokers is important in the higher cancer incidence and mortality rates for men.

For 1999 - 2003, the most frequently diagnosed cancer in Indiana was lung cancer. The next most common cancer was breast cancer, followed by prostate, colon (excluding rectum), and urinary bladder cancers. Section 1 (Tables 1-3 to 1-6) lists the most commonly diagnosed cancers by sex. For men, the most frequently diagnosed cancer was prostate, followed by lung, colon, bladder, non-Hodgkin lymphoma, and rectum. For Indiana women, breast cancer was most commonly diagnosed, followed by lung, colon, uterine body, non-Hodgkin lymphoma, and ovarian cancers.

For both men and women, mortality from lung cancer (19,676 deaths, 11,523 for men and 8,151 for women, in Indiana during 1999 - 2003) was greater than that from either prostate (3,329 deaths) or breast cancer (4,591 deaths). Colon cancer (5,669 deaths) was the second leading cause of cancer death for the total Indiana population.

The 1999 - 2003 lung cancer incidence rate for Indiana (80.4) was significantly higher than the national rate (60.3). See Section 1. In particular, the Indiana male lung cancer incidence rate (108.2) was 41.6% higher than that of the US (76.4). The lung cancer mortality rates for Indiana were also higher than the national rates, though not by as high a percentage as the incidence rates. The prostate cancer incidence rate for Indiana for 1999 - 2003 (140.3) was significantly lower than the US rate (173.3), whereas the prostate cancer mortality rate for Indiana (30.2) was higher than that of the US (29.1).
These differences suggest that there is less screening for prostate cancer in Indiana than in the rest of the US. For prostate cancer, however, it is not clear that more screening is better. A similar, though less dramatic, pattern is seen for female breast cancer, where mortality can be reduced by early detection through screening.

Illness and death from cancer are increasingly preventable by decreasing modifiable risk factors, increasing early detection through improved screening, and developing more effective treatments. Current estimates indicate that at least 75% and perhaps more than 90% of cancers are due to factors external to the patient, i.e., factors other than the patient's heredity, endogenous hormones, and immunologic status. More than two-thirds of these external or "environmental" factors are associated with personal lifestyle and behavior. Tobacco use accounts for approximately 30% of US cancer deaths. Numerous scientific studies have shown that involuntary exposure of nonsmokers to environmental tobacco smoke increases their risk of lung cancer and other illnesses. Another estimated 30% of cancer deaths can be attributed to dietary factors, particularly those associated with obesity, such as high dietary saturated fat and low consumption of fruits and vegetables. An estimated 5% of cancer deaths are associated with low physical activity, and an additional 3% - 7% with personal choices in the areas of sexual contacts and childbearing.

Section 3 compares Indiana State incidence and mortality rates for all sites and for the most common cancers with the rates for ten groups of counties (regions, see maps). Section 5 presents information on both incidence and mortality rates and counts by county. In Indiana, with its numerous small counties, there are often many counties with only a few or no cases of a specific cancer type (especially in the case of less common cancers). To avoid compromising the confidentiality of individuals in those counties, the actual number of cases for a county is not reported if it is less than 5, and in cases where the number for any one sex is less than 5, the number for the opposite sex is not reported either. Rates based on fewer than 20 total cases are very unstable and can be misleading; hence, no comparison with the state should be made when there are fewer than 20 cases in a county, and those rates are flagged in Section 5. Aggregating several years of cancer data, in many cases, provides meaningful information on rates in small counties and/or for less common cancers. Appendix B (Technical Notes) gives information on rates, confidence intervals, and how to interpret them.

* In general, incidence rates are the number of new cases per 100,000 persons in the population per year and, except for age-specific rates, are age-adjusted to the 2000 US standard population. See Appendix B. Age-specific incidence rates are the number of new cases occurring in persons of a specified age per 100,000 persons of that age in the population.
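Because the rates above are age-adjusted by direct standardization to the 2000 US standard population, a small sketch may help make the calculation concrete. The age groups, counts, and weights below are invented for illustration; only the formula, a weighted sum of age-specific rates, reflects the method described in the footnote.

```python
# Minimal sketch of direct age standardization, as used for the rates in
# this report. All numbers below are made up for illustration; real
# calculations use the full set of 2000 US standard population weights.

def age_specific_rate(cases, population):
    """New cases per 100,000 persons of a given age group per year."""
    return cases / population * 100_000

def age_adjusted_rate(groups):
    """Weighted sum of age-specific rates (weights must sum to 1)."""
    return sum(w * age_specific_rate(c, p) for c, p, w in groups)

# (cases, population, standard-population weight) per age group -- hypothetical.
groups = [
    (40,  500_000, 0.60),   # younger ages: low rate, high weight
    (300, 200_000, 0.30),   # middle ages
    (800, 100_000, 0.10),   # oldest ages: high rate, low weight
]
print(f"Age-adjusted rate: {age_adjusted_rate(groups):.1f} per 100,000")
```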
Int. J. Mol. Sci. 2013, 14(3), 4655-4669; doi:10.3390/ijms14034655
Published: 26 February 2013

Abstract: Long non-coding RNAs (lncRNAs) are pervasively transcribed in the genome and are emerging as new players in tumorigenesis due to their various functions in transcriptional, posttranscriptional and epigenetic mechanisms of gene regulation. LncRNAs are deregulated in a number of cancers, demonstrating both oncogenic and tumor suppressive roles, thus suggesting that their aberrant expression may be a substantial contributor to cancer development. In this review, we will summarize their emerging role in human cancer and discuss their perspectives in diagnostics as potential biomarkers.

1. Introduction

The central dogma of molecular biology postulates that genetic information is stored in DNA and that proteins are the main effector molecules of cellular function, with RNA in the role of an intermediary between DNA sequence and encoded protein. The findings of the human genome project thus came as a surprise, since only 1.5% of the human genome encodes protein-coding genes [1–5]. The development of new techniques revolutionized the molecular world with evidence that at least 90% of the human genome is actively transcribed [6,7]. The human transcriptome has shown more complexity than previously assumed, with protein-coding transcripts being a minority compared to a more complex group of non-coding RNAs (ncRNAs), such as microRNAs (miRNAs), long non-coding RNAs (lncRNAs), small nucleolar RNAs (snoRNAs), small interfering RNAs (siRNAs), small nuclear RNAs (snRNAs), and piwi-interacting RNAs (piRNAs) [8–15]. Although initially thought to be transcriptional noise, ncRNAs may play a crucial role in cellular development, physiology and pathology.

Depending on their size, ncRNAs are divided into two major groups. Transcripts shorter than 200 nucleotides are referred to as small ncRNAs, which include miRNAs, siRNAs, piRNAs, etc. The other group is composed of lncRNAs, transcripts that lack a significant open reading frame and have lengths from 200 nt up to 100 kilobases. A lncRNA can be placed into one or more of five broad categories: (1) sense, or (2) antisense, when overlapping one or more exons of another transcript on the same, or opposite, strand, respectively; (3) bidirectional, when the sequence is located on the opposite strand from a neighboring coding transcript whose transcription is initiated less than 1000 base pairs away; (4) intronic, when it is derived wholly from within an intron of a second transcript; or (5) intergenic, when it lies within the genomic interval between two genes (Figure 1). Some lncRNAs are transcribed by RNA polymerase III, while the majority are transcribed by RNA polymerase II, spliced and polyadenylated. Most lncRNAs are located in the cytoplasm, although some are found in both cytoplasm and nucleus.

2. Long Non-Coding RNA Functions

LncRNAs are involved in almost every step of the life cycle of genes and regulate diverse functions. Several lncRNAs can regulate gene expression at various levels, including chromatin modification, transcription, and posttranscriptional processing. So far, their role has been studied most extensively in epigenetic regulation, such as imprinting. Diploid organisms carry two alleles of each of the parents' autosomal genes.
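To make the five positional categories concrete, here is a minimal sketch that assigns a category to a lncRNA from genomic intervals. The interval representation, the 1,000-bp bidirectional window taken from the definition above, and the function names are illustrative simplifications, not an established annotation tool.

```python
# Illustrative sketch: classify a lncRNA into the five positional categories
# described above, given its coordinates and those of a nearby coding gene.
# Interval logic is simplified; real annotation pipelines are more involved.

from dataclasses import dataclass

@dataclass
class Transcript:
    start: int
    end: int
    strand: str                 # "+" or "-"
    exons: list                 # list of (start, end) tuples

def overlaps(a_start, a_end, b_start, b_end):
    return a_start < b_end and b_start < a_end

def classify_lncrna(lnc, coding):
    """Return the positional category of lnc relative to one coding gene."""
    exon_overlap = any(
        overlaps(lnc.start, lnc.end, es, ee) for es, ee in coding.exons
    )
    if exon_overlap:
        return "sense" if lnc.strand == coding.strand else "antisense"
    if overlaps(lnc.start, lnc.end, coding.start, coding.end):
        return "intronic"       # inside the gene body but not its exons
    # Transcription start sites within 1,000 bp, on opposite strands.
    lnc_tss = lnc.start if lnc.strand == "+" else lnc.end
    gene_tss = coding.start if coding.strand == "+" else coding.end
    if lnc.strand != coding.strand and abs(lnc_tss - gene_tss) < 1000:
        return "bidirectional"
    return "intergenic"

gene = Transcript(10_000, 20_000, "+", exons=[(10_000, 10_500), (19_000, 20_000)])
lnc = Transcript(11_000, 12_000, "-", exons=[(11_000, 12_000)])
print(classify_lncrna(lnc, gene))   # -> "intronic" (wholly within an intron)
```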
In most cases, both alleles are expressed equally, except for a subset of genes that show imprinting, in which expression is restricted by an epigenetic mechanism to either the maternal or the paternal allele. X-inactivation (XCI) is a process that equalizes gene expression between males and females by inactivating one X chromosome in female cells. Some lncRNAs participate in global cellular behavior by controlling apoptosis, cell death and cell growth [15,20]. LncRNAs can also mediate epigenetic modification by recruiting chromatin remodeling complexes to specific chromatin loci: for example, HOTAIR recruits Polycomb repressive complex 2 (PRC2) and/or lysine-specific demethylase 1 (LSD1), CCND1 recruits the protein termed translocated in liposarcoma (TLS), and ANRIL recruits Polycomb repressive complexes 1 and 2 (PRC1 and PRC2) [5,21–25]. The mode of action of some lncRNAs is interaction with intracellular steroid receptors. Other lncRNAs function by regulating transcription through a variety of mechanisms that include interacting with RNA-binding proteins, acting as coactivators of transcription factors, or repressing a major promoter of their target gene. In addition to chromatin modification and transcriptional regulation, lncRNAs can regulate gene expression at the posttranscriptional level.

3. Oncogenic lncRNA

SRA—Steroid Receptor RNA Activator is a coactivator for steroid receptors and acts as an ncRNA found in the nucleus and cytoplasm. SRA regulates gene expression mediated by steroid receptors by complexing with proteins that also contain steroid receptor coactivator 1 (SRC-1). The SRA1 gene can also encode a protein that acts as a coactivator and corepressor. SRA levels have been found to be upregulated in breast tumors, where it is assumed that increased SRA levels change the steroid receptors' actions, contributing to breast tumorigenesis. While the expression of SRA in normal tissues is low, it is highly upregulated in various tumors of the human breast, uterus and ovary. This evidence supports SRA as a potential biomarker of steroid-dependent tumors.

HOTAIR—HOX Antisense Intergenic RNA, with a length of 2.2 kb, was found in the HOXC locus and is transcribed in an antisense manner. It is the first lncRNA discovered to be involved in tumorigenesis. In breast cancer, both primary and metastatic, its expression is upregulated; in the latter case, an up to 2,000-fold increase has been shown. High expression of HOTAIR in primary breast cancer also correlates with metastasis and poor survival. The level of HOTAIR expression is higher in patients with lymph node metastasis in hepatocellular cancer. Polycomb group proteins mediate repression of transcription of thousands of genes that control differentiation pathways during development, and have roles in stem cell pluripotency and human cancer [23,30–34]. The target of PRC2 is the HOXD locus on chromosome 2, where PRC2 in association with HOTAIR causes the transcriptional silencing of several metastasis suppressor genes, resulting in breast epithelial cells adopting an expression profile resembling that of embryonic fibroblasts. Altering the level of HOTAIR results in enhanced PRC2 repressive activity. HOTAIR acts as a molecular scaffold for two known chromatin modification complexes: the 5′ region of the lncRNA binds to the PRC2 complex, responsible for H3K27 methylation, and the 3′ region binds to LSD1, which mediates enzymatic demethylation of H3K4 [24,30,35].
This suggests a possible function of HOTAIR as a scaffold that binds selected histone modification enzymes and thereby directs histone modification at target genes. Although the precise mechanism is still not known, it is clear that HOTAIR remodels chromatin to promote cancer invasiveness. HOTAIR, as an epigenetic regulator of gene expression, is deregulated in different cancers [23,36–38]. In hepatocellular carcinoma (HCC), including HCC patients undergoing liver transplantation, the levels of HOTAIR are elevated compared with normal liver tissue. Expression levels of HOTAIR can also be used as an independent prognostic marker for HCC recurrence and lower survival rate. HOTAIR can be a potential biomarker for the existence of lymph node metastasis in HCC.

ANRIL—Antisense ncRNA in the INK4 locus
Many transcripts coding for proteins have antisense partners, whose perturbation can alter the expression of the sense transcripts. Some of these genes are tumor suppressors, which can be epigenetically silenced by antisense ncRNAs. ANRIL activates two Polycomb repressive complexes, PRC1 and PRC2 [21,25], resulting in chromatin reorganization that silences the INK4b-ARF-INK4a locus, which encodes the tumor suppressors p15INK4b, p14ARF and p16INK4a, active in cell cycle inhibition, senescence and stress-induced apoptosis. In prostate cancer, overexpression of ANRIL has been shown to silence INK4b-ARF-INK4a and p15/CDKN2B through heterochromatin reformation [25,41]. The repression is mediated by direct binding to chromobox 7 (CBX7) and SUZ12, members of PRC1 and PRC2, respectively [21,25].

MALAT1—Metastasis-Associated Lung Adenocarcinoma Transcript 1
This lncRNA was first associated with high metastatic potential and poor patient prognosis during a comparative screen of non-small cell lung cancer patients with or without metastatic tumors. MALAT1 is widely expressed in normal human tissues [42,43] and is upregulated in a variety of human cancers of the breast, prostate, colon, liver and uterus [44–47]. The MALAT1 locus at 11q13.1 has been identified to harbor chromosomal translocation break points associated with cancer [48–50]. MALAT1 is localized in nuclear speckles [42,43] and was found to be upregulated in hepatocellular carcinoma, breast, pancreas, osteosarcoma, colon and prostate cancers [44–47,51]. It has been shown that increased expression of MALAT1 can be used as a prognostic marker for HCC patients following liver transplantation. A number of studies have implicated MALAT1 in the regulation of cell motility, owing to its high levels of expression in cancers. For example, RNA interference-mediated silencing of MALAT1 reduced the in vitro migration of lung adenocarcinoma cells by influencing the expression of motility-related genes. Recent studies of MALAT1 knockout mice have not revealed any cellular phenotype; future studies in which mice are exposed to different stresses, such as induction of cancer, may unveil its function. It is known that MALAT1, as well as HOTAIR, plays vital roles in human cells, but it is possible that they have no significant role in living animals under normal physiological conditions [54–56].

4. Oncogenic and Tumor Suppressor lncRNA

H19 is expressed from the maternal allele and has a pivotal role in genomic imprinting during cell growth and development. The locus contains H19 and insulin-like growth factor 2 (IGF2), which are imprinted.
This leads to differential expression of both genes: H19 from the maternal and IGF2 from the paternal allele [57,58]. Loss of imprinting results in misexpression of H19 and has been observed in many tumors, including hepatocellular and bladder cancer [59,60]. This lncRNA has been linked to both oncogenic and tumor suppressor properties. c-MYC induces the expression of H19 in different cell types, where H19 potentiates tumorigenesis. In addition, c-MYC down-regulates expression of the imprinted IGF2 gene. H19 transcripts are precursors of miR-675, which functionally down-regulates the retinoblastoma tumor suppressor gene in human colorectal cancer. The data support H19 deregulation conferring either oncogenic or tumor suppressor properties, although the exact mechanism is still elusive.

5. Tumor Suppressor lncRNA

MEG3—Maternally Expressed Gene 3
LncRNA MEG3 is a transcript of the maternally imprinted gene. MEG3 is expressed in normal pituitary cells; loss of expression is observed in pituitary adenomas and in the majority of meningiomas and meningioma cell lines [62,63]. MEG3 activates the tumor suppressor protein p53. Normally, p53 protein levels are extremely low due to rapid degradation via the ubiquitin-proteasome pathway. The ubiquitination of p53 is mainly mediated by MDM2, an E3 ubiquitin ligase. MEG3 down-regulates MDM2 expression, which suggests that MDM2 down-regulation is one of the mechanisms whereby MEG3 activates p53. MEG3 significantly increases p53 protein levels and stimulates p53-dependent transcription. MEG3 enhances p53 binding to target promoters such as GDF15, but not p21, and is also able to inhibit cell proliferation in the absence of p53, suggesting that MEG3 is both a p53-dependent and a p53-independent tumor suppressor [62–65].

GAS5—Growth Arrest-Specific 5 is widely expressed in embryonic and adult tissues. Its expression is almost undetectable in growing leukemia cells and abundant in saturation density-arrested cells [66,67]. GAS5 functions as a starvation- or growth arrest-linked riborepressor of the glucocorticoid receptors, binding to their DNA-binding domain and inhibiting the association of these receptors with their DNA recognition sequence. This suppresses the induction of several responsive genes, including the gene encoding cellular inhibitor of apoptosis 2 (cIAP2), reducing cell metabolism and sensitizing cells to apoptosis [4,67]. GAS5 can induce apoptosis directly or indirectly in prostate and breast cancer cell lines, and GAS5 expression is significantly lower in breast cancers than in normal breast epithelial tissue.

CCND1/Cyclin D1 is a heterogeneous lncRNA transcribed from the promoter region of the Cyclin D1 gene. Cyclin D1 is a cell cycle regulator that is frequently mutated, amplified and overexpressed in a variety of cancers. The lncRNA recruits the RNA-binding protein TLS, a key transcriptional regulatory sensor of DNA damage signals. Upon binding, TLS undergoes allosteric modification, modulating the activities of CREB-binding protein (CBP) and p300 and resulting in inhibition of cyclin D1 gene expression.

LincRNA-p21 expression is directly induced by the p53 signaling pathway. It is required for the global repression of genes that interfere with p53 function to regulate cellular apoptosis. LincRNA-p21-mediated gene repression occurs through physical interaction with the RNA-binding protein hnRNP K, leading to the promoters of genes being repressed in a p53-dependent manner.
6. Diagnostic Benefits of lncRNA

So far, the majority of cancer biomarkers have been protein-coding genes, their transcripts, or their protein products. Non-coding regions have emerged as biomarker hotspots only recently. With the advent of high-throughput sequencing, we are now able to identify deregulated expression of the transcriptome at much higher resolution, which allows us to detect smaller changes in expression levels. In the case of lncRNAs, whose main function is the regulation of other genes' expression, the importance of maintained lncRNA expression is evident. Since cancer is a complicated disease involving many factors, molecular biomarkers are valuable diagnostic and prognostic tools that could ease disease management. Compared with protein-coding RNAs, lncRNAs have an advantage as markers, since their own expression is a better indicator of tumor status.

Many lncRNAs are now connected to cancer thanks to new technologies and are emerging in the field of molecular biology as new regulatory players. Several lncRNAs have been found to be deregulated in a wide variety of cancers (Table 1). In breast cancer research, higher expression of SRA and SRAP compared with normal tissue has been observed. SRAP expression possibly contributes to higher survival for patients undergoing tamoxifen treatment. The expression of MALAT1 is elevated in osteosarcoma patients with a poor response to chemotherapy, which suggests that this transcript plays a crucial role in the pathology of tumors. Additionally, MALAT1 serves as an independent prognostic marker for patient survival in early-stage non-small cell lung cancer. In hepatocellular carcinoma (HCC), definitive diagnosis of lymph node metastasis is difficult without histological evidence. It has been demonstrated that a significant correlation between HOTAIR gene expression and lymph node metastasis exists, suggesting that HOTAIR lncRNA is a potential biomarker for predicting lymph node metastasis. Upregulation of HOTAIR is closely associated with gastrointestinal stromal tumor (GIST) aggressiveness and metastasis, and it can be used as a potential biomarker [38,91]. MALAT1 is a powerful biomarker for HCC recurrence prediction following liver transplantation. Moreover, silencing MALAT1 activity in HCC would be a potential anticancer therapy to prevent tumor recurrence after orthotopic liver transplantation. SPRY4-IT1 expression is substantially increased in patient melanoma cell samples compared to melanocytes. The elevated expression of SPRY4-IT1 in melanoma cells, its accumulation in the cell cytoplasm, and its effects on cell dynamics suggest that the misexpression of SPRY4-IT1 may have an important role in melanoma development, and it could be an early biomarker and a key regulator of melanoma pathogenesis in humans.

Novel potential biomarkers can be discovered among highly expressed cancer-associated lncRNAs. Therapeutic benefit can be obtained through pathways mediating transcriptional gene silencing, especially those of tumor suppressors and oncogenes. For patients' comfort, biomarkers should be detectable in samples obtained in a non-invasive way. Desirable samples are body fluids, such as serum or urine, where circulating nucleic acids (CNAs), both DNA and RNA species, are found. CNAs are found in plasma, cell-free serum, sputum and urine [29,94–97].
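As a toy illustration of how deregulated lncRNA expression is flagged in such screens, the sketch below computes log2 fold changes between tumor and normal samples. The gene list echoes examples from this review, but every number is hypothetical, and real analyses involve normalization and statistical testing.

```python
# Minimal sketch: flag candidate lncRNA biomarkers by expression fold change
# between tumor and matched normal samples. All numbers are hypothetical;
# real screens use normalized counts, replicates, and statistical tests.

import math

expression = {
    # lncRNA: (mean normal, mean tumor), in arbitrary normalized units
    "HOTAIR": (1.2, 55.0),
    "MALAT1": (30.0, 95.0),
    "GAS5":   (40.0, 9.0),
    "MEG3":   (25.0, 1.5),
}

def log2_fold_change(normal: float, tumor: float) -> float:
    return math.log2(tumor / normal)

for name, (normal, tumor) in expression.items():
    lfc = log2_fold_change(normal, tumor)
    if abs(lfc) >= 1:   # at least a two-fold change, an arbitrary cutoff
        direction = "up in tumor" if lfc > 0 else "down in tumor"
        print(f"{name}: log2FC={lfc:+.1f} ({direction})")
```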
PRNCR1 (prostate cancer non-coding RNA 1) expression is upregulated in some prostate cancer cells, as well as in the precursor lesion prostatic intraepithelial neoplasia, and it is considered a tumor marker. Suggestions that lncRNAs can be used as biomarkers and/or drug targets have arisen from numerous studies comparing the expression patterns of tumor tissues with those of normal ones. The possible therapies arising from this knowledge would be beneficial in cases where protein-targeted drugs have not been effective. A recent study has shown that reduced expression of ncRAN enhanced the effect of a chemotherapeutic drug in vitro. This opens another possibility for cancer treatment, where a combination of drugs would have a much greater effect.

LncRNAs often exhibit tissue-specific expression patterns that distinguish them from miRNAs and protein-coding mRNAs, which are expressed across multiple tissue types. This specificity makes them precise biomarkers for cancer diagnostics. PCA3 is a prostate-specific lncRNA overexpressed in prostate cancer. Although its functions are not understood, it has nevertheless been utilized as a biomarker in a clinical test: expression of the PCA3 transcript is determined from prostate cells in urine samples of patients [100,101]. Another lncRNA detected in body fluids is HULC, whose expression is disrupted in hepatocellular carcinomas and which can be monitored in patients' blood sera. To understand the biology of cancer, it will be essential to identify and annotate lncRNAs and to study their expression profiles in human tissues and diseases [103,104]. With this, the potential of lncRNAs for biology and medicine will be revealed.

7. Conclusions

Long non-coding RNAs have recently arisen as new discoveries in the field of molecular biology. Since only a few individual lncRNAs have been functionally studied, many questions remain to be addressed. At the moment, the full potential of cancer therapy has not yet been realized; its future lies in the specific targeting of cancer cells and the specific delivery of drugs. LncRNAs are a possible resource for developing diagnostics and therapies, although a better understanding of their functions and of the precise mechanisms through which they act is needed first. Another possibility for cancer treatment lies in combinations of drugs, where one drug would change the expression of a lncRNA in such a way that a chemotherapeutic drug has a greater effect. Since lncRNAs probably function through their secondary structure, special molecules could be developed to disrupt that structure, or to bind lncRNAs and form complexes through which the lncRNA would be inactivated. These molecules should be highly specific so as not to disrupt other molecules and mechanisms. To discover the right molecules, more studies of the complex mechanisms involving lncRNAs are needed.

RNA used to be regarded as just a messenger between coding genes and the proteins they encode. However, "transcriptional noise" is turning out to be a very important part of regulatory processes. With the discovery of lncRNAs and their functions, a new world of molecular biology is emerging. Much research is still on the way towards a deeper understanding of the regulatory processes in which lncRNAs are important players. LncRNA deregulation in human disease is unveiling the complexity of cellular processes. Studying the mechanisms of lncRNA involvement in oncogenic and tumor suppressive pathways will lead to new cancer diagnostic markers and will pave the way to novel therapeutic targets.
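As a concrete footnote to the PCA3 example in Section 6, the sketch below mimics the general logic of a fluid-based lncRNA readout in which transcript levels are normalized to a reference mRNA. The normalization to PSA mRNA, the scaling by 1,000, and the cutoff are modeled on published descriptions of the commercial PCA3 urine assay, but the function names and numbers here are illustrative, not the assay's actual implementation.

```python
# Illustrative sketch of a PCA3-style score: lncRNA copies normalized to a
# reference mRNA (PSA) measured in the same urine sample. The numbers and
# cutoff below are illustrative; this is not the clinical assay.

def pca3_score(pca3_mrna_copies: float, psa_mrna_copies: float) -> float:
    """Ratio of PCA3 lncRNA to PSA mRNA, scaled by 1000."""
    if psa_mrna_copies <= 0:
        raise ValueError("PSA mRNA signal required to normalize the sample")
    return pca3_mrna_copies / psa_mrna_copies * 1000

# Hypothetical transcript counts from two urine samples.
samples = {"patient A": (90.0, 1500.0), "patient B": (12.0, 2000.0)}
CUTOFF = 35  # example decision threshold reported in the literature

for name, (pca3, psa) in samples.items():
    score = pca3_score(pca3, psa)
    flag = "elevated" if score >= CUTOFF else "not elevated"
    print(f"{name}: score={score:.0f} ({flag})")
```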
This work was supported by program P3-0054 of the Slovenian Research Agency.

Conflict of Interest: The authors declare no conflict of interest.

References
- Stein, L.D. Human genome: End of the beginning. Nature 2004, 431, 915–916.
- Ponting, C.P.; Belgard, T.G. Transcribed dark matter: meaning or myth? Hum. Mol. Genet 2010, 19, R162–R168.
- Lander, E.S.; Linton, L.M.; Birren, B.; Nusbaum, C.; Zody, M.C.; Baldwin, J.; Devon, K.; Dewar, K.; Doyle, M.; FitzHugh, W.; et al. Initial sequencing and analysis of the human genome. Nature 2001, 409, 860–921.
- Gutschner, T.; Diederichs, S. The Hallmarks of Cancer: A long non-coding RNA point of view. RNA Biol 2012, 9, 703–719.
- Nie, L.; Wu, H.-J.; Hsu, J.-M.; Chang, S.-S.; LaBaff, A.; Li, C.-W.; Wang, Y.; Hsu, J.L.; Hung, M.-C. Long non-coding RNAs: Versatile master regulators of gene expression and crucial players in cancer. Am. J. Transl. Res 2012, 4, 127–150.
- Birney, E.; Stamatoyannopoulos, J.A.; Dutta, A.; Guigó, R.; Gingeras, T.R.; Margulies, E.H.; Weng, Z.; Snyder, M.; Dermitzakis, E.T.; Thurman, R.E.; et al. Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project. Nature 2007, 447, 799–816.
- Costa, F.F. Non-coding RNAs: Meet thy masters. Bioessays 2010, 32, 599–608.
- Kapranov, P.; Willingham, A.T.; Gingeras, T.R. Genome-wide transcription and the implications for genomic organization. Nat. Rev. Genet 2007, 8, 413–423.
- Frith, M.C.; Pheasant, M.; Mattick, J.S. The amazing complexity of the human transcriptome. Eur. J. Hum. Genet 2005, 13, 894–897.
- Khachane, A.N.; Harrison, P.M. Mining mammalian transcript data for functional long non-coding RNAs. PLoS One 2010, 5, doi:10.1371/journal.pone.0010316.
- Mattick, J.S.; Makunin, I.V. Non-coding RNA. Hum. Mol. Genet 2006, 15, R17–R29.
- Guttman, M.; Amit, I.; Garber, M.; French, C.; Lin, M.F.; Feldser, D.; Huarte, M.; Zuk, O.; Carey, B.W.; Cassady, J.P.; et al. Chromatin signature reveals over a thousand highly conserved large non-coding RNAs in mammals. Nature 2009, 458, 223–227.
- Washietl, S.; Hofacker, I.L.; Lukasser, M.; Huttenhofer, A.; Stadler, P.F. Mapping of conserved RNA secondary structures predicts thousands of functional noncoding RNAs in the human genome. Nat. Biotechnol 2005, 23, 1383–1390.
- Taft, R.J.; Pang, K.C.; Mercer, T.R.; Dinger, M.; Mattick, J.S. Non-coding RNAs: Regulators of disease. J. Pathol 2010, 220, 126–139.
- Sana, J.; Faltejskova, P.; Svoboda, M.; Slaby, O. Novel classes of non-coding RNAs and cancer. J. Transl. Med. 2012, 10, doi:10.1186/1479-5876-10-103.
- Ponting, C.P.; Oliver, P.L.; Reik, W. Evolution and functions of long noncoding RNAs. Cell 2009, 136, 629–641.
- Wang, K.C.; Chang, H.Y. Molecular mechanisms of long noncoding RNAs. Mol. Cell 2011, 43, 904–914.
- Banfai, B.; Jia, H.; Khatun, J.; Wood, E.; Risk, B.; Gundling, W.E.; Kundaje, A.; Gunawardena, H.P.; Yu, Y.; Xie, L.; et al. Long noncoding RNAs are rarely translated in two human cell lines. Genome Res 2012, 22, 1646–1657.
- Wilusz, J.E.; Sunwoo, H.; Spector, D.L. Long noncoding RNAs: Functional surprises from the RNA world. Genes Dev 2009, 23, 1494–1504.
- Wapinski, O.; Chang, H.Y. Long noncoding RNAs and human disease. Trends Cell Biol 2011, 21, 354–361.
- Kotake, Y.; Nakagawa, T.; Kitagawa, K.; Suzuki, S.; Liu, N.; Kitagawa, M.; Xiong, Y. Long non-coding RNA ANRIL is required for the PRC2 recruitment to and silencing of p15(INK4B) tumor suppressor gene. Oncogene 2011, 30, 1956–1962.
- Wang, X.; Arai, S.; Song, X.; Reichart, D.; Du, K.; Pascual, G.; Tempst, P.; Rosenfeld, M.G.; Glass, C.K.; Kurokawa, R. Induced ncRNAs allosterically modify RNA-binding proteins in cis to inhibit transcription. Nature 2008, 454, 126–130.
- Gupta, R.A.; Shah, N.; Wang, K.C.; Kim, J.; Horlings, H.M.; Wong, D.J.; Tsai, M.C.; Hung, T.; Argani, P.; Rinn, J.L.; et al. Long non-coding RNA HOTAIR reprograms chromatin state to promote cancer metastasis. Nature 2010, 464, 1071–1076.
- Hayami, S.; Kelly, J.D.; Cho, H.S.; Yoshimatsu, M.; Unoki, M.; Tsunoda, T.; Field, H.I.; Neal, D.E.; Yamaue, H.; Ponder, B.A.; et al. Overexpression of LSD1 contributes to human carcinogenesis through chromatin regulation in various cancers. Int. J. Cancer 2011, 128, 574–586.
- Yap, K.L.; Li, S.; Munoz-Cabello, A.M.; Raguz, S.; Zeng, L.; Mujtaba, S.; Gil, J.; Walsh, M.J.; Zhou, M.M. Molecular interplay of the noncoding RNA ANRIL and methylated histone H3 lysine 27 by polycomb CBX7 in transcriptional silencing of INK4a. Mol. Cell 2010, 38, 662–674.
- Lanz, R.B.; Chua, S.S.; Barron, N.; Soder, B.M.; DeMayo, F.; O'Malley, B.W. Steroid receptor RNA activator stimulates proliferation as well as apoptosis in vivo. Mol. Cell. Biol 2003, 23, 7163–7176.
- Chooniedass-Kothari, S.; Vincett, D.; Yan, Y.; Cooper, C.; Hamedani, M.K.; Myal, Y.; Leygue, E. The protein encoded by the functional steroid receptor RNA activator is a new modulator of ER alpha transcriptional activity. FEBS Lett 2010, 584, 1174–1180.
- Rinn, J.L.; Kertesz, M.; Wang, J.K.; Squazzo, S.L.; Xu, X.; Brugmann, S.A.; Goodnough, L.H.; Helms, J.A.; Farnham, P.J.; Segal, E.; et al. Functional demarcation of active and silent chromatin domains in human HOX loci by noncoding RNAs. Cell 2007, 129, 1311–1323.
- Geng, Y.J.; Xie, S.L.; Li, Q.; Ma, J.; Wang, G.Y. Large intervening non-coding RNA HOTAIR is associated with hepatocellular carcinoma progression. J. Int. Med. Res 2011, 39, 2119–2128.
- Tsai, M.C.; Manor, O.; Wan, Y.; Mosammaparast, N.; Wang, J.K.; Lan, F.; Shi, Y.; Segal, E.; Chang, H.Y. Long noncoding RNA as modular scaffold of histone modification complexes. Science 2010, 329, 689–693.
- Morey, L.; Helin, K. Polycomb group protein-mediated repression of transcription. Trends Biochem. Sci 2010, 35, 323–332.
- Zhao, J.; Ohsumi, T.K.; Kung, J.T.; Ogawa, Y.; Grau, D.J.; Sarma, K.; Song, J.J.; Kingston, R.E.; Borowsky, M.; Lee, J.T. Genome-wide identification of polycomb-associated RNAs by RIP-seq. Mol. Cell 2010, 40, 939–953.
- Zhang, Z.; Jones, A.; Sun, C.W.; Li, C.; Chang, C.W.; Joo, H.Y.; Dai, Q.; Mysliwiec, M.R.; Wu, L.C.; Guo, Y.; et al. PRC2 complexes with JARID2, MTF2, and esPRC2p48 in ES cells to modulate ES cell pluripotency and somatic cell reprogramming. Stem Cells 2011, 29, 229–240.
- Simon, J.A.; Lange, C.A. Roles of the EZH2 histone methyltransferase in cancer epigenetics. Mutat. Res 2008, 647, 21–29.
- Sirchia, S.M.; Tabano, S.; Monti, L.; Recalcati, M.P.; Gariboldi, M.; Grati, F.R.; Porta, G.; Finelli, P.; Radice, P.; Miozzo, M. Misbehaviour of XIST RNA in breast cancer cells. PLoS One 2009, 4, doi:10.1371/journal.pone.0005559.
- Yang, Z.; Zhou, L.; Wu, L.M.; Lai, M.C.; Xie, H.Y.; Zhang, F.; Zheng, S.S. Overexpression of long non-coding RNA HOTAIR predicts tumor recurrence in hepatocellular carcinoma patients following liver transplantation. Ann. Surg. Oncol 2011, 18, 1243–1250.
- Kogo, R.; Shimamura, T.; Mimori, K.; Kawahara, K.; Imoto, S.; Sudo, T.; Tanaka, F.; Shibata, K.; Suzuki, A.; Komune, S.; et al. Long noncoding RNA HOTAIR regulates polycomb-dependent chromatin modification and is associated with poor prognosis in colorectal cancers. Cancer Res 2011, 71, 6320–6326.
- Niinuma, T.; Suzuki, H.; Nojima, M.; Nosho, K.; Yamamoto, H.; Takamaru, H.; Yamamoto, E.; Maruyama, R.; Nobuoka, T.; Miyazaki, Y.; et al. Upregulation of miR-196a and HOTAIR drive malignant character in gastrointestinal stromal tumors. Cancer Res 2012, 72, 1126–1136.
- Katayama, S.; Tomaru, Y.; Kasukawa, T.; Waki, K.; Nakanishi, M.; Nakamura, M.; Nishida, H.; Yap, C.C.; Suzuki, M.; Kawai, J.; et al. Antisense transcription in the mammalian transcriptome. Science 2005, 309, 1564–1566.
- Kim, W.Y.; Sharpless, N.E. The regulation of INK4/ARF in cancer and aging. Cell 2006, 127, 265–275.
- Yu, W.; Gius, D.; Onyango, P.; Muldoon-Jacobs, K.; Karp, J.; Feinberg, A.P.; Cui, H. Epigenetic silencing of tumour suppressor gene p15 by its antisense RNA. Nature 2008, 451, 202–206.
- Ji, P.; Diederichs, S.; Wang, W.; Boing, S.; Metzger, R.; Schneider, P.M.; Tidow, N.; Brandt, B.; Buerger, H.; Bulk, E.; et al. MALAT-1, a novel noncoding RNA, and thymosin beta4 predict metastasis and survival in early-stage non-small cell lung cancer. Oncogene 2003, 22, 8031–8041.
- Hutchinson, J.N.; Ensminger, A.W.; Clemson, C.M.; Lynch, C.R.; Lawrence, J.B.; Chess, A. A screen for nuclear transcripts identifies two linked noncoding RNAs associated with SC35 splicing domains. BMC Genomics 2007, 8, doi:10.1186/1471-2164-8-39.
- Guffanti, A.; Iacono, M.; Pelucchi, P.; Kim, N.; Solda, G.; Croft, L.J.; Taft, R.J.; Rizzi, E.; Askarian-Amiri, M.; Bonnal, R.J.; et al. A transcriptional sketch of a primary human breast cancer by 454 deep sequencing. BMC Genomics 2009, 10, doi:10.1186/1471-2164-10-163.
- Yamada, K.; Kano, J.; Tsunoda, H.; Yoshikawa, H.; Okubo, C.; Ishiyama, T.; Noguchi, M. Phenotypic characterization of endometrial stromal sarcoma of the uterus. Cancer Sci 2006, 97, 106–112.
- Lin, R.; Maeda, S.; Liu, C.; Karin, M.; Edgington, T.S. A large noncoding RNA is a marker for murine hepatocellular carcinomas and a spectrum of human carcinomas. Oncogene 2007, 26, 851–858.
- Luo, J.H.; Ren, B.; Keryanov, S.; Tsang, G.C.; Reo, U.N.M.; Monga, S.P.; Storm, A.; Demetris, A.J.; Nalesnik, M.; Yu, Y.P.; et al. Transcriptomic and genomic analysis of human hepatocellular carcinomas and hepatoblastomas. Hepatology 2006, 44, 1012–1024.
- Davis, I.J.; Hsi, B.L.; Arroyo, J.D.; Vargas, S.O.; Yeh, Y.A.; Motyckova, G.; Valencia, P.; Perez-Atayde, A.R.; Argani, P.; Ladanyi, M.; et al. Cloning of an Alpha-TFEB fusion in renal tumors harboring the t(6;11)(p21;q13) chromosome translocation. Proc. Natl. Acad. Sci. USA 2003, 100, 6051–6056.
- Kuiper, R.P.; Schepens, M.; Thijssen, J.; van Asseldonk, M.; van den Berg, E.; Bridge, J.; Schuuring, E.; Schoenmakers, E.F.; van Kessel, A.G. Upregulation of the transcription factor TFEB in t(6;11)(p21;q13)-positive renal cell carcinomas due to promoter substitution. Hum. Mol. Genet 2003, 12, 1661–1669.
- Rajaram, V.; Knezevich, S.; Bove, K.E.; Perry, A.; Pfeifer, J.D. DNA sequence of the translocation breakpoints in undifferentiated embryonal sarcoma arising in mesenchymal hamartoma of the liver harboring the t(11;19)(q11;q13.4) translocation. Genes Chromosomes Cancer 2007, 46, 508–513.
- Fellenberg, J.; Bernd, L.; Delling, G.; Witte, D.; Zahlten-Hinguranage, A. Prognostic significance of drug-regulated genes in high-grade osteosarcoma. Mod. Pathol 2007, 20, 1085–1094.
- Lai, M.C.; Yang, Z.; Zhou, L.; Zhu, Q.Q.; Xie, H.Y.; Zhang, F.; Wu, L.M.; Chen, L.M.; Zheng, S.S. Long non-coding RNA MALAT-1 overexpression predicts tumor recurrence of hepatocellular carcinoma after liver transplantation. Med. Oncol 2012, 29, 1810–1816.
- Tano, K.; Mizuno, R.; Okada, T.; Rakwal, R.; Shibato, J.; Masuo, Y.; Ijiri, K.; Akimitsu, N. MALAT-1 enhances cell motility of lung adenocarcinoma cells by influencing the expression of motility-related genes. FEBS Lett 2010, 584, 4575–4580.
- Nakagawa, S.; Ip, J.Y.; Shioi, G.; Tripathi, V.; Zong, X.; Hirose, T.; Prasanth, K.V. Malat1 is not an essential component of nuclear speckles in mice. RNA 2012, 18, 1487–1499.
- Bickmore, W.A.; Schorderet, P.; Duboule, D. Structural and functional differences in the long non-coding RNA Hotair in mouse and human. PLoS Genet. 2011, 7, doi:10.1371/journal.pgen.1002071.
- Eißmann, M.; Gutschner, T.; Hämmerle, M.; Günther, S.; Caudron-Herger, M.; Groß, M.; Schirmacher, P.; Rippe, K.; Braun, T.; Zörnig, M.; Diederichs, S. Loss of the abundant nuclear non-coding RNA MALAT1 is compatible with life and development. RNA Biol 2012, 9, 1076–1087.
- Gabory, A.; Jammes, H.; Dandolo, L. The H19 locus: role of an imprinted non-coding RNA in growth and development. Bioessays 2010, 32, 473–480.
- Barsyte-Lovejoy, D.; Lau, S.K.; Boutros, P.C.; Khosravi, F.; Jurisica, I.; Andrulis, I.L.; Tsao, M.S.; Penn, L.Z. The c-Myc oncogene directly induces the H19 noncoding RNA by allele-specific binding to potentiate tumorigenesis. Cancer Res 2006, 66, 5330–5337.
- van Bakel, H.; Nislow, C.; Blencowe, B.J.; Hughes, T.R. Most "dark matter" transcripts are associated with known genes. PLoS Biol. 2010, 8, doi:10.1371/journal.pbio.1000371.
- Oosumi, T.; Belknap, W.R.; Garlick, B. Mariner transposons in humans. Nature 1995, 378, 672.
- Tsang, W.P.; Ng, E.K.; Ng, S.S.; Jin, H.; Yu, J.; Sung, J.J.; Kwok, T.T. Oncofetal H19-derived miR-675 regulates tumor suppressor RB in human colorectal cancer. Carcinogenesis 2010, 31, 350–358.
- Gejman, R.; Batista, D.L.; Zhong, Y.; Zhou, Y.; Zhang, X.; Swearingen, B.; Stratakis, C.A.; Hedley-Whyte, E.T.; Klibanski, A. Selective loss of MEG3 expression and intergenic differentially methylated region hypermethylation in the MEG3/DLK1 locus in human clinically nonfunctioning pituitary adenomas. J. Clin. Endocrinol. Metab 2008, 93, 4119–4125.
- Zhang, X.; Gejman, R.; Mahta, A.; Zhong, Y.; Rice, K.A.; Zhou, Y.; Cheunsuchon, P.; Louis, D.N.; Klibanski, A. Maternally expressed gene 3, an imprinted noncoding RNA gene, is associated with meningioma pathogenesis and progression. Cancer Res 2010, 70, 2350–2358.
- Zhou, Y.; Zhang, X.; Klibanski, A. MEG3 noncoding RNA: A tumor suppressor. J. Mol. Endocrinol 2012, 48, R45–R53.
- Benetatos, L.; Vartholomatos, G.; Hatzimichael, E. MEG3 imprinted gene contribution in tumorigenesis. Int. J. Cancer 2011, 129, 773–779.
- Coccia, E.M.; Cicala, C.; Charlesworth, A.; Ciccarelli, C.; Rossi, G.B.; Philipson, L.; Sorrentino, V. Regulation and expression of a growth arrest-specific gene (gas5) during growth, differentiation, and development. Mol. Cell. Biol 1992, 12, 3514–3521.
- Kino, T.; Hurt, D.E.; Ichijo, T.; Nader, N.; Chrousos, G.P. Noncoding RNA gas5 is a growth arrest- and starvation-associated repressor of the glucocorticoid receptor. Sci. Signal 2010, 3, ra8.
- Mourtada-Maarabouni, M.; Pickard, M.R.; Hedge, V.L.; Farzaneh, F.; Williams, G.T. GAS5, a non-protein-coding RNA, controls apoptosis and is downregulated in breast cancer. Oncogene 2009, 28, 195–208.
- Diehl, J.A. Cycling to cancer with cyclin D1. Cancer Biol. Ther 2002, 1, 226–231.
- Huarte, M.; Guttman, M.; Feldser, D.; Garber, M.; Koziol, M.J.; Kenzelmann-Broz, D.; Khalil, A.M.; Zuk, O.; Amit, I.; Rabani, M.; et al. A large intergenic noncoding RNA induced by p53 mediates global gene repression in the p53 response. Cell 2010, 142, 409–419.
- Amaral, P.P.; Clark, M.B.; Gascoigne, D.K.; Dinger, M.E.; Mattick, J.S. lncRNAdb: A reference database for long noncoding RNAs. Nucleic Acids Res 2011, 39, D146–D151.
- He, H.; Nagy, R.; Liyanarachchi, S.; Jiao, H.; Li, W.; Suster, S.; Kere, J.; de la Chapelle, A. A susceptibility locus for papillary thyroid carcinoma on chromosome 8q24. Cancer Res 2009, 69, 625–631.
- Chen, W.; Böcker, W.; Brosius, J.; Tiedge, H. Expression of neural BC200 RNA in human tumours. J. Pathol 1997, 183, 345–351.
- Iacoangeli, A.; Lin, Y.; Morley, E.J.; Muslimov, I.A.; Bianchi, R.; Reilly, J.; Weedon, J.; Diallo, R.; Bocker, W.; Tiedge, H. BC200 RNA in invasive and preinvasive breast cancer. Carcinogenesis 2004, 25, 2125–2133.
- Chung, S.; Nakagawa, H.; Uemura, M.; Piao, L.; Ashikawa, K.; Hosono, N.; Takata, R.; Akamatsu, S.; Kawaguchi, T.; Morizono, T.; et al. Association of a novel long non-coding RNA in 8q24 with prostate cancer susceptibility. Cancer Sci 2011, 102, 245–252.
- Hibi, K.; Nakamura, H.; Hirai, A.; Fujikake, Y.; Kasai, Y.; Akiyama, S.; Ito, K.; Takagi, H. Loss of H19 imprinting in esophageal cancer. Cancer Res 1996, 56, 480–482.
- Fellig, Y.; Ariel, I.; Ohana, P.; Schachter, P.; Sinelnikov, I.; Birman, T.; Ayesh, S.; Schneider, T.; de Groot, N.; Czerniak, A.; et al. H19 expression in hepatic metastases from a range of human carcinomas. J. Clin. Pathol 2005, 58, 1064–1068.
- Matouk, I.J.; de Groot, N.; Mezan, S.; Ayesh, S.; Abu-lail, R.; Hochberg, A.; Galun, E. The H19 non-coding RNA is essential for human tumor growth. PLoS One 2007, 2, doi:10.1371/journal.pone.0000845.
- Arima, T.; Matsuda, T.; Takagi, N.; Wake, N. Association of IGF2 and H19 imprinting with choriocarcinoma development. Cancer Genet. Cytogenet 1997, 93, 39–47.
- Berteaux, N.; Lottin, S.; Monte, D.; Pinte, S.; Quatannens, B.; Coll, J.; Hondermarck, H.; Curgy, J.J.; Dugimont, T.; Adriaenssens, E. H19 mRNA-like noncoding RNA promotes breast cancer cell proliferation through positive control by E2F1. J. Biol. Chem 2005, 280, 29625–29636.
- Matouk, I.J.; Abbasi, I.; Hochberg, A.; Galun, E.; Dweik, H.; Akkawi, M. Highly upregulated in liver cancer noncoding RNA is overexpressed in hepatic colorectal metastasis. Eur. J. Gastroenterol. Hepatol 2009, 21, 688–692.
- Panzitt, K.; Tschernatsch, M.M.; Guelly, C.; Moustafa, T.; Stradner, M.; Strohmaier, H.M.; Buck, C.R.; Denk, H.; Schroeder, R.; Trauner, M.; et al. Characterization of HULC, a novel gene with striking up-regulation in hepatocellular carcinoma, as noncoding RNA. Gastroenterology 2007, 132, 330–342.
- Pasic, I.; Shlien, A.; Durbin, A.D.; Stavropoulos, D.J.; Baskin, B.; Ray, P.N.; Novokmet, A.; Malkin, D. Recurrent focal copy-number changes and loss of heterozygosity implicate two noncoding RNAs and one tumor suppressor gene at chromosome 3q13.31 in osteosarcoma. Cancer Res 2010, 70, 160–171.
- Poliseno, L.; Salmena, L.; Zhang, J.; Carver, B.; Haveman, W.J.; Pandolfi, P.P. A coding-independent function of gene and pseudogene mRNAs regulates tumour biology. Nature 2010, 465, 1033–1038.
- Khaitan, D.; Dinger, M.E.; Mazar, J.; Crawford, J.; Smith, M.A.; Mattick, J.S.; Perera, R.J. The melanoma-upregulated long noncoding RNA SPRY4-IT1 modulates apoptosis and invasion. Cancer Res 2011, 71, 3852–3862.
- Wang, F.; Li, X.; Xie, X.; Zhao, L.; Chen, W. UCA1, a non-protein-coding RNA up-regulated in bladder carcinoma and embryo, influencing cell growth and promoting invasion. FEBS Lett 2008, 582, 1919–1927.
- Wang, X.S.; Zhang, Z.; Wang, H.C.; Cai, J.L.; Xu, Q.W.; Li, M.Q.; Chen, Y.C.; Qian, X.P.; Lu, T.J.; Yu, L.Z.; et al. Rapid identification of UCA1 as a very sensitive and specific unique marker for human bladder carcinoma. Clin. Cancer Res 2006, 12, 4851–4858.
- Dallosso, A.R.; Hancock, A.L.; Malik, S.; Salpekar, A.; King-Underwood, L.; Pritchard-Jones, K.; Peters, J.; Moorwood, K.; Ward, A.; Malik, K.T.; et al. Alternately spliced WT1 antisense transcripts interact with WT1 sense RNA and show epigenetic and splicing defects in cancer. RNA 2007, 13, 2287–2299.
- de Kok, J.B.; Verhaegh, G.W.; Roelofs, R.W.; Hessels, D.; Kiemeney, L.A.; Aalders, T.W.; Swinkels, D.W.; Schalken, J.A. DD3(PCA3), a very sensitive and specific marker to detect prostate tumors. Cancer Res 2002, 62, 2695–2698.
- Leygue, E. Steroid receptor RNA activator (SRA1): Unusual bifaceted gene products with suspected relevance to breast cancer. Nucl. Recept. Signal. 2007, 4, doi:10.1621/nrs.05006.
- Qi, P.; Du, X. The long non-coding RNAs, a new cancer diagnostic and therapeutic gold mine. Mod. Pathol. 2012, doi:10.1038/modpathol.2012.160.
- Gibb, E.A.; Brown, C.J.; Lam, W.L. The functional role of long non-coding RNA in human carcinomas. Mol. Cancer 2011, 10, doi:10.1186/1476-4598-10-38.
- Morris, K.V. RNA-directed transcriptional gene silencing and activation in human cells. Oligonucleotides 2009, 19, 299–306.
- Schöler, N.; Langer, C.; Döhner, H.; Buske, C.; Kuchenbauer, F. Serum microRNAs as a novel class of biomarkers: A comprehensive review of the literature. Exp. Hematol 2010, 38, 1126–1130.
- Xie, Y.; Todd, N.W.; Liu, Z.; Zhan, M.; Fang, H.; Peng, H.; Alattar, M.; Deepak, J.; Stass, S.A.; Jiang, F. Altered miRNA expression in sputum for diagnosis of non-small cell lung cancer. Lung Cancer 2010, 67, 170–176.
- Xing, L.; Todd, N.W.; Yu, L.; Fang, H.; Jiang, F. Early detection of squamous cell lung cancer in sputum by a panel of microRNA markers. Mod. Pathol 2010, 23, 1157–1164.
- Kosaka, N.; Iguchi, H.; Ochiya, T. Circulating microRNA in body fluid: a new potential biomarker for cancer diagnosis and prognosis. Cancer Sci 2010, 101, 2087–2092.
- Zhu, Y.; Yu, M.; Li, Z.; Kong, C.; Bi, J.; Li, J.; Gao, Z. ncRAN, a newly identified long noncoding RNA, enhances human bladder tumor growth, invasion, and survival. Urology 2011, 77, 510.e1–510.e5.
- Prensner, J.R.; Iyer, M.K.; Balbin, O.A.; Dhanasekaran, S.M.; Cao, Q.; Brenner, J.C.; Laxman, B.; Asangani, I.A.; Grasso, C.S.; Kominsky, H.D.; et al. Transcriptome sequencing across a prostate cancer cohort identifies PCAT-1, an unannotated lincRNA implicated in disease progression. Nat. Biotechnol 2011, 29, 742–749.
- Hessels, D.; Klein Gunnewiek, J.M.T.; van Oort, I.; Karthaus, H.F.M.; van Leenders, G.J.L.; van Balken, B.; Kiemeney, L.A.; Witjes, J.A.; Schalken, J.A. DD3PCA3-based molecular urine analysis for the diagnosis of prostate cancer. Eur. Urol 2003, 44, 8–16.
- Tinzl, M.; Marberger, M.; Horvath, S.; Chypre, C. DD3PCA3 RNA analysis in urine: a new perspective for detecting prostate cancer. Eur. Urol 2004, 46, 182–187.
- Muro, E.M.; Andrade-Navarro, M.A. Pseudogenes as an alternative source of natural antisense transcripts. BMC Evol. Biol. 2010, 10, doi:10.1186/1471-2148-10-338.
- Morris, K.V.; Santoso, S.; Turner, A.M.; Pastori, C.; Hawkins, P.G. Bidirectional transcription directs both transcriptional gene activation and suppression in human cells. PLoS Genet. 2008, 4, doi:10.1371/journal.pgen.1000258.
- Lyle, R.; Watanabe, D.; te Vruchte, D.; Lerchner, W.; Smrzka, O.W.; Wutz, A.; Schageman, J.; Hahner, L.; Davies, C.; Barlow, D.P. The imprinted antisense RNA at the Igf2r locus overlaps but does not imprint Mas1. Nat. Genet 2000, 25, 19–21.

| Name | Cytoband (Size) | Cancer Types | References |
|---|---|---|---|
| AK023948 | 8q24.22 (2807 nt) | Papillary thyroid carcinoma (down-regulated) | |
| ANRIL | 9p21.3 (~3.9 kb) | Prostate, leukemia | [36,41] |
| BC200 | 2p21 (200 nt) | Breast, cervix, esophagus, lung, ovary, parotid, tongue | [73,74] |
| PRNCR1 | 8q24.2 (13 kb) | Prostate | |
| H19 | 11p15.5 (2.3 kb) | Bladder, lung, liver, breast, esophagus, choriocarcinoma, colon | [57,58,76–80] |
| HOTAIR | 12q13.13 (2.2 kb) | Breast, hepatocellular | [23,29,30,36] |
| HULC | 6p24.3 (~500 nt) | Hepatocellular | [4,81,82] |
| lincRNA-p21 | ~3.1 kb | Represses p53 pathway; induces apoptosis | |
| Loc285194 | 3q13.31 (2105 nt) | Osteosarcoma | |
| MALAT1 | 11q13.1 (7.5 kb) | Breast, prostate, colon, liver, uterus | [44–47] |
| MEG3 | 14q32.2 (1.6 kb) | Brain (down-regulated) | [62,65] |
| PTENP1 | 9p13.3 (3.9 kb) | Prostate | |
| SPRY4-IT1 | 5q31.3 (~700 nt) | Melanoma | |
| SRA | 5q31.3 (1965 nt) | Breast, uterus, ovary (down-regulated) | [26,27] |
| UCA1/CUDR | 19p13.12 (1.4, 2.2, 2.7 kb) | Bladder, colon, cervix, lung, thyroid, liver, breast, esophagus, stomach | [86,87] |
| WT1-AS | 11p13 (isoforms) | Acute myeloid leukemia | |
| PCA3 | 9q21.22 (0.6–4 kb) | Prostate | |
| GAS5 | 1q25.1 (isoforms) | Breast (down-regulated) | |

© 2013 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
The Boundaries of Consciousness: Neurobiology and Neuropathology
Elsevier, 2006 - Medical - 585 pages

Consciousness is one of the most significant scientific problems today. Renewed interest in the nature of consciousness, a phenomenon long considered not to be scientifically explorable, together with the increasingly widespread availability of multimodal functional brain imaging techniques (EEG, ERP, MEG, fMRI and PET), now offers the possibility of detailed, integrated exploration of the neural, behavioral, and computational correlates of consciousness.

The present volume aims to confront the latest theoretical insights in the scientific study of human consciousness with the most recent behavioral, neuroimaging, electrophysiological, pharmacological and neuropathological data on brain function in altered states of consciousness such as: brain death, coma, vegetative state, minimally conscious state, locked-in syndrome, dementia, epilepsy, schizophrenia, hysteria, general anesthesia, sleep, hypnosis, and hallucinations. The interest of this approach is threefold. First, patients with altered states of consciousness continue to represent a major clinical problem in terms of clinical assessment of consciousness and daily management. Second, the exploration of brain function in altered states of consciousness represents a unique lesional approach to the scientific study of consciousness and adds to the worldwide effort to identify the "neural correlate of consciousness". Third, new scientific insights in this field have major ethical and social implications regarding our care for these patients.

Contents:
- 1 What in the world is consciousness?
- 2 A neuroscientific approach to consciousness: how and what do we measure?
- … toward a cognitive neuroscience of human experience
- 5 Skill, corporality and alerting capacity in an account of sensory consciousness
- 6 Methods for studying unconscious learning
- 7 Computational correlates of consciousness
- 8 Machine consciousness
- 9 Consciousness, information integration and the brain
- 10 Dynamics of thalamocortical network oscillations and human perception
- 11 From synchronous neuronal discharges to subjective awareness?
- 12 Genes and experience shape brain networks of conscious control
- … a neurological guided tour
- 14 The mental self
- … cytology and components of the neural network correlates of consciousness
- … a reappraisal of functional neuroimaging data
- 17 General anesthesia and the neural correlates of consciousness
- … studies with propofol
- … hypnosis and placebo-induced analgesia
- … why are patients with absence seizures absent?
- 21 Two aspects of impaired consciousness in Alzheimer's disease
- 22 Functional brain imaging of symptoms and cognition in schizophrenia
- 23 Hysterical conversion and brain function
- … precipitating factors and neural correlates
- 25 Near-death experiences in cardiac arrest survivors
- 26 The concept and practice of brain death: defining the borders of consciousness
- 28 Behavioral evaluation of consciousness in severe brain damage
- 29 Evoked potentials in severe brain injury: two equations with three unknowns
- 31 Novel aspects of the neuropathology of the vegetative state after blunt head injury
- 32 Using a hierarchical approach to investigate residual auditory cognition in persistent vegetative state
- … measurements of brain function and therapeutic possibilities
- … what is it like to be conscious but paralyzed and voiceless?
- 35 Brain-computer interfaces: the key for the conscious brain locked into a paralyzed body
- 36 Neural plasticity and recovery of function
- … clinical, ethical and legal problems
- … potentials and limitations
- 39 Outcome and ethics in severe brain damage
- … toward a palliative neuroethics for disorders of consciousness
Immunoglobulin electrophoresis - urine; Gammaglobulin electrophoresis - urine; Urine immunoglobulin electrophoresis; Immunoelectrophoresis - urine

This is a test that detects the presence or absence of immunoglobulins in the urine and assesses the qualitative character (polyclonal vs. monoclonal) of the immunoglobulins.

How the test is performed

Collect a "clean-catch" (midstream) urine sample. To obtain a clean-catch sample, men or boys should wipe clean the head of the penis. Women or girls need to wash the area between the labia with soapy water and rinse well. As you start to urinate, allow a small amount to fall into the toilet bowl; this clears the urethra of contaminants. Then, in a clean container, catch about 1 to 2 ounces of urine, and remove the container from the urine stream. Give the container to the health care provider or assistant.

For infants: thoroughly wash the area around the urethra. Open a urine collection bag (a plastic bag with an adhesive paper on one end), and place it on your infant. For males, the entire penis can be placed in the bag and the adhesive attached to the skin. For females, the bag is placed over the labia. Place a diaper over the infant (bag and all). Check your baby frequently, and remove the bag after the infant has urinated into it. For active infants, this procedure may take a couple of attempts, because lively infants can displace the bag, making it impossible to obtain the specimen. The urine is drained into a container for transport back to the health care provider.

Immunoelectrophoresis is a laboratory technique. Electrical charges are used to separate and identify the various immunoglobulins, using a combination of protein electrophoresis and an antigen-antibody interaction:
- Protein electrophoresis indicates the presence of immunoglobulins as a group.
- Immunoelectrophoresis enhances the ability to identify the specific immunoglobulins through the use of antibodies that react only with the specific proteins of interest.

Specific lab technique: monospecific (specific for only one antigen) antiserum is overlaid on the zone of the electrophoretogram (the paper graph used with protein electrophoresis) that contains the unidentified protein. The presence of a precipitin band indicates that the antigen is present for the monospecific antiserum used.

How to prepare for the test

Collection of the first morning urine, which is the most concentrated, may be recommended. If the collection is being taken from an infant, a couple of extra collection bags may be necessary.

How the test will feel

The test involves only normal urination, and there is no discomfort.

Why the test is performed

This test is used to roughly measure the amounts of various immunoglobulins in urine. Most often, it is used as a screening test, particularly in people who have protein in the urine (demonstrated on urinalysis or another test) when urine protein electrophoresis indicates a significant amount of globulin proteins (antibodies). Normally there is no protein, or only a small amount of protein, in the urine. When protein is present in the urine, it normally consists primarily of albumin.

What abnormal results mean

Immunoglobulins (antibodies) in the urine can result from kidney disorders such as IgA nephropathy or IgM nephropathy. They can also occur in other disorders such as multiple myeloma (a form of cancer). (See also immunoelectrophoresis - serum.)
In some neoplastic disorders (for example, multiple myeloma or chronic lymphocytic leukemia), a single clone of lymphocytes produces one type of immunoglobulin, a monoclonal immunoglobulin. This is identifiable as monoclonal by immunoelectrophoresis. Some people have monoclonal immunoglobulins but do not have a neoplastic disorder. Waldenström's macroglobulinemia is an additional condition under which the test may be performed.

by Amalia K. Gagarina, M.S., R.D.
Pitt CVR and Sanofi Pasteur Collaborate to Assess the Effectiveness of a Dengue Vaccine

PITTSBURGH, April 15, 2014 – The University of Pittsburgh Center for Vaccine Research (CVR) and Sanofi Pasteur, the vaccines division of Sanofi, have entered a scientific collaboration to help assess the effectiveness of a dengue vaccine once introduced for immunization programs. Pitt's CVR is creating the new test to help assess the effectiveness of Sanofi Pasteur's dengue vaccine candidate, which aims to reduce cases of dengue and the circulation of the virus in the population. The new test will tell whether a person's immunity to the mosquito-borne virus is due to a previous natural infection or to vaccination.

"Distinguishing whether a person's immune response is from the vaccine or from infection by a mosquito can play an important role in the assessment of a candidate vaccine," said Ernesto Marques, M.D., Ph.D., associate professor of infectious diseases and microbiology at Pitt's CVR. "The goal of this test is to provide additional support in assessing the effectiveness of the vaccine after introduction."

Dengue disease is caused by four types of dengue virus. It occurs mostly in tropical and subtropical countries, putting about half the world's population at risk. It is endemic in Puerto Rico, and locally acquired cases re-emerged recently in the Florida Keys and Texas. There is no treatment for dengue and no vaccine to prevent it. It is estimated that around 100 million clinical cases of dengue occur annually, but a larger number of additional cases are so mild that the people who are infected don't even realize it. Each year 500,000 people, including children, develop severe dengue, characterized by high fever, uncontrolled bleeding, respiratory distress and organ failure.

"This test also could be used by the government and health agencies to manage an immunization program," added Dr. Marques. "It will give evidence that the vaccine works and could allow doctors to determine which populations still need vaccination so they can most effectively target their immunization outreach efforts."

The Sanofi Pasteur dengue vaccine candidate was found to be safe and demonstrated protection against three of the four dengue virus types in the first efficacy clinical study, with results reported in 2012 in The Lancet, a medical journal. The study, which included 4,002 children, was conducted in a region of Thailand where dengue is highly endemic, and it was the first time a dengue vaccine candidate showed protection against the virus. Data from Sanofi Pasteur's ongoing phase III clinical studies with over 31,000 volunteers are expected to be available later this year and will document the efficacy of their vaccine in a broader population and in different epidemiological environments.
Disorders of the Heel, Rearfoot, and Ankle
By: Chitranjan S. Ranawat, Rock G. Positano
520 pages; 547 ills; 15 April 1999; Hardback

Disorders of the Heel, Rearfoot, and Ankle presents the most comprehensive, detailed summary available on conditions of the heel, rearfoot, and ankle. Its 57 internationally recognized authors represent a wide variety of fields, including orthopaedic surgery, rheumatology, podiatry, radiology, sports medicine, and physical medicine. They cover everything from anatomy and pathology to assessment and the surgical and conservative therapies for a full range of clinical rearfoot and ankle conditions.

Chapters in Disorders of the Heel, Rearfoot, and Ankle:
- Anatomy of the Heel, Rearfoot, and Ankle
- Imaging of the Hindfoot and Ankle
- Ultrasound Imaging of the Ankle and Rearfoot
- Rheumatology Laboratory Evaluation
- Electrodiagnosis in Heel and Foot Disorders
- The Ankle and Foot During Gait
- Edema and Foot Injuries: Pathophysiology and Differential Diagnosis
- Tarsal Tunnel Syndrome
- Biomechanics of the Heel Pad and Plantar Aponeurosis
- Functional and Biomechanical Aspects of the Plantar Fascia
- Heel Pain Syndrome
- Surgical Correction of Heel Spur and Plantar Fascia Tears
- Heel Pain in the Setting of Metabolic, Infiltrative, and Bone Disorders
- Heel Pain and Achilles Pain Associated with Rheumatoid Arthritis and Seronegative Spondyloarthropathies
- Heel and Hindfoot Pain in the Pediatric and Adolescent Patient
- Thrombophlebitis as a Cause of Heel and Rearfoot Pain
- Knee Pathology and Pain Related to Rearfoot and Heel Dysfunction
- Reflex Sympathetic Dystrophy – Complex Regional Pain Syndrome of the Ankle and Foot
- Treatment of Bone and Soft Tissue Tumors of the Foot
- Stress Fractures of Rearfoot, Midfoot, Distal Tibia, and Fibula
- Calcaneal Fractures: Etiology, Diagnosis, and Classification
- Treatment of Calcaneal Fractures – Conservative and Surgical
- Ankle Sprain: Clinical Evaluation and Current Treatment Concepts
- Lateral Ankle Instability: An Overview
- The Chronically Unstable Ankle: More Than a Ligamentous Problem – Anatomical, Biomechanical, and Neurological Considerations
- Fractures of the Ankle
- Fractures of the Tibial Pilon: Classification, Diagnosis, and Treatment
- Chronic Compartment Syndrome and Shin Splint Syndrome
- Peroneal Tendon Disorders
- Posterior Tibial Tendon Dysfunction
- Achilles Tendon Injuries
- Physical Therapy of the Ankle and Rearfoot Disorders
- Orthotic Therapy in the Treatment of Heel, Achilles Tendon, and Ankle Injuries
Open-File Report 2010–1289

Cyanotoxins are a group of organic compounds biosynthesized intracellularly by many species of cyanobacteria found in surface water. The United States Environmental Protection Agency has listed cyanotoxins on the Safe Drinking Water Act's Contaminant Candidate List 3 for consideration for future regulation to protect public health. Cyanotoxins also pose a risk to humans and other organisms in a variety of other exposure scenarios. Accurate and precise analytical measurements of cyanotoxins are critical to the evaluation of concentrations in surface water to address the human health and ecosystem effects.

A common approach to total cyanotoxin measurement involves cell membrane disruption to release the cyanotoxins into the dissolved phase, followed by filtration to remove cellular debris. Several methods have been used historically; however, no standard protocols exist to ensure this process is consistent between laboratories before the dissolved phase is measured by an analytical technique for cyanotoxin identification and quantitation. No systematic evaluation has been conducted comparing the multiple laboratory sample-processing techniques for physical disruption of the cell membrane or for cyanotoxin recovery.

Surface water samples collected from lakes, reservoirs, and rivers containing mixed assemblages of organisms dominated by cyanobacteria, as well as laboratory cultures of species-specific cyanobacteria, were used as part of this study evaluating multiple laboratory cell-lysis techniques in partnership with the U.S. Environmental Protection Agency. Evaluated extraction techniques included boiling, autoclaving, sonication, chemical treatment, and freeze-thaw. Both treated and untreated samples were evaluated for cell membrane integrity microscopically via light, epifluorescence, and epifluorescence in the presence of a DNA stain. The DNA stain, which does not permeate live cells with intact membrane structures, was used as an indicator for cyanotoxin release into the dissolved phase. Of the five techniques, sonication (at 70 percent) was most effective at complete cell destruction, while QuikLyse™ was least effective. Autoclaving, boiling, and sequential freeze-thaw were moderately effective in the physical destruction of colonies and filaments.

First posted February 11, 2011

Rosen, B.H., Loftin, K.A., Smith, C.E., Lane, R.F., and Keydel, S.P., 2010, Microphotographs of cyanobacteria documenting the effects of various cell-lysis techniques: U.S. Geological Survey Open-File Report 2010–1289, 203 p.
Diagnosis and Evaluation

The cranial nerves are 12 pairs of nerves that emerge from the brain, as opposed to the spinal cord. Cranial nerves provide motor and sensory functions. These cranial nerves are among the most delicate nerves in the human nervous system and require surgeons who specialize in their normal and abnormal presentations.

Cranial nerve disorders can cause symptoms that include intense pain, vertigo, hearing loss, weakness, or paralysis. Cranial nerve issues can affect a motor nerve, called cranial nerve palsy, or affect a sensory nerve, causing pain or diminished sensation. The cause of cranial nerve damage is sometimes unknown. Other times, cranial nerve disorders are caused by a brain tumor, an abscess, multiple sclerosis, bleeding into and around the brain, or infections.

Trigeminal Nerve and Trigeminal Neuralgia

One of the twelve pairs of nerves, the trigeminal nerve is a large nerve that carries sensation from the face to the brain. Pain associated with the trigeminal nerve can be severe and intense, and usually affects one side of the face. This condition is known as trigeminal neuralgia, or tic douloureux, and can be caused when blood vessels press on the root of the trigeminal nerve, by a tumor, or by multiple sclerosis. Classic trigeminal neuralgia is initially treated the same as atypical facial pain (e.g., with medications). If medications fail, however, then classic trigeminal neuralgia can be treated with neurosurgical intervention.

Hemifacial Spasm

Hemifacial spasm is a neurological disorder in which blood vessels constrict the seventh cranial nerve, causing muscles on one side of the face to twitch involuntarily. Hemifacial spasm can be caused by several factors: facial nerve injury, a blood vessel touching a facial nerve, or a tumor.

The Penn Center for Cranial Nerve Disorders provides the most advanced options to diagnose and treat all types of cranial nerve disorders, including trigeminal neuralgia and hemifacial spasm. A thorough neurological examination, including various testing options, can be performed to properly identify and diagnose a cranial nerve disorder.
Pinched Nerve Overview

What is a pinched nerve?

A pinched nerve is a nerve under pressure. This pressure often comes from surrounding bone or soft tissues. A nerve under enough pressure will lose its ability to carry accurate signals, and those wayward signals will cause a variety of sensations in the body. For example, when a nerve is pinched or compressed, it can trigger the nerve to falsely signal pain. The compression also can limit the nerve's ability to control the muscles it serves.

One of the most common places for a pinched nerve to occur is within the spine. The spine surrounds nerve roots that innervate areas throughout the body, controlling muscle movement and sensation. These nerve roots are especially vulnerable to being pinched within the tightly packed spinal column.

Beginning signs of a pinched nerve

When a nerve is pinched, the initial symptoms may include localized pain. However, a spinal pinched nerve can also cause pains and sensations that are far removed from the point of pressure. For instance, a pinched nerve in the lower spine may cause shooting pains and tingling down the buttocks and legs, whereas a pinched nerve in the neck can lead to numbness or tingling in the shoulders, arms or fingers.

Later signs of a pinched nerve

After a nerve endures constant pressure over a longer period of time, pain and muscle weakness may increase. There also may be a loss of reflexes, dexterity and sensation in the affected area, as well as weakening (atrophy) of the affected muscles. Because a pinched nerve also might be blocked from receiving proper nutrients, the nerve fiber may eventually die and lose its ability to transmit any electrical impulses. When enough nerve fibers stop working, the skin may feel numb or a muscle may stop contracting properly.

What are the nerves?

As part of the body's nervous system, nerves branch out from the brain and spinal cord to carry instructions to every area of the body. Essentially, the nerves are like electrical wires that allow signals to travel from the brain to the spinal cord to the organs and extremities, and back again. Nerves within the brain and spinal cord are part of the central nervous system, while nerves that run from the spine to other areas of the body are called peripheral nerves. The peripheral nerves originate as nerve roots that exit the spinal cord and then branch off to spread throughout the body. The nerves that travel to muscles allow the muscles to move. Nerves also pass to the skin, providing the ability to feel.

After a nerve gets pinched

If a nerve gets "pinched," the flow up and down the inside of the nerve is reduced or blocked, and the nutrients stop flowing. Eventually, the nerve membrane starts to lose its ability to transmit its electrical impulses, and the nerve fiber may eventually die. When enough fibers stop working, the skin may feel numb, or a muscle may not contract.

Your next steps…

You can decrease your risk factors for developing a spinal pinched nerve by taking simple precautionary measures. For example, you can learn what steps to take to limit your chances of injuring your neck or back, which in turn will protect your spinal cord and its nerve roots from injury. To better educate yourself, we suggest you take a look at our pinched nerve causes section. If you think you may be showing signs of a pinched nerve, you can review our pinched nerve symptoms page for more detailed information.

If you have already been diagnosed with a pinched nerve in the neck or back and you're tired of living with pain and other symptoms, we suggest that you view our page devoted to the treatment of a pinched nerve and see how the minimally invasive procedures performed by Laser Spine Institute might help you find relief from your symptoms. You can also visit our FAQ page for some of the most commonly asked questions that the surgeons at Laser Spine Institute encounter. If you want to learn more about how we can help you, please feel free to contact us.
Sulfadiazine is an oral sulfonamide anti-bacterial agent. Each tablet, for oral administration, contains 500 mg sulfadiazine.

Sulfadiazine tablets are indicated in the following conditions:

- Urinary tract infections (primarily pyelonephritis, pyelitis, and cystitis) in the absence of obstructive uropathy or foreign bodies, when these infections are caused by susceptible strains of the following organisms: Escherichia coli, Klebsiella species, Enterobacter species, Staphylococcus aureus, Proteus mirabilis, and P. vulgaris. Sulfadiazine should be used for urinary tract infections only after use of more soluble sulfonamides has been unsuccessful.
- Toxoplasmosis encephalitis in patients with and without acquired immunodeficiency syndrome, as adjunctive therapy with pyrimethamine.
- Malaria due to chloroquine-resistant strains of Plasmodium falciparum, when used as adjunctive therapy.
- Prophylaxis of meningococcal meningitis when sulfonamide-sensitive group A strains are known to prevail in family groups or larger closed populations (the prophylactic usefulness of sulfonamides when group B or C infections are prevalent is not proved and may be harmful in closed population groups).
- Meningococcal meningitis, when the organism has been demonstrated to be susceptible.
- Acute otitis media due to Haemophilus influenzae, when used concomitantly with adequate doses of penicillin.
- Prophylaxis against recurrences of rheumatic fever, as an alternative to penicillin.
- H. influenzae meningitis, as adjunctive therapy with parenteral streptomycin.

In vitro sulfonamide susceptibility tests are not always reliable. The test must be carefully coordinated with bacteriologic and clinical response. When the patient is already taking sulfonamides, follow-up cultures should have aminobenzoic acid added to the culture media.

Currently, the increasing frequency of resistant organisms limits the usefulness of antibacterial agents, including the sulfonamides, especially in the treatment of recurrent and complicated urinary tract infections.

Wide variation in blood levels may result with identical doses. Blood levels should be measured in patients receiving sulfonamides for serious infections. Free sulfonamide blood levels of 5 to 15 mg per 100 mL may be considered therapeutically effective for most infections, and blood levels of 12 to 15 mg per 100 mL may be considered optimal for serious infections. Twenty mg per 100 mL should be the maximum total sulfonamide level, since adverse reactions occur more frequently above this level.
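The ranges above amount to a simple set of threshold rules, which the sketch below makes explicit. This is an illustration of the quoted cutoffs (5, 12, 15, and 20 mg per 100 mL) only, not a clinical tool; the function name and the category labels are invented for this example.

```python
def interpret_sulfonamide_level(level_mg_per_100ml: float) -> str:
    """Classify a free sulfonamide blood level against the ranges
    quoted above (all values in mg per 100 mL). Illustrative only."""
    if level_mg_per_100ml > 20:
        # Above the stated maximum total sulfonamide level.
        return "above maximum; adverse reactions occur more frequently"
    elif level_mg_per_100ml > 15:
        # Above the usual therapeutic band but not above the maximum.
        return "above the usual therapeutic range, at or below the maximum"
    elif level_mg_per_100ml >= 12:
        # Within the band considered optimal for serious infections.
        return "optimal for serious infections"
    elif level_mg_per_100ml >= 5:
        # Within the band considered effective for most infections.
        return "therapeutically effective for most infections"
    return "below the usual therapeutic range"

print(interpret_sulfonamide_level(13.0))  # optimal for serious infections
print(interpret_sulfonamide_level(22.0))  # above maximum
```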
Published Studies Related to Sulfadiazine

- Topical silver sulfadiazine for the prevention of acute dermatitis during irradiation for breast cancer [2011.10.19]. PURPOSE: This study aimed to evaluate the effectiveness of topical silver sulfadiazine (SSD) in preventing acute radiation dermatitis in women receiving radiotherapy for breast cancer. CONCLUSIONS: SSD cream reduced the severity of radiation-induced skin injury compared with general skin care alone. Further studies in patients with other types of cancer, and studies comparing SSD cream with other topical agents, are warranted.
- Randomized controlled single center study comparing a polyhexanide containing bio-cellulose dressing with silver sulfadiazine cream in partial-thickness dermal burns [2011.08]. OBJECTIVE: A prospective, randomized, controlled single center study was designed to evaluate the clinical efficacy of a polyhexanide containing bio-cellulose dressing (group B) compared to a silver sulfadiazine cream (group A) in sixty partial-thickness burn patients. CONCLUSION: Group B demonstrated better and faster pain reduction in the treated partial-thickness burns compared to group A. The results indicate the polyhexanide containing bio-cellulose dressing to be a safe and cost-effective treatment for partial-thickness burns.
- Prevalence of pin-site infection: the comparison between silver sulfadiazine and dry dressing among open tibial fracture patients [2011.05]. CONCLUSION: There was no significant difference in the prevalence of pin-site infection between the two groups (p = 0.97). Therefore, either silver sulfadiazine or dry dressing could be advocated.
- The efficacy of silver mesh dressing compared with silver sulfadiazine cream for the treatment of pressure ulcers [2011.05]. CONCLUSION: Silver mesh dressing is one of the choices for pressure ulcer treatment, with a good healing rate, minimal care, and lower overall cost.
- Comparisons of the effects of biological membrane (amnion) and silver sulfadiazine in the management of burn wounds in children [2011.03]. This prospective study was conducted on 102 children with second-degree thermal burns to assess qualitative differences between topical silver sulfadiazine (SD) and oven-dried, radiation-sterilized human amnion as wound dressing. The patients were divided into a silver SD group and an amniotic membrane (AM) group by random sampling technique.

Clinical Trials Related to Sulfadiazine

Prevention of Congenital Toxoplasmosis With Pyrimethamine + Sulfadiazine Versus Spiramycine During Pregnancy [Recruiting]

Background: When a mother contracts toxoplasmosis during pregnancy, the parasite may be transmitted to her unborn child. This results in congenital toxoplasmosis, which may cause damage to the eyes and nervous system of the child. To date, no method has been proved effective to prevent this transmission. In France, spiramycin is usually prescribed to women who have toxoplasma seroconversion in pregnancy; however, its efficacy has not been determined. The standard treatment for toxoplasmosis is the combination of the antiparasitic drugs pyrimethamine and sulfadiazine, but this strategy has not been evaluated for the prevention of mother-to-child transmission. Purpose: Randomized phase 3 trial to determine whether pyrimethamine + sulfadiazine is more effective than spiramycin to prevent congenital toxoplasmosis.

Pyrimethamine, Sulfadiazine, and Leucovorin in Treating Patients With Congenital Toxoplasmosis [Recruiting]

RATIONALE: Congenital toxoplasmosis is an infection caused by the parasitic organism Toxoplasma gondii, and it may be passed from an infected mother to her unborn child. The mother may have mild symptoms or no symptoms; the fetus, however, may experience damage to the eyes, nervous system, skin, and ears. The newborn may have a low birth weight, enlarged liver and spleen, jaundice, anemia, petechiae, and eye damage. Giving the antiparasitic drugs pyrimethamine and sulfadiazine is standard treatment for congenital toxoplasmosis, but it is not yet known which regimen of pyrimethamine is most effective for the disease. PURPOSE: Randomized phase IV trial to determine which regimen of pyrimethamine is most effective when combined with sulfadiazine and leucovorin in treating patients who have congenital toxoplasmosis.

An Open, Randomized, Multi-centre Investigation With Mepilex Ag Versus Silver Sulfadiazine in the Treatment of Deep Partial Thickness Burn Injuries [Recruiting]

The purpose is to compare time to healing using an absorbent foam silver dressing (Mepilex Ag) versus a silver sulfadiazine (SSD) 1% cream in the treatment of partial thickness burn injuries. 284 in-patients in 8-12 centres in China will be evaluated. The treatment period will be up to 4 weeks with either Mepilex Ag or SSD.

SSD vs Collagenase in Pediatric Burn Patients [Recruiting]

The objective of this study is to evaluate the outcomes of children with burn injury with regard to the utilization of silver sulfadiazine (SSD) cream and collagenase ointment. The primary outcome variable will be the need for skin grafting. The specific aim of the study is to prospectively collect data to determine if SSD is superior to collagenase with regard to avoiding the need for skin grafting.

Topical Collagen-Silver Versus Standard Care Following Removal of Ingrown Nails [Recruiting]

This study's purpose is to prospectively determine whether topical therapy with an oxidized regenerated cellulose collagen-silver compound is more effective than the current standard of topical antibiotic therapy for care following the removal of an ingrown toenail. Eighty adult patients with ingrown toenails will be recruited. Each patient will randomly be assigned to apply either topical silver sulfadiazine cream (standard antibiotic) or the novel collagen-silver compound to their nail bed daily, following removal of the ingrown portion of nail. Patients will return for follow-up visits weekly, until healing has occurred or twelve weeks have passed. Healing will be defined as resolution of drainage and inflammatory changes surrounding the nail border.

Reports of Suspected Sulfadiazine Side Effects

Stevens-Johnson Syndrome (10), Drug Rash With Eosinophilia and Systemic Symptoms (10), Loss of Consciousness (9), Gastric Ulcer Haemorrhage (9), Drug Hypersensitivity (6), Renal Failure (6), Cytolytic Hepatitis (5), Myocarditis (4)

Page last updated: 2011-12-09
A recent supplement in the Wall Street Journal discussed the importance of clinical trials and provided an introduction to the clinical research process. The guide was written by various researchers and physicians in the clinical research community, and included segments from groups such as the Association of Clinical Research Professionals (ACRP).

Most commonly, clinical trials are used to test the safety and effectiveness of drugs and devices. Usually, they are sponsored by pharmaceutical companies and are conducted by research teams that include doctors and other medical professionals. Trials are typically conducted in four phases:

- In phase one, generally healthy people are given the medication to confirm that the pill or treatment has no adverse toxicological effects;
- Phases two and three examine the safety, effectiveness, and dosage of the medication in greater depth, and it is after these stages that the FDA would approve the drug or device; and
- Phase four examines new uses for previously approved treatments.

All of the phases in clinical trials are governed by strict protocols and are overseen by many regulatory bodies, from the Food and Drug Administration (FDA) to small Independent Review Boards (IRBs). An IRB is a group of independent medical experts, ethicists, and lay people. Researchers report periodically to the IRB, outlining such things as contact with patients, the tests conducted, the results recorded, and even the side effects reported. IRBs are accredited by the Association for the Accreditation of Human Research Protection Programs (AAHRPP).

As Ken Getz, founder and chairman of the Center for Information and Study on Clinical Research Participation (CISCRP), noted, "there's no question that clinical trials will play a large and growing role in the medical options" patients have today and in the future. In fact, nearly 4,000 experimental drug therapies are in active clinical trials today, and that number will continue to grow as improvements are made in detecting disease, in understanding the root causes of acute and chronic illnesses, and in discovering medical innovations.

Mr. Getz also predicted that "in the not-so-distant future it will be more common for clinical trials to be discussed during routine visits with the doctor as electronic health records and clinical research converge." This prediction is much different from how most people stumble upon clinical trials today. Usually, patients learn about clinical trials only "when faced with the sudden prospect of a serious, often life threatening, illness for which no marketed medication is available or adequate." Because patients wait, they are forced to rush through information about clinical trials received from a physician, a nurse, or their own Internet searches, which can leave them feeling overwhelmed and confused. If patients were able to take a closer look at the impact clinical trials have on the lives of volunteers and future generations, such decisions would not be as hard.

Impact of Clinical Trials

It is no coincidence that "the age-adjusted death rate in the US for coronary heart disease was cut in half from 1980-2000," because much of this progress was the result of investing in basic and clinical research.
In fact, while half of the decrease is attributed to reductions in cholesterol levels, blood pressure, and smoking, the other "half of this decrease can be directly attributed to medical therapies validated in clinical trials." The overwhelming majority of these trials were made possible because of the "resources spent on clinical development of new therapies, which have been provided by pharmaceutical, biotechnology, and device companies." As Judith M. Kramer, MD, MS, an associate professor of medicine at Duke University, noted, "without the contributions of these organizations, along with those of the health professionals and patients who participate in clinical trials, public health in the US would not be what it is today."

Critics of clinical trials believe they are unnecessary, profit-driven "experiments" conducted on humans. What these people do not realize is that "doctors don't always know what treatment is best because objective comparison of large numbers of patients is needed to sort out the truth about benefits and risks." As a result, doctors recommend that patients volunteer in clinical trials only when it is unclear which treatment option being tested is best.

But patients should not be reluctant to volunteer for clinical trials because, as history shows, they have saved the lives of millions. For example, it was a randomized trial of the Salk polio vaccine in over 600,000 school children that led to the approval of the first preventive treatment for that disease, and with the later addition of an oral vaccine, polio has been nearly eradicated in the US. Likewise, measles was nearly eliminated by a vaccine tested in clinical trials, and clinical trials and their participants also contributed to the scientific foundation for tuberculosis policies still adhered to today. Consequently, without clinical trials, "these diseases would still be a danger to America's children today."

Given the benefits that clinical trials have produced for patients, the supplement also noted that more funding is needed for cancer research. Without the necessary funds, the pace of research will slow, key discoveries will be delayed, and the implementation of new strategies to control cancer will take years longer than necessary. The economic downturn has played a role in reduced research funding.

Some people believe that industry conducts clinical research in developing countries to make up for decreases in funding and to increase profits. "This is rarely the case because researchers improve the standard of care in these areas by giving medical training, leaving behind valuable equipment, and forming partnerships with communities." Additionally, companies are not given many other options for funding research, especially as NIH changes its process for funding research by requiring groups to compete for taxpayer dollars devoted to clinical trials, whereas before they were given the money as grants. As a result, industry will have to add even more than the 90 percent of research funding it already provides.

The reality is that industry takes trials offshore because it is able to do more, finding new cures and better treatments with less. And companies are happy to do this for patients, because a lot of pressure is "being placed on research professionals to be sure they're doing their work as efficiently as possible."

The supplement also included comments from patients who have participated in clinical trials.
One patient noted that "taking part in trials made her feel like she had some control over the course of her disease." When faced with the tragic news of illness, patients overwhelmingly would take the chance of doing something rather than nothing. As one patient noted, she "felt very threatened by this disease and wanted to take aggressive steps to fight it." That is why clinical trial participants told others to "have an open mind to being a clinical trial subject because you can learn all about the trial's purpose and requirements and then decide whether to go for it."

Whether taking medicine or participating in a clinical trial, the reality is that "people are just as likely to experience side effects while taking approved medication as they would be taking trial compounds." But participants can experience several benefits when involved in clinical trials, such as new treatment options, which are important because most participants are not satisfied with their current treatment. And if "a risk is known, that risk has to be reported to the ethics board and to the investigators conducting the study so people are apprised of what information has surfaced about that product."

Behind every medicine and intervention that people have ever taken are thousands of patients who volunteered to participate in clinical trials, which have led to many breakthroughs in disease prevention and treatment in the last half-century. Without the willingness of these individuals, many would have suffered.

It is also important to recognize that "clinical research is not always devoted to finding the next 'blockbuster' drug." Clinical trials also can contribute invaluable information about the benefits and safety of existing therapies, providing doctors and patients with reliable information for choosing between alternative treatments.

Ultimately, because "every medicine or medical device must be fully vetted through closely monitored and highly regulated clinical trials to ensure their safety and effectiveness," patients receiving medical care should be encouraged to participate in clinical trials. Those who participate in clinical trials are the heroes helping to develop the new drugs, devices, biologics, and treatments of the future, and improving the care of all Americans.
Oral malodor has been recognized in the literature since ancient times, but in the last five to six years it has increasingly come to the forefront of public and dental professional awareness1. Approximately 40-50% of dentists see 6-7 self-proclaimed oral malodor patients per week2. Standard protocols for diagnosing and treating oral malodor in routine patient care have not been established in either the dental or the medical field. However, the transfer of knowledge is increasing because of pioneering researchers and clinicians who have developed reputable clinics dealing with this condition. Dental and medical schools must incorporate the diagnosis and treatment of oral malodor in their curricula so that future generations of clinicians can effectively treat this condition. To date, there have been four international conferences at which the experts in the field have gathered and published their observations and research findings. The fourth international conference was held at the School of Dentistry, University of California, Los Angeles (UCLA); it was a big success and demonstrated continued enthusiasm for further meetings and scientific research in the area of oral malodor. Although this area of research has been ridiculed, at least 50% of the population suffers from a chronic oral malodor condition that causes personal discomfort and social embarrassment and can lead to emotional distress. The consequences of oral malodor may be more than social; it may reflect serious local or systemic conditions. Oral malodor research has gained momentum, with increasing suspicion being directed at sulfur-producing bacteria as the primary source of this condition.

Oral And Non-Oral Causes
Oral malodor can be caused by many localized and systemic disorders. Oral malodor (OM) caused by normal physiological processes and behaviors is usually transitory. Nonpathologic OM may be due to hunger, low levels of salivation during sleep, food debris, prescription drugs, and smoking3. Chronic or pathological halitosis stems from oral or non-oral sources. In addition, there appear to be several other metabolic conditions involving enzymatic and transport anomalies (such as trimethylaminuria) that lead to systemic production of volatile malodors manifesting as halitosis and/or altered chemoreception4. Some of the oral causes are periodontal disease, gingivitis, and plaque coating on the dorsum of the tongue. OM may be aggravated by a reduction in salivary flow. Radiation therapy, Sjögren's syndrome, some lung conditions (including cancer), peritonsillar abscess, cancer of the pharynx, and cryptic tonsils can also contribute to OM5. Nasal problems such as postnasal drip that falls on the posterior dorsum of the tongue may exacerbate the condition; odor generated in this manner can be easily distinguished from mouth odor by comparing the odor exiting the mouth with that exiting the nose6. The non-oral causes of OM include diabetic ketosis, uremia, gastrointestinal conditions, irregular bowel movements, hepatic and renal failure, and certain malignancies such as leukemia. The accurate clinical labeling and interpretation of different oral malodors both contribute to the diagnosis and treatment of underlying disease7 (Table 9-1). Taste and smell can also be altered by facial injuries, cosmetic surgery, radiation, and damage to the olfactory epithelium located on the dorsal aspect of the nose8. A relationship between gastrointestinal diseases such as gastritis and oral malodor has not been established.
However, oral malodor has been reported in some patients with a history of gastritis or of duodenal and gastric ulcers9. Saliva plays a central role in the formation of oral malodor. Such formation is based on bacterial putrefaction: the degradation of proteins, and of the resulting amino acids, by microorganisms10. Many patients with a chief complaint of oral malodor have some level of gingival and/or periodontal pathology sufficient to be the etiology, but clearly periodontal pathology is not a prerequisite for the production of oral malodor11. Medications such as antimicrobial agents, antirheumatics, antihypertensives, antidepressants, and analgesics may cause altered taste and xerostomia. OM in healthy patients arises from the oral cavity and generally originates on the tongue dorsum5,12,13,14,15. Sulfur-producing anaerobic bacteria appear to be the primary source of these odors16. The large surface area of the tongue and its papillary structure allow it to retain food and debris, creating an excellent putrefactive habitat for gram-negative anaerobes that metabolize proteins as an energy source. The bacteria hydrolyze the proteins to amino acids, three of which contain sulfur functional groups and are the precursors to volatile sulfur compounds (VSC's). These gaseous substances, responsible for malodor, consist primarily of hydrogen sulfide (H2S), dimethyl sulfide [(CH3)2S], methyl mercaptan (CH3SH), and sulfur dioxide (SO2)10,12,14,15. Cadaverine levels have also been reported to be associated with oral malodor, and this association may be independent of VSC17. Subjects challenged with cysteine rinses produced high oral concentrations of VSC; cysteine thus seems to be a major substrate for VSC production, while the other sulfur-containing substrates had much less effect. The tongue was found to be the major site of VSC production18.

The Tongue Plaque Coating
Research suggests that the tongue is the primary site in the production of OM. The dorsoposterior surface of the tongue has been identified as the principal location for the intraoral generation of VSC's19. The tongue is a haven for the growth of microorganisms, since the papillary nature of the tongue dorsum creates a unique ecological site that provides an extremely large surface area, favoring the accumulation of oral bacteria. The proteolytic, anaerobic bacteria that reside on the tongue play an essential part in the development of oral malodor. The presence of tongue coating has been shown to correlate with the density, or total number, of bacteria in the tongue plaque coating20. The weight of the tongue coating in periodontal patients was elevated to 90 mg, while the VSC level was increased by a factor of four; the CH3SH/H2S fraction was increased 30-fold compared with individuals with a healthy periodontium. This high ratio may be due to the free amino acids in the gingival crevicular fluid, as compared with those derived from L-cysteine19. The BANA (benzoyl-DL-arginine-2-naphthylamide) test has been used to detect T. denticola and P. gingivalis, two organisms that may contribute to oral malodor and that are easily detected by their capacity to hydrolyze BANA, a trypsin-like substrate. BANA scores are associated with a component of oral malodor that is independent of volatile sulfide measurements, suggesting the test's use as an adjunct to volatile sulfide measurement21.
Higher mouth odor organoleptic scores are associated with heavy tongue coating and correlate with the bacterial density on the tongue, as well as with the BANA-hydrolyzing bacteria T. denticola, P. gingivalis, and Bacteroides forsythus22.

Microbiota Associated With Oral Malodor
The actual bacterial species that cause OM have yet to be identified from among the 300-plus bacterial species in the mouth. Putrefaction is thought to occur under anaerobic conditions, involving a range of gram-negative bacteria such as Fusobacterium, Veillonella, T. denticola, P. gingivalis, Bacteroides, and Peptostreptococcus22,23. Studies have shown that essentially all odor production is a result of gram-negative bacterial metabolism and that gram-positive bacteria contribute very little odor24. Fusobacterium nucleatum is one of the predominant organisms associated with gingivitis and periodontitis, and this organism produces high levels of VSC's. The nutrients for the bacteria are provided by oral fluids, tissue, and food debris. Methionine is degraded to methyl mercaptan, while cystine is reduced to cysteine, which is further broken down to hydrogen sulfide in the presence of sulfhydrase-positive microbes. This activity is favored at a pH of 7.2 and inhibited at a pH of 6.5 (10,12,14,15). Isolates of Klebsiella and Enterobacter emitted foul odors in vitro that resembled bad breath, with concomitant production of volatile sulfides and cadaverine, both compounds related to bad breath in denture wearers25. The amounts of volatile sulfur compounds (VSC) and the methyl mercaptan/hydrogen sulfide ratio in mouth air from patients with periodontal involvement were reported to be eight times greater than those of control subjects15.

Oral Malodor Assessment Parameters
One major research problem that must be tackled is the lack of an established gold standard for rapidly measuring the OM condition. The objective assessment of oral malodor is still best performed by the human sense of smell (direct sniffing, the organoleptic method), but more quantifiable measures are being developed. At present, confidant feedback and expert odor (organoleptic) judges are the most commonly used approaches. Both assessments use a 0-5 scale in order to consistently quantify the odor (0 = no odor present, 1 = barely noticeable odor, 2 = slight but clearly noticeable odor, 3 = moderate odor, 4 = strong offensive odor, 5 = extremely foul odor). Individuals are instructed to refrain from using any dental products, eating, or using deodorants or fragrances for four hours prior to the clinic visit. Individuals are also advised to bring their confidant or friends to assess their oral malodor (Table 9-2). In order to create a reproducible assessment, subjects are instructed to close their mouth for two minutes and not to swallow during that period. After two minutes the subject breathes out gently at a distance of 10 cm from the nose of their counterpart, and the organoleptic odor is assessed26. In order to reduce inter-examiner variation, a panel consisting of several experienced judges is often employed; a study on inter-examiner reproducibility indicates that there is some correlation, albeit poor27. Gender and age influence the performance of an organoleptic judge: females have a better olfactory sense, and olfactory acuity decreases with age. Dentists and periodontists may not be ideal judges if they do not use masks on a daily basis28.
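To make the 0-5 organoleptic protocol above concrete, here is a minimal sketch of panel scoring in Python. The averaging rule, the function names, and the use of the rounded mean to look up a scale label are illustrative assumptions, not a published standard.

```python
# Illustrative sketch of panel-based organoleptic scoring (0-5 scale).
# The averaging rule and the helper names are assumptions for
# demonstration; they are not a standardized clinical procedure.

ORGANOLEPTIC_SCALE = {
    0: "No odor present",
    1: "Barely noticeable odor",
    2: "Slight but clearly noticeable odor",
    3: "Moderate odor",
    4: "Strong offensive odor",
    5: "Extremely foul odor",
}

def panel_score(judge_scores: list[int]) -> float:
    """Average the integer ratings given by several odor judges."""
    if not judge_scores:
        raise ValueError("at least one judge rating is required")
    for s in judge_scores:
        if s not in ORGANOLEPTIC_SCALE:
            raise ValueError(f"rating {s} is outside the 0-5 scale")
    return sum(judge_scores) / len(judge_scores)

if __name__ == "__main__":
    scores = [2, 3, 3]  # three judges rate the same subject
    mean = panel_score(scores)
    print(f"Panel mean: {mean:.1f} ({ORGANOLEPTIC_SCALE[round(mean)]})")
```

Averaging several judges is one simple way to dampen the inter-examiner variation noted above; a real protocol would also record each judge's training and the measurement conditions.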
OM can be analyzed using gas chromatography (GC) coupled with flame photometric detection29. This allows separation and quantitative measurement of the individual gases. However, the necessary equipment is expensive and requires skilled personnel to operate it; it is also cumbersome, and the analysis is time-consuming. As a result, GC cannot be used in the dental office and is not always used in OM clinical trials. Recently, a closed-loop trapping system followed by off-line high-resolution gas chromatography with ion trap detection was used to detect compounds from saliva and tongue coating samples30. Numerous volatile components were detected, ranging from ketones to many unknowns. Adding casein (to provide cysteine and methionine) during incubation led to the appearance of nine new sulfur-containing compounds.

Portable Sulfide Meter
The portable sulfide meter (Halimeter®, Interscan Corporation, Chatsworth, CA) has been widely used over the last few years in OM testing. The portable sulfide meter uses an electrochemical, voltammetric sensor that generates a signal when it is exposed to sulfide and mercaptan gases and measures the concentration of hydrogen sulfide gas in parts per billion. The Halimeter is portable and does not require skilled personnel for operation. The main disadvantages of this instrument are the necessity of periodic recalibration and the fact that measurements cannot be made in the presence of ethanol or essential oils27. In other words, the measurements may be affected if the subject is wearing perfume, hair spray, deodorant, etc. In addition, this limitation does not allow the assessment of mouthwash efficacy until after these components have been thoroughly rinsed out or dissipated.
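For illustration, a parts-per-billion reading from a portable sulfide meter could be triaged in software along the following lines. This is a hedged sketch: the 100 and 150 ppb cutoffs are assumptions chosen for the example, not validated diagnostic thresholds, and any real interpretation would have to respect the calibration and ethanol-interference caveats just described.

```python
# Toy interpretation of portable sulfide meter readings in parts per
# billion (ppb). The cutoffs below are illustrative assumptions, not
# validated diagnostic thresholds.

def interpret_vsc_ppb(ppb: float) -> str:
    """Map a sulfide-meter reading to a rough descriptive category."""
    if ppb < 0:
        raise ValueError("a concentration cannot be negative")
    if ppb < 100:
        return "within the range typically seen in unremarkable breath"
    if ppb < 150:
        return ("borderline; repeat the measurement and rule out "
                "ethanol or essential-oil interference")
    return "elevated; consistent with clinically noticeable malodor"

for reading in (80.0, 120.0, 250.0):
    print(f"{reading:6.1f} ppb -> {interpret_vsc_ppb(reading)}")
```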
The Electronic Nose
The "Electronic Nose" is a handheld device being developed to rapidly classify the chemicals in an unidentified vapor. It is hoped that this technology will be inexpensive, miniaturizable, and adaptable to practically any odor-detecting task for application by scientists and personnel in the medical and dental fields31. If the Electronic Nose can learn to "smell" in a quantifiable and reproducible manner, this tool will be a revolutionary assessment technique in the field of OM. This device is based on sensor technology that can smell and produce unique fingerprints for distinct odors. Preliminary data indicate that this device has the potential to be used as a diagnostic tool to detect odors.

Management Of Oral Malodor
A large number of so-called "Fresh Breath Clinics" offer diagnostic and treatment services for patient complaints of oral malodor. There are no accepted standards of care for these services, and the clinical protocols vary widely. A thorough medical, dental, and halitosis history is necessary to determine whether the patient's complaint of bad breath is due to oral causes32. It is important to determine the source of oral malodor; complaints about bad taste should be noted. In most cases, patients who complain of bad taste may not have bad breath, as the taste disorders may be due to other causes33. It has been reported that in approximately 8% of individuals the odor was caused by tonsillitis, sinusitis, or a foreign body in the nose34. This percentage may be higher; additional research is needed in this area. Approximately 80-90% of oral malodor originates from the dorsum of the tongue. Therefore, treatments targeted at reducing oral malodor will require antimicrobial components directed against the tongue microbiota.

Treatment of OM is important not only because it helps patients to achieve self-confidence but also because the evidence indicates that VSC's can be toxic to periodontal tissues even when present at extremely low concentrations35. The best way to treat OM is to ensure that patients practice good oral hygiene and that their dentition is properly maintained36 (Table 9-3). Traditional procedures of scaling and root planing can be effective for patients with OM caused by periodontitis37. All patients should be instructed in proper tooth brushing, flossing, and tongue cleaning. Mouthrinses should be recommended based on scientific evidence; caution should be exercised, and professional advice should be sought as to the administration and type of mouthrinse to be used. Tongue scraping should be demonstrated, and patients should be asked to demonstrate the appropriate use of tongue scrapers to the dental hygienist. The tongue has a tendency to curl up during scraping; therefore, a combination of flexible tongue scrapers and tongue scrapers with handles should be recommended to patients. Saliva functions as an antibacterial, antiviral, antifungal, buffering, and cleansing agent38, so any treatment that increases saliva flow and tongue action, including the chewing of fibrous vegetables and sugarless gum, will help decrease OM39. Finally, oral rinses can be used to supplement good oral hygiene practices.

Mouthwashes have been used as a chemical approach to combat oral malodor; mouth rinsing is a common oral hygiene practice dating back to ancient times40. Antibacterial components such as cetylpyridinium chloride, chlorhexidine, triclosan, essential oils, quaternary ammonium compounds, benzalkonium chloride, hydrogen peroxide, sodium bicarbonate41, and zinc salts (Table 9-4), alone and in combination, have been considered along with mechanical approaches to reduce oral malodor. Any successful mouthrinse formulation must balance the elimination of the responsible microbes against maintaining the normal flora and preventing an overgrowth of opportunistic pathogens. Most commercially available mouthrinses only mask odors and provide little antiseptic function. Even when these mouthrinses do contain antiseptic substances, the effects are usually not long-lasting42,43. The microbes survive antiseptic attacks by being protected under thick layers of plaque and mucus12. Many commercially available rinses contain alcohol as an antiseptic and a flavor enhancer. The most prevalent problem with ethanol is that it can dry the oral tissues, a condition that can itself induce OM. In addition, there is some controversy as to whether the use of alcohol rinses is associated with oral cancer44,45. The FDA states that there is no evidence to support the removal of alcohol from over-the-counter products, but alcohol-free mouthrinses are becoming increasingly popular.

Clinical trials conclude that zinc mouthrinses are effective for reducing OM in patients with good oral health39. Zinc rinses (in chloride, citrate, or acetate form) have been found to reduce oral VSC concentrations for more than three hours. The zinc ion may counteract the toxicity of the VSC's; it functions as an odor inhibitor by preventing the reduction of disulfide groups to thiols and by reacting with the thiol groups in VSC's. It has been reported that a zinc-based rinse was more effective than a chlorine dioxide-based rinse when both rinses were used twice a day for 60 seconds over a 6-week period46.
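The proposed zinc mechanism can be summarized in a simplified scheme. The stoichiometry below, shown for methyl mercaptan, is a textbook-style illustration of thiol capture by Zn2+ and is not taken from the cited trials.

```latex
% Simplified, illustrative capture of a volatile thiol by zinc:
% the volatile mercaptan is bound as a non-volatile zinc thiolate.
% Shown for methyl mercaptan (CH3SH); not data from the cited trials.
\[
\mathrm{Zn^{2+}} + 2\,\mathrm{CH_3SH} \;\longrightarrow\; \mathrm{Zn(SCH_3)_2} + 2\,\mathrm{H^+}
\]
```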
Zinc-containing chewing gum has been shown to reduce oral malodor47.

Chlorhexidine digluconate is useful in decreasing gingivitis and plaque buildup. It is one of two active ingredients in mouthrinses that have been shown to reduce gingivitis in long-term clinical trials, and it appears to be the most effective anti-plaque and anti-gingivitis agent known today12,48. The efficacy of chlorhexidine as a mouthrinse to control OM has not been studied extensively. The primary side effect of chlorhexidine is discoloration of the teeth and tongue. In addition, an important consideration for long-term use is its potential to disrupt the oral microbial balance, causing some resistant strains to flourish, such as Streptococcus viridans49. The effect of 1-stage full-mouth disinfection in periodontitis patients (scaling and root planing of all pockets within 24 hours, together with the application of chlorhexidine to all intraoral niches, followed by chlorhexidine rinsing for 2 months) resulted in a significant improvement in oral malodor when compared with fractional periodontal therapy (consecutive root planings per quadrant at a 1- to 2-week interval)50. While chlorhexidine appears to be clinically effective in these open-design clinical studies, it is not an agent that should be used routinely, or for long periods of time, in the control of oral malodor, because of its side effects. A mouthrinse containing chlorhexidine, cetylpyridinium chloride, and zinc lactate was evaluated in a two-week clinical study. Eight subjects participated in this pilot study, and the formulation showed improvement in organoleptic scores and a trend toward reduced tongue and saliva microflora51. The anti-malodor properties of a 0.2% chlorhexidine spray, a 0.2% chlorhexidine mouthrinse, and a sanguinarine-zinc mouthrinse were evaluated on morning breath; oral malodor parameters were assessed before breakfast and four hours later, after lunch. Results indicated that the sanguinarine-zinc solution had a short-lived effect, whereas the effect of chlorhexidine lasted longer52.

Chlorine Dioxide Rinses
Chlorine dioxide (ClO2) is a strong oxidizing agent that has a high redox capacity with compounds containing sulfur. Chlorine dioxide is also used in water disinfection and in the sanitation of food-processing equipment, and it functions best at a neutral pH. Commercially available mouthrinses are actually solutions of sodium chlorite, since chlorine dioxide readily loses its activity39. Further independent clinical investigations are needed to substantiate the effectiveness of sodium chlorite-containing rinses for the control of OM. In fact, chlorine dioxide, the agent most widely touted on the Internet, has no published clinical studies (as of December 1999) to substantiate the claims of reducing oral malodor. Benzalkonium chloride in conjunction with sodium chlorate has been shown to be effective in reducing oral malodor; in this pilot study, subjects with mild to severe periodontitis were instructed to use the mouthwash twice a day for a period of six weeks, and periodontal and oral malodor parameters were assessed53.

Triclosan (2,4,4′-trichloro-2′-hydroxydiphenylether) is a broad-spectrum nonionic antimicrobial agent. This lipid-soluble substance has been found to be effective against most types of oral bacteria54. A combined zinc and triclosan mouthrinse system has been shown to have a cumulative effect, with the reduction of malodor increasing with the duration of product use39.

Two-phase oil-water mouthrinses have been tested for their ability to control OM.
A clinical trial reported significant long-term reductions in OM from the whole mouth and the posterior tongue dorsum55. The rinse is thought to reduce odor-producing microbes on the tongue because there is a polar attraction between the oil droplets and the bacterial cells. The two-phase rinse has been shown to significantly decrease the level of VSC's eight to ten hours after use, although not as effectively as a 0.2% chlorhexidine rinse56. Positive controls such as chlorhexidine and Listerine®, which had previously been shown to reduce organoleptic scores, were used in these clinical studies. The potential of hydrogen peroxide to reduce levels of salivary thiol precursors of oral malodor has also been investigated; using analytical procedures, the reduction in salivary thiol levels post-treatment compared with baseline was found to be 59%57.

Topical Antimicrobial Agents
Azulene ointment with a small dose of clindamycin was used topically in eight patients with maxillary cancer to inhibit oral malodor originating from a gauze tamponade applied to the postoperative maxillary bone defect. The malodor was markedly decreased or eliminated in all cases, and anaerobic bacteria such as Porphyromonas and Peptostreptococcus involved in the generation of malodor also became undetectable58. Breathnol, a proprietary mixture of edible flavors, was evaluated in a clinical study, and this formulation reduced oral malodor for at least 3 hours59. Certain lozenges, chewing gums, and mints have been reported to reduce tongue dorsum malodor60. Some of the natural controls for oral malodor include gum containing tea extract. Also recommended are natural deodorants such as copper chlorophyll and sodium chlorophyllin. Alternative dental health services suggest the use of chlorophyll oral rinses in addition to spirulina and algae products.

Many of the mouthrinses available today are being used for the prevention and/or treatment of oral malodor, yet much more research is required to develop an efficacious mouthrinse for its alleviation. The treatment of oral malodor is a relatively new field in dentistry, and many of the treatments thus far have involved a trial-and-error approach, but the knowledge and experience gained so far will hopefully facilitate clinical investigations in this field and eventually lead to improved diagnostic techniques and treatment products.

References
1. Nachnani S. Oral malodor: a brief review. CDHA Journal 1999; 14(2): 13-15.
2. Survey conducted at ADA reveals interesting trends. Dent Econ 1995; Dec: 6.
3. Scully C, Porter R, Greenman J. What to do about halitosis? Brit Med J 1994; 308: 217-218.
4. Preti G, Clark L, Cowart BJ, et al. Non-oral etiologies of oral malodor and altered chemosensation. J Periodontol 1992 Sep; 63(9): 790-796.
5. Young K, Oxtoby A, Field EA. Halitosis: a review. Dental Update 1993; 20: 57-61.
6. Rosenberg M. Clinical assessment of bad breath: current concepts. J Am Dent Assoc 1996 Apr; 127(4): 475-482.
7. Touyz LZ. Oral malodor: a review. J Can Dent Assoc 1993 Jul; (7): 607-610.
8. Ship JA. Gustatory and olfactory considerations: examination and treatment in general practice. J Am Dent Assoc 1993; 31: 53-73.
9. Tiomny E, Arber N, Moshkowitz M, Peled Y, Gilat T. Halitosis and Helicobacter pylori. J Clin Gastroenterol 1993; 15: 236-237.
10. Kleinberg I, Westbay G. Salivary metabolic factors involved in oral malodor formation. J Periodontol 1992 Sep; 63(9): 768-775.
11. Newman MG. The role of periodontitis in oral malodor: clinical perspectives. In: van Steenberghe D, Rosenberg M, eds. Bad Breath: A Multidisciplinary Approach.
12. Rosenberg M, ed. Bad Breath: Research Perspectives. Tel Aviv: Ramot Publishing; 1995.
13. Tessier JF, Kulkarni GV. Bad breath: etiology and treatment. Periodontics Oral Health 1991 Oct; 19-24.
14. Tonzetich J. Production and origin of oral malodor: a review of mechanisms and methods of analysis. J Periodontol 1977; 48: 13-20.
15. Yaegaki K, Sanada K. Biochemical and clinical factors influencing oral malodor in periodontal patients. J Periodontol 1992; 63: 783-789.
16. Clark G, Nachnani S, Messadi D. CDA Journal 1997 Feb; 25(2).
17. Goldberg S, Kozlovsky A, Gordon D, Gelernter I, Sintov A, Rosenberg M. Cadaverine as a putative component of oral malodor. J Dent Res 1994 Jun; 73(6): 1168-1172.
18. Waler SM. On the transformation of sulfur-containing amino acids and peptides to volatile sulfur compounds (VSC) in the human mouth. Eur J Oral Sci 1997 Oct; 105(5 Pt 2): 534-537.
19. Yaegaki K, Sanada K. Volatile sulfur compounds in mouth air from clinically healthy subjects and patients with periodontal disease. J Periodont Res 1992; 27: 223-238.
20. De Boever EH, Loesche WJ. The tongue microflora and tongue surface characteristics contribute to oral malodor. In: van Steenberghe D, Rosenberg M, eds. Bad Breath: A Multidisciplinary Approach. Leuven, Belgium: Leuven University Press; 1996: 111-121.
21. Kozlovsky A, Gordon D, Gelernter I, Loesche WJ, Rosenberg M. Correlation between the BANA test and oral malodor parameters. J Dent Res 1994 May; 73(5): 1036-1042.
22. De Boever EH, Loesche WJ. Assessing the contribution of anaerobic microflora of the tongue to oral malodor. JADA 1995; 126: 1384-1393.
23. Kleinberg I, Codipilly M. The biological basis of oral malodor formation. In: Rosenberg M, ed. Bad Breath: Research Perspectives. Tel Aviv: Ramot Publishing, Tel Aviv University; 1995: 13-39.
24. McNamara TF, Alexander JF, Lee M. The role of microorganisms in the production of oral malodor. Oral Surg 1972; 34: 41.
25. Goldberg S, Kozlovsky A, Gordon D, Gelernter I, Sintov A, Rosenberg M. Cadaverine as a putative component of oral malodor. J Dent Res 1994 Jun; 73(6): 1168-1172.
26. Rosenberg M, Septon I, Eli I, Bar-Ness A, Gelernter I, Brenner S, Gabbay J. Halitosis measurement by an industrial sulfide monitor. J Periodontol 1991; 62: 487-489.
27. Rosenberg M, Kulkarni GV, Bosy A, McCulloch CAG. Reproducibility and sensitivity of oral malodor measurements with a portable sulfide monitor. J Dent Res 1991; 70(11): 1436-1440.
28. Doty RL, Green PA, Ram C, Yankell SL. Communication of gender from human breath odors: relationship to perceived intensity and pleasantness. Horm Behav 1982; 16: 13-22.
29. Tonzetich J, Richter VJ. Evaluation of volatile odoriferous components of saliva. Arch Oral Biol 1964; 9: 39-45.
30. Claus D, Geypens B, Rutgeerts P, Ghyselen J, Hoshi K, van Steenberghe D, Ghoos Y. Where gastroenterology and periodontology meet: determination of oral volatile organic compounds using closed-loop trapping and high-resolution gas chromatography-ion trap detection. In: van Steenberghe D, Rosenberg M, eds. Bad Breath: A Multidisciplinary Approach. Leuven, Belgium: Leuven University Press; 1998: 17-30.
31. Gibson TD, Prosser O, Hulbert JN, et al. Detection and simultaneous identification of microorganisms from headspace samples using an electronic nose. Sensors and Actuators B 1997; 44: 413-422.
32. Neiders M, Ramos B. Operation of bad breath clinics. Quintessence Int 1999; 30(5): 295-301 (Proceedings of the Third International Conference on Breath Odor).
33. Deems DA, Doty RL, Settle RG, Moore-Gillon V, et al. Smell and taste disorders: a study of 750 patients from the University of Pennsylvania Smell and Taste Center. Arch Otolaryngol Head Neck Surg 1991; 117: 519-528.
34. Delanghe G, Ghyselen J, Feenstra L, van Steenberghe D. Experiences of a Belgian multidisciplinary breath odour clinic. In: Bad Breath: A Multidisciplinary Approach. Leuven, Belgium: Leuven University Press; 1996: 199-208.
35. Johnson PW, Yaegaki K, Tonzetich J. Effect of volatile thiol compounds on protein metabolism by human gingival fibroblasts. J Periodont Res 1992; 27: 533-561.
36. Rosenberg M. Bad breath: diagnosis and treatment. Univ Tor Dent J 1990; 3(2): 7-11.
37. Ratcliff PA, Johnson PW. The relationship between oral malodor, gingivitis, and periodontitis: a review. J Periodontol 1999; 70(5): 485-489.
38. Spielman AL, Bivona P, Rifkin BR. Halitosis: a common oral problem. NY State Dent J 1996 Dec; 63(10): 36-42.
39. Nachnani S. The effects of oral rinses on halitosis. CDA Journal 1997; 25(2).
40. Mandel ID. Chemotherapeutic agents for controlling plaque and gingivitis. J Clin Periodontol 1988; 15: 488-498.
41. Grigor J, Roberts AJ. Reduction in the levels of oral malodor precursors by hydrogen peroxide: in vitro and in vivo assessments. J Clin Dent 1992; 3(4): 111-115.
42. Moneib NA, El-Said MA, Shibl AM. Correlation between the in vivo and in vitro antimicrobial properties of commercially available mouthwash preparations. J Chemother 1992; 4(5): 276-280.
43. Pitts G, Brogdon C, Hu L, Masurat T, Pianotti R, Schumann P. Mechanism of action of an antiseptic, anti-odor mouthwash. J Dent Res 1983; 62(6): 738-742.
44. Gagari E, Kabani S. Adverse effects of mouthwash use. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 1995; 80(4): 432-439.
45. Elmore JG, Horwitz RI. Oral cancer and mouthwash use: evaluation of the epidemiologic evidence. Otolaryngol Head Neck Surg 1995; 113(3): 253-261.
46. Nachnani S, Anson D. Effect of Orasan on periodontitis and oral malodor (abstract). J Dent Res 1998; 77(6).
47. Nachnani S. Reduction of oral malodor with zinc-containing chewing gum (abstract). J Dent Res 1999; 78.
48. Beiswanger BB, Mallatt ME, Jackson RD, et al. Clinical effects of a 0.12% chlorhexidine rinse as an adjunct to scaling and root planing. J Clin Dent 1992; 3(2): 33-38.
49. Vaahtoniemi LH, Karlqvist K, Altonen M, Raisanen S. Mouth rinsing with chlorhexidine causes a delayed temporary increase in the levels of oral viridans streptococci. Acta Odontol Scand 1995; 53: 226-229.
50. Quirynen M, Mongardini C, van Steenberghe D. The effect of a 1-stage full-mouth disinfection on oral malodor and microbial colonization of the tongue in periodontitis: a pilot study. J Periodontol 1998 Mar; 69(3): 374-382.
51. Roldan S, Herrera D, Sanz M. Clinical and microbiological effects of an antimicrobial mouthrinse in oral malodor (abstract). Fourth International Conference on Breath Malodor, 1999.
52. van Steenberghe D, Avontroodt B, Vandekerkhove B. A comparative evaluation of a chlorhexidine spray, a chlorhexidine mouthrinse and a sanguinarine-zinc mouthrinse on morning breath odour (abstract). Fourth International Conference on Breath Malodor, 1999.
53. Nachnani S, Anson D. Effect of Orasan on periodontitis and oral malodor (abstract). J Dent Res 1998; 77(6).
54. Gaffar A, Scherl D, Afflitto J, Coleman EJ. The effect of triclosan on mediators of gingival inflammation. J Clin Periodontol 1995; 22(6): 480-484.
55. Kozlovsky A, Goldberg S, Natour I, Rogatky-Gat A, Gelernter I, Rosenberg M. Efficacy of a two-phase oil:water mouthrinse in controlling oral malodor, gingivitis and plaque. J Periodontol 1996; 67(6): 577-578.
56. Rosenberg M, Gelernter I, Barki M, Bar-Ness R. Day-long reduction of oral malodor by a two-phase oil:water mouthrinse as compared to chlorhexidine and placebo rinses. J Periodontol 1992; 63(1): 39-43.
57. Grigor J, Roberts AJ. Reduction in the levels of oral malodor precursors by hydrogen peroxide: in vitro and in vivo assessments. J Clin Dent 1992; 3(4): 111-115.
58. Ogura T, Urade M, Matsuya T. Prevention of malodor from intraoral gauze with the topical use of clindamycin. Oral Surg Oral Med Oral Pathol 1992 Jul; 74(1): 58-62.
59. Rosenberg M, Barki M, Goldberg S, Levitan, et al. Oral malodor reduction by Breathanol (abstract). Fourth International Conference on Bad Breath, UCLA, 1999.
60. Greenstein RB, Goldberg S, Marku-Cohen S, Sterer N, Rosenberg M. Reduction of oral malodor by oxidizing lozenges. J Periodontol 1997 Dec; 68(12): 1176-1181.

Table 9-1. Causes and sources of oral malodor
1. Mouth and tongue sources
2. Nasal, nasopharyngeal, sinus, and oropharyngeal sources
3. Xerostomia-induced oral malodor
4. Primary lower respiratory tract and lung sources
5. Systemic disease-induced malodor
6. Gastrointestinal diseases and disorders-induced malodor
7. Odiferous ingested foods, fluids, and medications

Table 9-2. Methods of assessing oral malodor
1. Self-monitoring oral malodor tests
2. Spousal and friend/confidante feedback
3. Spoon test
4. Home microbial testing
5. Wrist-lick test
6. In-office oral malodor testing
7. Odor judges
8. Microbial and fungal testing
9. Salivary incubation test
10. Artificial noses, including the Halimeter®

Table 9-3. Approaches to the management of oral malodor
1. Local chemical/antibacterial methods
2. Systemic antibacterial methods
3. Mechanical debridement of the tongue
4. Salivary stimulation and/or substitutes
5. Nasal mucous control methods
6. Avoidance of foods, fluids and medications
7. Correction of anatomic abnormalities
8. Medical management of systemic diseases
A heart attack occurs when the flow of blood to the heart is blocked, most often by a build-up of fat, cholesterol and other substances, which form a plaque in the arteries that feed the heart (coronary arteries). The interrupted blood flow can damage or destroy part of the heart muscle. A heart attack, also called a myocardial infarction, can be fatal, but treatment has improved dramatically over the years. It's crucial to call 911 or emergency medical help if you think you might be having a heart attack.

Common heart attack signs and symptoms include:
- Pressure, tightness, pain, or a squeezing or aching sensation in your chest or arms that may spread to your neck, jaw or back
- Nausea, indigestion, heartburn or abdominal pain
- Shortness of breath
- Cold sweat
- Lightheadedness or sudden dizziness

Heart attack symptoms vary
Not all people who have heart attacks have the same symptoms or have the same severity of symptoms. Some people have mild pain; others have more severe pain. Some people have no symptoms, while for others, the first sign may be sudden cardiac arrest. However, the more signs and symptoms you have, the greater the likelihood you're having a heart attack. Some heart attacks strike suddenly, but many people have warning signs and symptoms hours, days or weeks in advance. The earliest warning may be recurrent chest pain (angina) that's triggered by exertion and relieved by rest. Angina is caused by a temporary decrease in blood flow to the heart.

A heart attack differs from a condition in which your heart suddenly stops (sudden cardiac arrest, which occurs when an electrical disturbance disrupts your heart's pumping action and causes blood to stop flowing to the rest of your body). A heart attack can cause cardiac arrest, but it's not the only cause.

When to see a doctor
Act immediately. Some people wait too long because they don't recognize the important signs and symptoms. Take these steps:
- Call for emergency medical help. If you suspect you're having a heart attack, don't hesitate. Immediately call 911 or your local emergency number. If you don't have access to emergency medical services, have someone drive you to the nearest hospital. Drive yourself only if there are no other options. Because your condition can worsen, driving yourself puts you and others at risk.
- Take nitroglycerin, if prescribed to you by a doctor. Take it as instructed while awaiting emergency help.
- Take aspirin, if recommended. Taking aspirin during a heart attack could reduce heart damage by helping to keep your blood from clotting. Aspirin can interact with other medications, however, so don't take an aspirin unless your doctor or emergency medical personnel recommend it. Don't delay calling 911 to take an aspirin. Call for emergency help first.

What to do if you see someone having a heart attack
If you encounter someone who is unconscious, first call for emergency medical help. Then begin CPR to keep blood flowing. Push hard and fast on the person's chest, about 100 compressions a minute. It's not necessary to check the person's airway or deliver rescue breaths unless you've been trained in CPR.

A heart attack occurs when one or more of your coronary arteries become blocked. Over time, a coronary artery can narrow from the buildup of various substances, including cholesterol (atherosclerosis). This condition, known as coronary artery disease, causes most heart attacks. During a heart attack, one of these plaques can rupture and spill cholesterol and other substances into the bloodstream.
A blood clot forms at the site of the rupture. If large enough, the clot can completely block the flow of blood through the coronary artery. Another cause of a heart attack is a spasm of a coronary artery that shuts down blood flow to part of the heart muscle. Use of tobacco and of illicit drugs, such as cocaine, can cause a life-threatening spasm. A heart attack can also occur due to a tear in the heart artery (spontaneous coronary artery dissection). Certain factors contribute to the unwanted buildup of fatty deposits (atherosclerosis) that narrows arteries throughout your body. You can improve or eliminate many of these risk factors to reduce your chances of having a first or subsequent heart attack. Heart attack risk factors include: - Age. Men age 45 or older and women age 55 or older are more likely to have a heart attack than are younger men and women. - Tobacco. Smoking and long-term exposure to secondhand smoke increase the risk of a heart attack. - High blood pressure. Over time, high blood pressure can damage arteries that feed your heart by accelerating atherosclerosis. High blood pressure that occurs with obesity, smoking, high cholesterol or diabetes increases your risk even more. - High blood cholesterol or triglyceride levels. A high level of low-density lipoprotein (LDL) cholesterol (the "bad" cholesterol) is most likely to narrow arteries. A high level of triglycerides, a type of blood fat related to your diet, also ups your risk of heart attack. However, a high level of high-density lipoprotein (HDL) cholesterol (the "good" cholesterol) lowers your risk of heart attack. - Diabetes. Insulin, a hormone secreted by your pancreas, allows your body to use glucose, a form of sugar. Having diabetes — not producing enough insulin or not responding to insulin properly — causes your body's blood sugar levels to rise. Diabetes, especially uncontrolled, increases your risk of a heart attack. - Family history of heart attack. If your siblings, parents or grandparents have had early heart attacks (by age 55 for male relatives and by age 65 for female relatives), you may be at increased risk. - Lack of physical activity. An inactive lifestyle contributes to high blood cholesterol levels and obesity. People who get regular aerobic exercise have better cardiovascular fitness, which decreases their overall risk of heart attack. Exercise is also beneficial in lowering high blood pressure. - Obesity. Obesity is associated with high blood cholesterol levels, high triglyceride levels, high blood pressure and diabetes. Losing just 10 percent of your body weight can lower this risk, however. - Stress. You may respond to stress in ways that can increase your risk of a heart attack. - Illegal drug use. Using stimulant drugs, such as cocaine or amphetamines, can trigger a spasm of your coronary arteries that can cause a heart attack. - A history of preeclampsia. This condition causes high blood pressure during pregnancy and increases the lifetime risk of heart disease. - A history of an autoimmune condition, such as rheumatoid arthritis or lupus. Conditions such as rheumatoid arthritis, lupus and other autoimmune conditions can increase your risk of having a heart attack. Heart attack complications are often related to the damage done to your heart during a heart attack. This damage can lead to the following conditions: - Abnormal heart rhythms (arrhythmias). 
If your heart muscle is damaged from a heart attack, electrical "short circuits" can develop, resulting in abnormal heart rhythms, some of which can be serious, even fatal.
- Heart failure. The amount of damaged tissue in your heart may be so great that the remaining heart muscle can't do an adequate job of pumping blood out of your heart. Heart failure may be a temporary problem that goes away after your heart, which has been stunned by a heart attack, recovers. However, it can also be a chronic condition resulting from extensive and permanent damage to your heart following your heart attack.
- Heart rupture. Areas of heart muscle weakened by a heart attack can rupture, leaving a hole in part of the heart. This rupture is often fatal.
- Valve problems. Heart valves damaged during a heart attack may develop severe, life-threatening leakage problems.

A heart attack usually is diagnosed in an emergency setting. However, if you're concerned about your risk of heart attack, see your doctor to check your risk factors and talk about prevention. If your risk is high, you may be referred to a heart specialist (cardiologist). Here's some information to help you prepare for your appointment.

What you can do
- Be aware of pre-appointment restrictions. When you make the appointment, ask if there's anything you need to do in advance, such as restrict your diet. For a cholesterol test, for example, you may need to fast beforehand.
- Write down your symptoms, including any that seem unrelated to coronary artery disease.
- Write down key personal information, including a family history of heart disease, stroke, high blood pressure or diabetes, and recent major stresses or recent life changes.
- Make a list of medications, vitamins and supplements you're taking.
- Take someone along, if possible. Someone who accompanies you may remember something you miss or forget.
- Be prepared to discuss your diet and exercise habits. If you don't follow a diet or exercise routine, be ready to talk to your doctor about challenges you might face in getting started.
- Write down questions to ask your doctor.

Preparing a list of questions can help you make the most of your time with your doctor. Some basic questions to ask your doctor about heart attack prevention include:
- What tests do I need to determine my current heart health?
- What foods should I eat or avoid?
- What's an appropriate level of physical activity?
- How often should I be screened for heart disease?
- I have other health conditions. How can I best manage these conditions together?
- Are there brochures or other printed material that I can have?
- What websites do you recommend?

Don't hesitate to ask other questions, as well.

What to expect from your doctor
Your doctor is likely to ask you a number of questions, including:
- Have you had symptoms of heart disease, such as chest pain or shortness of breath? If so, when did they begin?
- Do these symptoms persist or come and go?
- How severe are your symptoms?
- What, if anything, seems to improve your symptoms? If you have chest pain, does it improve with rest?
- What, if anything, worsens your symptoms? If you have chest pain, does strenuous activity make it worse?
- Do you have a family history of heart disease or heart attacks?
- Have you been diagnosed with high blood pressure, diabetes or high cholesterol?

What you can do in the meantime
It's never too early to make healthy lifestyle changes, such as quitting smoking, eating healthy foods and becoming more physically active.
These are primary lines of defense against having a heart attack. Ideally, your doctor should screen you during regular physical exams for risk factors that can lead to a heart attack.

If you're in an emergency setting for symptoms of a heart attack, you'll be asked to describe your symptoms and have your blood pressure, pulse and temperature checked. You'll be hooked up to a heart monitor and will almost immediately have tests to see if you're having a heart attack. Tests will help check if your signs and symptoms, such as chest pain, indicate a heart attack or another condition. These tests include:
- Electrocardiogram (ECG). This first test done to diagnose a heart attack records the electrical activity of your heart via electrodes attached to your skin. Impulses are recorded as waves displayed on a monitor or printed on paper. Because injured heart muscle doesn't conduct electrical impulses normally, the ECG may show that a heart attack has occurred or is in progress.
- Blood tests. Certain heart enzymes slowly leak out into your blood if your heart has been damaged by a heart attack. Emergency room doctors will take samples of your blood to test for the presence of these enzymes.

If you've had a heart attack or one is occurring, doctors will take immediate steps to treat your condition. You may also undergo these additional tests:
- Chest X-ray. An X-ray image of your chest allows your doctor to check the size of your heart and its blood vessels and to look for fluid in your lungs.
- Echocardiogram. During this test, sound waves directed at your heart from a wandlike device (transducer) held on your chest bounce off your heart and are processed electronically to provide video images of your heart. An echocardiogram can help identify whether an area of your heart has been damaged by a heart attack and isn't pumping normally or at peak capacity.
- Coronary catheterization (angiogram). A liquid dye is injected into the arteries of your heart through a long, thin tube (catheter) that's fed through an artery, usually in your leg or groin, to the arteries in your heart. The dye makes the arteries visible on X-ray, revealing areas of blockage.
- Exercise stress test. In the days or weeks after your heart attack, you may also undergo a stress test. Stress tests measure how your heart and blood vessels respond to exertion. You may walk on a treadmill or pedal a stationary bike while attached to an ECG machine. Or you may receive a drug intravenously that stimulates your heart similar to exercise. Your doctor may also order a nuclear stress test, which is similar to an exercise stress test, but uses an injected dye and special imaging techniques to produce detailed images of your heart while you're exercising. These tests can help determine your long-term treatment.
- Cardiac computerized tomography (CT) or magnetic resonance imaging (MRI). These tests can be used to diagnose heart problems, including the extent of damage from heart attacks. In a cardiac CT scan, you lie on a table inside a doughnut-shaped machine. An X-ray tube inside the machine rotates around your body and collects images of your heart and chest. In a cardiac MRI, you lie on a table inside a long tubelike machine that produces a magnetic field. The magnetic field aligns atomic particles in some of your cells. When radio waves are broadcast toward these aligned particles, they produce signals that vary according to the type of tissue they are. The signals create images of your heart.
Heart attack treatment at a hospital
With each passing minute after a heart attack, more heart tissue loses oxygen and deteriorates or dies. The main way to prevent heart damage is to restore blood flow quickly.

Medications given to treat a heart attack include:
- Aspirin. The 911 operator may instruct you to take aspirin, or emergency medical personnel may give you aspirin immediately. Aspirin reduces blood clotting, thus helping maintain blood flow through a narrowed artery.
- Thrombolytics. These drugs, also called clotbusters, help dissolve a blood clot that's blocking blood flow to your heart. The earlier you receive a thrombolytic drug after a heart attack, the greater the chance you'll survive and with less heart damage.
- Antiplatelet agents. Emergency room doctors may give you other drugs to help prevent new clots and keep existing clots from getting larger. These include medications, such as clopidogrel (Plavix) and others, called platelet aggregation inhibitors.
- Other blood-thinning medications. You'll likely be given other medications, such as heparin, to make your blood less "sticky" and less likely to form clots. Heparin is given intravenously or by an injection under your skin.
- Pain relievers. You may receive a pain reliever, such as morphine, to ease your discomfort.
- Nitroglycerin. This medication, used to treat chest pain (angina), can help improve blood flow to the heart by widening (dilating) the blood vessels.
- Beta blockers. These medications help relax your heart muscle, slow your heartbeat and decrease blood pressure, making your heart's job easier. Beta blockers can limit the amount of heart muscle damage and prevent future heart attacks.
- ACE inhibitors. These drugs lower blood pressure and reduce stress on the heart.

Surgical and other procedures
In addition to medications, you may undergo one of the following procedures to treat your heart attack:
- Coronary angioplasty and stenting. Doctors insert a long, thin tube (catheter) that's passed through an artery, usually in your leg or groin, to a blocked artery in your heart. If you've had a heart attack, this procedure is often done immediately after a cardiac catheterization, a procedure used to locate blockages. This catheter is equipped with a special balloon that, once in position, is briefly inflated to open a blocked coronary artery. A metal mesh stent may be inserted into the artery to keep it open long term, restoring blood flow to the heart. Depending on your condition, your doctor may opt to place a stent coated with a slow-releasing medication to help keep your artery open.
- Coronary artery bypass surgery. In some cases, doctors may perform emergency bypass surgery at the time of a heart attack. If possible, your doctor may suggest that you have bypass surgery after your heart has had time, about three to seven days, to recover from your heart attack. Bypass surgery involves sewing veins or arteries in place beyond a blocked or narrowed coronary artery, allowing blood flow to the heart to bypass the narrowed section.

Once blood flow to your heart is restored and your condition is stable, you're likely to remain in the hospital for several days.

Your lifestyle affects your heart health. The following steps can help you not only prevent but also recover from a heart attack:
- Avoid smoke. The most important thing you can do to improve your heart's health is to not smoke. Also, avoid being around secondhand smoke. If you need to quit, ask your doctor for help.
- Control your blood pressure and cholesterol levels. If one or both of these is high, your doctor can prescribe changes to your diet and medications. Ask your doctor how often you need to have your blood pressure and cholesterol levels monitored.
- Get regular medical checkups. Some of the major risk factors for heart attack (high blood cholesterol, high blood pressure and diabetes) cause no symptoms early on. Your doctor can perform tests to check for these conditions and help you manage them, if necessary.
- Exercise regularly. Regular exercise helps improve heart muscle function after a heart attack and helps prevent a heart attack by helping you to control your weight, diabetes, cholesterol and blood pressure. Exercise needn't be vigorous. Walking 30 minutes a day, five days a week can improve your health.
- Maintain a healthy weight. Excess weight strains your heart and can contribute to high cholesterol, high blood pressure and diabetes.
- Eat a heart-healthy diet. Saturated fat, trans fats and cholesterol in your diet can narrow arteries to your heart, and too much salt can raise blood pressure. Eat a heart-healthy diet that includes lean proteins, such as fish and beans, plenty of fruits and vegetables and whole grains.
- Manage diabetes. High blood sugar is damaging to your heart. Regular exercise, eating well and losing weight all help to keep blood sugar levels at more-desirable levels. Many people also need medication to manage their diabetes.
- Control stress. Reduce stress in your day-to-day activities. Rethink workaholic habits and find healthy ways to minimize or deal with stressful events in your life.
- If you drink alcohol, do so in moderation. For healthy adults, that means up to one drink a day for women and men older than age 65, and up to two drinks a day for men age 65 and younger.

It's never too late to take steps to prevent a heart attack, even if you've already had one. Here are ways to prevent a heart attack:
- Medications. Taking medications can reduce your risk of a subsequent heart attack and help your damaged heart function better. Continue to take what your doctor prescribes, and ask your doctor how often you need to be monitored.
- Lifestyle factors. You know the drill: Maintain a healthy weight with a heart-healthy diet, don't smoke, exercise regularly, manage stress and control conditions that can lead to heart attack, such as high blood pressure, high cholesterol and diabetes.

Having a heart attack is scary. How will this affect your life? Will you be able to return to work or resume activities you enjoy? Will it happen again? Here are some suggestions to help you cope:
- Deal with your emotions. Fear, anger, guilt and depression are all common after a heart attack. Discussing them with your doctor, a family member or a friend may help. Or consider talking to a mental health provider or joining a support group. It's important to mention signs or symptoms of depression to your doctor. Cardiac rehabilitation programs can be effective in preventing or treating depression after a heart attack.
- Attend cardiac rehabilitation. Many hospitals offer programs that may start while you're in the hospital and, depending on the severity of your attack, continue for weeks to months after you return home. Cardiac rehabilitation programs generally focus on four main areas: medications, lifestyle changes, emotional issues and a gradual return to your normal activities.
Sex after a heart attack
Some people worry about having sex after a heart attack, but most people can safely return to sexual activity after recovering from a heart attack. When you can resume sexual activity will depend on your physical comfort, psychological readiness and previous sexual activity. Ask your doctor when it's safe to resume sexual activity. Some heart medications may affect sexual function. If you're having problems with sexual dysfunction, talk to your doctor.
For years, researchers have been working to discover which cellular processes allow humans to learn and store memories, and how these processes are compromised by diseases such as schizophrenia and Alzheimer's. Researchers at NIH say they believe they have uncovered one piece of this puzzle.

Neurons in the brain communicate with one another through the use of neurotransmitters, chemicals that stimulate electrical signals in neighboring neurons. Studies in mice have shown that the timing of when the neurotransmitter acetylcholine is released into the hippocampus may play a key role in regulating synaptic strength.

Researchers Jerrel Yakel, PhD, and Zhenglin Gu, PhD, investigators at the National Institute of Environmental Health Sciences (NIEHS), developed a study group of mice whose neurons produced a light-sensitive protein. They were then able to use a laser to stimulate these neurons to release acetylcholine. They demonstrated that, when the neurons were stimulated to release acetylcholine at just the right time in the hippocampus, they could induce a cellular change in synapses that use glutamate. Glutamate, another neurotransmitter, has been found to be key in learning and memory and in long-term potentiation, a long-lasting enhancement in signal transmission between neurons.

The NIEHS research team found that timing was crucial: even a few hundredths of a second in the timing of the acetylcholine release could affect how adjacent neurons responded. A difference as small as 20 milliseconds in the timing of the laser could result in very different reactions (a toy illustration of this timing dependence follows the reference below).

According to Yakel, these findings might represent a first step in studying disorders that affect learning and memory, such as schizophrenia and Alzheimer's. People with Alzheimer's have been shown to have low levels of acetylcholine in the cerebral cortex. A similar deficit has been shown among patients with schizophrenia, with acetylcholine depletion being linked to visual hallucination in schizophrenic patients.

Reference: Gu Z, Yakel JL. Timing-dependent septal cholinergic induction of dynamic hippocampal synaptic plasticity. Neuron. 2011 Jul 14;71(1):155-65. PMID: 21745645; PMCID: PMC3134790.
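To give a feel for what a millisecond-scale timing window means in practice, here is a toy model in Python. The 20 ms window, the outcome labels, and the sign convention are invented for illustration; they are not the measured parameters of the Gu and Yakel study.

```python
# Toy model of timing-dependent plasticity. The window width and the
# outcome labels are illustrative assumptions inspired by the reported
# sensitivity to ~20 ms shifts; they do not reproduce the study's data.

def plasticity_outcome(ach_ms: float, glu_ms: float,
                       window_ms: float = 20.0) -> str:
    """Classify a hypothetical outcome from the offset between an
    acetylcholine (ACh) release time and a glutamatergic input time,
    both given in milliseconds."""
    offset = glu_ms - ach_ms
    if 0.0 <= offset <= window_ms:
        return "potentiation (ACh shortly precedes glutamate)"
    if -window_ms <= offset < 0.0:
        return "different response (ACh follows glutamate)"
    return "no lasting change (inputs too far apart)"

# Sweep a few offsets to show how sharply the outcome can flip.
for offset in (-30.0, -10.0, 5.0, 15.0, 40.0):
    print(f"offset {offset:+6.1f} ms -> {plasticity_outcome(0.0, offset)}")
```

The point of the sketch is only that a classifier over a narrow time window behaves discontinuously: shifting one input by a few tens of milliseconds moves the pair across a boundary and changes the predicted outcome entirely.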
With a featured publication in the Aug. 7 issue of Science, Montana State University researchers have made a significant contribution to the understanding of a new field of DNA research, with the acronym CRISPR, that holds enormous promise for fighting infectious diseases and genetic disorders. The MSU-led research provides the first detailed blueprint of a multi-subunit "molecular machinery" that bacteria use to detect and destroy invading viruses.

"We generally think of bacteria as making us sick, but rarely do we consider what happens when the bacteria themselves get sick. Viruses that infect bacteria are the most abundant biological agents on the planet, outnumbering their bacterial hosts 10 to 1," said Blake Wiedenheft, senior author of the paper and assistant professor in MSU's Department of Microbiology and Immunology.

"Bacteria have evolved sophisticated immune systems to fend off viruses. We now have a precise molecular blueprint of a surveillance machine that is critical for viral defense," Wiedenheft said.

These immune systems rely on a repetitive piece of DNA in the bacterial genome called a CRISPR. CRISPR is an acronym that stands for Clustered Regularly Interspaced Short Palindromic Repeats. These repetitive elements maintain a molecular memory of viral infection by inserting short segments of invading viral DNA into the DNA of the "defending" bacteria. This information is then used to guide the bacteria's immune system to destroy the invading viral DNA.

The molecular blueprint of the surveillance complex was determined by a team of scientists in Wiedenheft's lab at MSU using a technique called X-ray crystallography. Ryan Jackson, a postdoctoral fellow in the Wiedenheft lab, collected X-ray diffraction data from synchrotron radiation sources located in Chicago, Berkeley, and Stanford. "Interpreting these X-ray diffraction patterns is a complex mathematical problem and Ryan is one of a few people in the world capable of interpreting this data," Wiedenheft said.

To help determine the structure, Wiedenheft sent Jackson to Duke University for a biannual meeting on X-ray crystallography. At the meeting, Jackson sat between "two of the greatest minds in the field of X-ray crystallography" -- Randy Read from the University of Cambridge and Thomas Terwilliger from Los Alamos National Lab -- whose expertise facilitated the computational analysis of the data, which was critical for determining the structure.

"The structure of this biological machine is conceptually similar to an engineer's blueprint, and it explains how each of the parts in this complex assemble into a functional complex that efficiently identifies viral DNA when it enters the cell," Wiedenheft said. "This surveillance machine consists of 12 different parts and each part of the machine has a distinct job. If we're missing one part of the machine, it doesn't work."

Understanding how these machines work is leading to unanticipated new innovations in medicine, biotechnology, and agriculture. These CRISPR-associated machines are programmable nucleases (molecular scissors) that are now being exploited for precisely altering the DNA sequence of almost any cell type of interest. "In nature these immune systems evolved to protect bacteria from viruses, but we are now repurposing these systems to cut viral DNA out of human cells infected with HIV. You can think of this as a form of DNA surgery. Therapies that were unimaginable may be possible in the future," Wiedenheft said.
"We know the genetic basis for many plant, animal, and human diseases, and these CRISRP-associated nucleases are now being used in research settings to surgically remove or repair defective genes," Wiedenheft said. "This technology is revolutionizing how molecular genetics is done and MSU has a large group of researchers that are at the cutting edge of this technological development." Wiedenheft, a native of Fort Peck, Mont., was recently recruited by MSU from UC-Berkeley. Wiedenheft explained that the research environment, colleagues and support at MSU is second to none and the opportunity to move back to this great state was a "no-brainer." In addition to Jackson, Read, Terwilliger and Wiedenheft, MSU co-authors on the Science paper are research associate Sarah Golden, graduate student Paul van Erp and undergraduate Joshua Carter. Additional collaborators included co-authors Edze Westra, Stan Brouns and John van der Oost from Wageningen University in the Netherlands. Research in the Wiedenheft lab is supported by the National Institutes of Health, the National Science Foundation EPSCoR, the M.J. Murdock Charitable Trust, and the MSU Agricultural Experimental Station. Atomic coordinates for the Cascade structure have been deposited into the public repository (Protein Data Bank) under access code 4TVX. Evelyn Boswell | Eurek Alert! New Technique Maps Elusive Chemical Markers on Proteins 03.07.2015 | Salk Institute for Biological Studies New approach to targeted cancer therapy 03.07.2015 | CECAD - Cluster of Excellence at the University of Cologne Wind turbines could be installed under some of the biggest bridges on the road network to produce electricity. So it is confirmed by calculations carried out by a European researchers team, that have taken a viaduct in the Canary Islands as a reference. This concept could be applied in heavily built-up territories or natural areas with new constructions limitations. The Juncal Viaduct, in Gran Canaria, has served as a reference for Spanish and British researchers to verify that the wind blowing between the pillars on this... New technique combines electron microscopy and synchrotron X-rays to track chemical reactions under real operating conditions A new technique pioneered at the U.S. Department of Energy's Brookhaven National Laboratory reveals atomic-scale changes during catalytic reactions in real... Think of an object made of iron: An I-beam, a car frame, a nail. Now imagine that half of the iron in that object owes its existence to bacteria living two and a half billion years ago. Think of an object made of iron: An I-beam, a car frame, a nail. Now imagine that half of the iron in that object owes its existence to bacteria living two and... A team of scientists including PhD student Friedrich Schuler from the Laboratory of MEMS Applications at the Department of Microsystems Engineering (IMTEK) of... The three-year clinical trial results of the retinal implant popularly known as the "bionic eye," have proven the long-term efficacy, safety and reliability of... 25.06.2015 | Event News 16.06.2015 | Event News 11.06.2015 | Event News 03.07.2015 | Press release 03.07.2015 | Agricultural and Forestry Science 03.07.2015 | Health and Medicine
ultrasound
WHO IT'S OFFERED TO: All women.
WHEN IT'S OFFERED: At about 20 weeks (also offered as part of the first-trimester screening).
WHAT IT SCREENS FOR: A wide variety of problems. "An anatomical survey of the entire fetus is typically conducted," explains midwife Barbara McFarlin.
HOW IT WORKS: Using a transducer placed over the abdomen, sound waves create pictures of the fetus.
HOW EFFECTIVE IT IS: McFarlin says it detects approximately 50 percent of heart defects; O'Brien says it's excellent at detecting NTDs. However, O'Brien adds: "Fifty percent of babies born with Down syndrome had normal ultrasound results."
WHAT IF ...? If ultrasound does detect a potential problem, you'll need to decide if you want an amnio to determine whether it could be part of a chromosomal or genetic syndrome.

glucose screening
This test is done at weeks 26 to 28 to check for gestational diabetes (high blood sugar during pregnancy), which increases the risk of having a too-large baby and needing a C-section.

group B streptococcus
At 36 weeks or so, you'll be tested for the presence of potentially dangerous bacteria that could be passed to the baby during delivery. It involves a painless swab of your rectum and vagina.

Issues to consider
Perhaps the biggest issue surrounding prenatal testing is what you would do if a problem were detected. "Women think they have to terminate, but that's not true," McFarlin says. "You can continue the pregnancy and use that time to educate yourself about the baby's condition, talk to other families dealing with the same issue, maybe even prepare for the delivery." For instance, if the baby will need surgery for a heart malformation at birth, you can find a hospital with that capability and deliver there.

Testing involves many such big decisions—ones that your doctor may not be able to discuss at length. "A discussion about prenatal testing can take up to an hour, time that many doctors just don't have," O'Brien says, adding that knowing all the details of the various tests may not even be within their area of expertise. As a result, many doctors routinely refer patients to a genetic counselor, who specializes in this area.

One important note: If a diagnostic test indicates a problem, get a second opinion from a perinatal (high-risk) specialist.
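To see what the 50 percent detection rate quoted for ultrasound means in practice, the short calculation below converts sensitivity into expected missed cases. The prevalence figure is an assumed round number chosen for illustration, not a statistic from this article.

```python
# Illustration of what a 50% detection (sensitivity) figure implies.
# The prevalence below is an assumed round number, not from the article.

def screen_outcomes(pregnancies: int, prevalence: float, sensitivity: float):
    """Return (detected, missed) counts for a screening test."""
    affected = pregnancies * prevalence
    detected = affected * sensitivity
    return detected, affected - detected

detected, missed = screen_outcomes(
    pregnancies=10_000,
    prevalence=0.008,   # assumed ~8 affected per 1,000 pregnancies
    sensitivity=0.50,   # "detects approximately 50 percent of heart defects"
)
print(f"Detected by ultrasound: {detected:.0f}")
print(f"Missed (normal scan despite a defect): {missed:.0f}")
```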
Clinical gene therapy may be one step closer, thanks to a new twist on an old class of molecules. A group of University of Illinois researchers, led by professors Jianjun Cheng and Fei Wang, have demonstrated that short spiral-shaped proteins can efficiently deliver DNA segments to cells. The team published its work in the journal Angewandte Chemie.

"The main idea is these are new materials that could potentially be used for clinical gene therapy," said Cheng, a professor of materials science and engineering, of chemistry and of bioengineering.

Researchers have been exploring two main pathways for gene delivery: modified viruses and nonviral agents such as synthetic polymers or lipids. The challenge has been to address both toxicity and efficiency. Polypeptides, or short protein chains, are attractive materials because they are biocompatible, fine-tunable and small. "There are very good in vitro transfection agents available, but we cannot use them in vivo because of their toxicity or because some of the complexes are too large," Cheng said. "Using our polypeptides, we can control the size down to the 200 nanometer range, which makes it a very interesting delivery system for in vivo applications."

A polypeptide called poly-L-lysine (PLL) was an early contender in gene delivery studies. PLL has positively charged side chains (molecular structures that stem from each amino acid link in the polypeptide chain), so it is soluble in the watery cellular environment. However, PLL gradually fell into disuse because of its limited ability to deliver genes to the inside of cells, a process called transfection, and its high toxicity.

Cheng postulated that PLL's low efficiency could be a function of its globular shape, as polypeptides with charged side chains tend to adopt a random coil structure instead of a more orderly spiral helix. "We never studied the connections of conformation with transfection efficiency, because we were never able to synthetically make materials containing both cationic charge and a high percentage of helical structures," Cheng said. "This paper demonstrated for the first time that helicity has a huge impact on transfection efficiencies."

Earlier this year, Cheng's group developed a method of making helical polypeptides with positively charged side chains. To test whether a helical polypeptide could be an efficient gene delivery agent, the group assembled a library of 31 helical polypeptides that are stable over a broad pH range and can bond to DNA for delivery. Most of them outperformed PLL and a few outstripped a leading commercial agent called polyethyleneimine (PEI), which is notorious for its toxicity although it is highly efficient. The helical molecules even worked on some of the hardest cells to transfect: stem cells and fibroblast cells.

"People kind of gave up on polypeptide-based materials for gene deliveries because PLL had low efficiency and high toxicity," Cheng said. "The polypeptide that we designed, synthesized and used in this study has very high efficiency and also well-controlled toxicities. With a modified helical polypeptide, we demonstrated that we can outperform many commercial agents."

The polypeptides Cheng and his co-workers developed can adopt helical shapes because the side chains are longer, so that the positive charges do not interfere with the protein's winding. The positive charges readily bind to negatively charged DNA, forming complexes that are internalized into cellular compartments called endosomes.
The helical structures rupture the endosomal membranes, letting the DNA escape into the cell. To confirm that the spiral polypeptide shape is the key to transfection, the researchers then synthesized two batches of the most efficient polypeptide: one batch with a helical shape, one with the usual random coil. The helical polypeptide far exceeded the random-coil polypeptide in both efficiency and stability. "This demonstrates that the helicity is very important, because the polymer has exactly the same chemical makeup; the only difference is the structure," said Cheng, who also is associated with the Institute for Genomic Biology and the Beckman Institute for Advanced Science and Technology, both at the U. of I. Next, the researchers plan to further explore their helical polypeptides' properties, especially their cell-penetrating abilities. They hope to control sequence and structure with precision for specific applications, including gene delivery, drug delivery, cell-membrane penetration and antimicrobial action. More information: The paper, "Reactive and Bioactive Cationic α-Helical Polypeptide Template for Nonviral Gene Delivery," is available online on Angewandte Chemie.
Acupuncture has been used in East-Asian medicine for thousands of years to treat pain, possibly by activating the body's natural painkillers. But how it works at the cellular level is largely unknown. Using brain imaging, a University of Michigan study is the first to provide evidence that traditional Chinese acupuncture affects the brain's long-term ability to regulate pain. The results appear online ahead of print in the September issue of NeuroImage.

In the study, researchers at the U-M Chronic Pain and Fatigue Research Center showed acupuncture increased the binding availability of mu-opioid receptors (MOR) in regions of the brain that process and dampen pain signals - specifically the cingulate, insula, caudate, thalamus and amygdala. Opioid painkillers, such as morphine, codeine and other medications, are thought to work by binding to these opioid receptors in the brain and spinal cord.

"The increased binding availability of these receptors was associated with reductions in pain," says Richard E. Harris, Ph.D., researcher at the U-M Chronic Pain and Fatigue Research Center and a research assistant professor of anesthesiology at the U-M Medical School. One implication of this research is that patients with chronic pain treated with acupuncture might be more responsive to opioid medications, since the receptors seem to have more binding availability, Harris says.

These findings could spur a new direction in the field of acupuncture research following recent controversy over large studies showing that sham acupuncture is as effective as real acupuncture in reducing chronic pain. "Interestingly both acupuncture and sham acupuncture groups had similar reductions in clinical pain," Harris says. "But the mechanisms leading to pain relief are distinctly different."

The study participants included 20 women who had been diagnosed with fibromyalgia, a chronic pain condition, for at least a year, and who experienced pain at least 50 percent of the time. During the study they agreed not to take any new medications for their fibromyalgia pain. Patients had positron emission tomography, or PET, scans of the brain during the first treatment; the scans were repeated a month later, after the eighth treatment.

More information: NeuroImage, Vol. 5, No. 83, 2009
Source: University of Michigan
Carnegie Mellon neuroscientists have identified what may be the first known common denominator underlying two types of epilepsy. Their findings offer new hope to people suffering from the disease. It turns out that a disruption in the "BK ion channel" is the link. Not a science major? Ions are charged atoms or molecules, and ion channels regulate the flow of ions across the membrane inside every cell in the body. In the search for new drug therapies, ion channels are a favorite target because they're involved in a wide range of the body's biological processes. Although BK channels have been linked to a rare, familial form of epilepsy, their involvement in other types of seizure disorders has never before been demonstrated. The new findings are published in the June issue of Neurobiology of Disease. The researchers discovered that BK channels become abnormally active after a seizure. This disruption results in the neurons becoming overly excitable, which may be associated with the development of epilepsy. Carnegie Mellon scientists were able to reverse this abnormal excitability using a BK channel antagonist, which returned the post-seizure electrical activity to normal levels. "The fact that the BK channel previously has been linked with familial epilepsy and with generalized seizures in subjects without a genetic predisposition points to a common therapeutic pathway," said Alison Barth, an associate professor of biological sciences at Carnegie Mellon's Mellon College of Science. She added, "We've shown that BK antagonists can be very effective in normalizing aberrant electrical activity in neurons, which suggests that BK channel antagonists might be a new weapon in the arsenal against epilepsy." Epilepsy is a neurological disorder marked by abnormal electrical activity in the brain that leads to recurring seizures. According to the Epilepsy Foundation, no cause can be found in about seven out of 10 people with epilepsy. However, researchers have identified a genetic component in some types of epilepsy. This study establishes, for the first time, a shared component between different types of epilepsy. "Although research has revealed that many types of inherited epilepsy are linked to mutations in different ion channels, there has been little overlap between these ion channels and those channels that are affected by sporadic or acquired forms of epilepsy," Barth said. "BK channels could represent a common pathway activated in familial and sporadic cases of epilepsy."
Assisted reproductive technologies (ART) are procedures used to treat infertility in which both eggs and sperm are manipulated outside the body. ART procedures involve surgically removing eggs from a woman's ovaries, combining them with sperm in the laboratory, then returning the fertilized eggs or embryos to the woman's body or donating them to another woman. Today over one percent of all babies born in the United States are conceived using ART procedures such as in vitro fertilization (IVF), egg donation, and surrogacy. IVF is by far the most widely used assisted reproductive technology, accounting for 99 percent of all ART procedures.

IVF is a multi-step treatment process. Several weeks prior to the actual procedure, you will take hormonal contraception to suppress your own ovarian function. Following that you will receive one or a combination of fertility drugs, often by injection, to stimulate the production of eggs. During this time, you will have to make several visits to the clinic, where providers will carefully monitor the number and size of the eggs in each ovary. Once the eggs are ready, the actual IVF procedure consists of two major steps: egg retrieval and embryo transfer.

During the egg retrieval, you will be sedated while the mature eggs are surgically removed from your ovaries. This typically takes place at your local fertility clinic. Follicles from both your left and right ovaries are retrieved through a process called follicular aspiration. Follicular aspiration involves inserting a hollow needle through the vaginal wall and into the ovaries. The needle is then used to suction out any follicles that may be present in the ovaries. In order to guide the needle into the appropriate area of the ovary, a transvaginal ultrasound will be used. Once the needle is in the proper position, any follicles inside the ovary will be aspirated out. The follicle aspirates will be immediately examined under a microscope to ensure the presence of viable eggs. This is different from the typical menstrual cycle, in which the ovaries process many eggs but only one mature egg is released into the tubes and can be fertilized.

After the egg retrieval process you may feel a little tender in your abdomen. You will also feel fatigued as a result of the anesthetic. After several hours of monitoring, you will be allowed to go home. You may notice some light vaginal spotting. You will also receive antibiotics to prevent infection.

After the retrieval process, your eggs will be joined with sperm from your partner or a donor in the lab. If the eggs are fertilized, they will be allowed to divide for three to five days, then placed back into your—or a gestational carrier's—uterus. This is called the embryo transfer. The embryo transfer catheter is loaded with the embryos and passed through the cervical opening up the middle of the uterus. An abdominal ultrasound is used simultaneously to view the catheter tip and ensure its proper placement. When the catheter tip reaches the ideal location, the embryos are released out of the catheter to the lining of the uterus. Again, this is different from the natural conception process, when typically the sperm meets only one egg in the tubes and after fertilization the one embryo falls into the uterus and implants in the uterine lining to establish a singleton pregnancy. Exceptions occur, and in about two percent of all natural pregnancies more than one embryo grows in the uterus.
However, because the norm is to transfer more than one embryo, in IVF more than 30 percent of the pregnancies are multiple. Because multiple pregnancies pose more risks for mothers and babies, most experts now recommend single embryo transfers for most women. For more information, see The Importance of Single Embryo Transfers in IVF. For most women, the embryo transfer procedure feels similar to a Pap test and does not require any sedation or other drugs. You will likely feel no or minimal pain or discomfort. About nine to eleven days after the transfer, a blood pregnancy test can be done. If one or more embryos have successfully implanted into the uterus, hCG hormone will be detectable. If there are more viable embryos than are transferred, families may choose to freeze (cryo-preserve) the extra embryos for future use. In addition to saving thousands of dollars in costs, this decision protects women from having to repeat stressful and potentially risky drug therapies to stimulate ovulation again. With recent technological advances, IVF cycles using frozen embryos have the same chance of success as those using fresh embryos. The success rates of IVF vary greatly, based on many factors, including the quality of the implanted embryos, the skill of the clinic, and, most important, a woman’s age. According to the CDC, on average, a woman younger than 35 who is using her own eggs and fresh embryos has about a 42 percent chance per cycle of getting pregnant and giving birth to a live baby. Women between the ages of 35 and 37 have about a 32 percent chance, women 38 to 40 about a 22 percent chance, and women 41 to 42 about a 12 percent chance. To find out more, see Clinic Statistics and Success Rates. In general, IVF costs about $10,000 to $15,000 per cycle. Insurance coverage for ART is patchy. Many employers, in an effort to contain costs, don’t purchase such benefits for their employees. Some states have laws mandating that employers offer infertility treatment benefits. Medicaid does not cover ART, not even in states where by law employers must offer the benefits. To learn more about benefits you might be eligible for, check with your health care insurance plan. The infertility association Resolve keeps track of the states that mandate coverage of infertility treatment and describes the various laws.
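The per-cycle rates quoted above can be translated into the chance of at least one live birth over several cycles. The sketch below assumes, for simplicity, that cycles are independent and that the per-cycle rate stays constant; both are simplifications, since real success rates shift with age and clinical history.

```python
# Chance of at least one live birth over n IVF cycles, assuming each
# cycle is independent with a constant per-cycle success rate p:
#   P(at least one) = 1 - (1 - p) ** n
# This is a simplification; real per-cycle rates change over time.

per_cycle_rates = {
    "under 35": 0.42,
    "35-37":    0.32,
    "38-40":    0.22,
    "41-42":    0.12,
}

for age_group, p in per_cycle_rates.items():
    cumulative = [1 - (1 - p) ** n for n in (1, 2, 3)]
    formatted = ", ".join(f"{c:.0%}" for c in cumulative)
    print(f"{age_group:>8}: after 1, 2, 3 cycles -> {formatted}")
```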
It is best to take in food by mouth whenever possible. Some patients may not be able to take in enough food by mouth because of problems from cancer or cancer treatment. Medicine to increase appetite may be used. A patient who is not able to take in enough food by mouth may be fed using enteral nutrition (through a tube inserted into the stomach or intestines) or parenteral nutrition (infused into the bloodstream). The nutrients are given in liquid formulas that have water, protein, fats, carbohydrates, vitamins, and/or minerals.

Nutrition support can improve a patient's quality of life during cancer treatment, but there are harms that should be considered before making the decision to use it. The patient and health care providers should discuss the harms and benefits of each type of nutrition support.

Enteral nutrition is giving the patient nutrients in liquid form (formula) through a tube that is placed into the stomach or small intestine. The following types of feeding tubes may be used:
- A nasogastric tube, which is inserted through the nose and down the throat into the stomach or small intestine.
- A gastrostomy tube, which is inserted through the skin of the abdomen directly into the stomach.
- A jejunostomy tube, which is inserted through the skin of the abdomen directly into the small intestine.

The type of formula used is based on the specific needs of the patient. There are formulas for patients who have special health conditions, such as diabetes. Formula may be given through the tube as a constant drip (continuous feeding) or 1 to 2 cups of formula can be given 3 to 6 times a day (bolus feeding). Enteral nutrition is sometimes used when the patient is able to eat small amounts by mouth, but cannot eat enough for health. Nutrients given through a tube feeding add the calories and nutrients needed for health. If enteral nutrition is to be part of the patient's care after leaving the hospital, the patient and caregiver will be trained to do the nutrition support care at home.

Parenteral nutrition is used when the patient cannot take food by mouth or by enteral feeding. Parenteral feeding does not use the stomach or intestines to digest food. Nutrients are given to the patient directly into the blood, through a catheter (thin tube) inserted into a vein. These nutrients include proteins, fats, vitamins, and minerals. Parenteral nutrition is used only in patients who need nutrition support for five days or more.

A central venous catheter is placed beneath the skin and into a large vein in the upper chest. The catheter is put in place by a surgeon. This type of catheter is used for long-term parenteral feeding.

A peripheral venous catheter is placed into a vein in the arm. A peripheral venous catheter is put in place by trained medical staff. This type of catheter is usually used for short-term parenteral feeding.

The patient is checked often for infection or bleeding at the place where the catheter enters the body. If parenteral nutrition is to be part of the patient's care after leaving the hospital, the patient and caregiver will be trained to do the nutrition support care at home.

Going off parenteral nutrition support needs to be done slowly and is supervised by a medical team. The parenteral feedings are decreased by small amounts over time until they can be stopped, or as the patient is changed over to enteral or oral feeding.
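As a rough worked example of the bolus schedule mentioned above, the snippet below converts "1 to 2 cups, 3 to 6 times a day" into daily volume and calories. The cup size and the calorie density are assumptions (standard formulas range from about 1 to 2 kcal/mL); actual regimens are prescribed by the care team.

```python
# Rough arithmetic for the bolus feeding schedule described above.
# Cup volume and calorie density are assumptions for illustration only;
# the actual regimen is prescribed by the care team.

CUP_ML = 240        # assumed volume of one cup, in mL
KCAL_PER_ML = 1.0   # assumed density of a standard formula

for cups, feedings in [(1, 3), (2, 6)]:  # low and high ends of the range
    daily_ml = cups * CUP_ML * feedings
    daily_kcal = daily_ml * KCAL_PER_ML
    print(f"{cups} cup(s) x {feedings} feedings = "
          f"{daily_ml} mL/day (~{daily_kcal:.0f} kcal at {KCAL_PER_ML:g} kcal/mL)")
```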
Race Influences Liver Transplants
WebMD News Archive

Jan. 25, 2002 -- Previous studies have shown that blacks fare worse than other groups after a kidney transplant. Now, researchers have shown the same holds true for liver transplantation. Apparently, blacks, as well as Asians, are significantly more likely than other races to experience organ rejection or to die following the procedure. The reason for this unfortunate discrepancy remains unclear.

Paul Thuluvath, MD, of Johns Hopkins University School of Medicine, and colleagues reviewed records for every liver transplant performed in the United States between 1988 and 1996. The information included age, sex, race, blood type, and cause of death for both donors and recipients. They found that at both two and five years after the transplantation, survival rates for blacks and Asians were significantly lower than for whites or Hispanics. Blacks and Asians were also more likely to experience transplant rejection. Even after the researchers took all other known risk factors for transplant failure into account, race still stood out as an independent predictor of poor survival for blacks and Asians.

"African Americans and Asians have a worse outcome after liver transplant compared with white Americans and Hispanics," the researchers write.

What is happening here, and what can be done about it? There are several possible explanations. First, poor matching of blood and genetic variables between liver donors and black recipients may be to blame. Second, "poor socioeconomic status, and lack of insurance benefits resulting in inadequate post-transplant care," could be a factor. Also, there's the potentially important fact that black patients were an average of seven years younger at the time of transplant, and thus sicker than other patients.

The most likely explanation, the researchers write, is "that immunological factors, yet unidentified, might be contributing to chronic rejection. Moreover, most of the currently available [antirejection] drugs were tested in predominantly white Americans and there might be a need to test these drugs more rigorously in minorities." "The higher rate of chronic rejection in African Americans and a relatively worse outcome in other minority races merits further examination," they conclude.
What About Mange?
By Jenny Blaney

Mange was first identified as a pest in pigs 140 years ago. It continues to this day to be a challenge to veterinarians and remains a costly proposition in commercial pig operations. Don't assume that because your potbelly is a pet in the house that the following information does not apply to your situation. Sarcoptes scabiei var. suis is just as content to munch on your potbelly as a commercial pig.

Symptoms of mange can be anywhere from subclinical to very subtle to obvious. Potbellies with subclinical mange could be regarded as "carriers" with no visible signs of the parasite. These are the most difficult cases to spot. They do not rub or scratch nor do they have any discoloration of the skin. There are no "flakes" or dandruff. There are no lesions. However, when subclinical potbellies are subjected to stress such as traveling to the vet or a show, a sudden cold or hot change in the weather, or a new member added to (or subtracted from) a household, mange can spontaneously erupt. A gilt coming into her first heat can suddenly break out with mange.

Symptoms of mange can be very subtle, but left untreated will develop into more obvious signs. Symptoms characteristic of mange infestation are as follows:
1. The pig's skin is dry and scaly, like "dandruff." (Potbellies are known for their dry skin, making it even more difficult to differentiate between mange and a normal dry skin condition.)
2. The pig begins to rub against objects - a black pig will leave white "tracks" on the body where it has rubbed against furniture, etc.
3. Tiny bumps and/or scabs appear just under the surface of the skin, most often found behind the ears, under the front legs and on the chest, between the back legs and on the ankles just above the hard hoof. These bumps become more prevalent anywhere the skin is thin or moist or both. These same moist areas take on an orange cast in color, more easily seen on pigs with white skin, but present on black pigs as well. The orange color will wash off only to reappear in two or three days.
4. Ears begin to exude excessive amounts of reddish-brown debris. Ears sometimes have a bad smell.
5. Eyes have the same reddish-brown, crusty matter in the corners and sometimes on the eyelashes.
6. Eyes begin to tear, sometimes to the point of leaving tear "tracks" down the face. The pig looks like he is crying.

A pig may have all of the above symptoms without having started to scratch or rub. A pig may rub or scratch a lot with only one or two of the above symptoms. Some pigs just have filthy ears and eyes that "cry" but no other symptoms. As with most syndromes, some pigs seem to have a higher resistance threshold while other pigs are super sensitive to mange infestation.

Left untreated, a pig with some or all of the symptoms above will develop a "chronic" condition that is classic and easy to recognize. Aside from the stated indicators of mange, there are some additional signs to look for:
1. scaly, scabby, thickened skin
2. coat thin and/or actual hair loss
3. black skin becomes dark gray
4. orange cast is more prevalent
5. on top of the back between the shoulder blades will be a greasy patch due to constant localized irritation

Since chronic and/or obvious cases of mange are more prone to being treated, I would like to focus further on the less obvious subclinical "carriers." Often potbellies are vaccinated and wormed twice a year as part of their normal medical management routine.
In addition, many owners worm in between those visits to the veterinarian. A preexisting case of mange is probably kept somewhat under control in these cases, although it is never really eradicated. Therefore, lingering subtle symptoms are present but not always recognized. Skin and hair coat damage from mange mites is gradual and may be misinterpreted as due to inadequate diet and/or vitamins. Many times, I have been told of a change in temperament of the pig. The pig may become a little more lethargic, less "sweet," cranky and less tolerant of being handled. Since worming is being done at least twice a year and there are no obvious symptoms, mange is seldom considered the culprit. In almost all cases the dirty ears are the big tip-off. Proper treatment for mange, even without a confirmed diagnosis, almost always results in improvement in the general condition, appearance, and temperament of the pig.

Understanding the life cycle of the mange mite helps tailor treatments accordingly in order to kill the mites continuously as more eggs hatch, thereby effectively breaking up the cycle. Since mites eventually find their way to the pig's head, especially the ears, I have found it necessary to treat the ears specifically and in conjunction with the rest of the pig's body. The canals in the ears are dark and moist and serve as a perfect protected environment for the mange mite. The ear tissue is very thin, which makes it easy to penetrate. Often the ears will exude excessive amounts of dark red-brown debris. Similar debris is often found in the corners of the eyes and in any wrinkles about the face.

Alex Hogg, DVM, University of Nebraska, shared a diagnostic approach with me:
- Scrape deeply in the ear of the pig with a curette or small melon baller. Get some skin and debris - sometimes you need to see a little blood to be sure you are deep enough.
- Put all the debris in a small, clear, plastic petri dish.
- Cover the debris with one teaspoon of baby oil.
- Incubate the petri dish at 37 degrees Celsius (body temperature) overnight.
- The mites will come out of the debris, skin, etc. and can be observed swimming in the baby oil the next day.
- Place under a 10X dissecting microscope to examine.

Ears: My personal feeling is that mites are able to survive for a time in the debris in the ears, rather than in the ear tissue itself. The ear is the only place where debris can build up significantly without dropping off the body. By the time the mites have exhausted the debris and need to tunnel back into the skin, any mange treatment given the pig has worn off. If the ears are treated separately and simultaneously with the rest of the protocol, the whole pig clears up faster. I have had very good results with the following:
- Prepare a mix of 1/2 hydrogen peroxide and 1/2 isopropyl alcohol and put it in a small plastic bottle with a tapered spout on the end. Warm the solution before each use - the pig will object less.
- Squirt the warm solution directly into each ear. Try to rub and massage the ears to work the solution down deep into the ears.
- The solution will help loosen any large pieces of debris that may be lodged where you can't see. The pig will shake its head, which will also help free up chunks of gunk.
- Try to clean as much discolored debris out of each ear as you can. Never go deeper into the ear than you can see.
- Place the recommended number of drops of Tresaderm® into each ear and massage. This is a dog/cat ear preparation for ear mites. It also contains an antibiotic and an anti-inflammatory that calm the sensitive ear tissue.
Note: I didn't say any of this would be easy . . . I'm just saying it works! Repeat this procedure every other day for five treatments. Always clean the ears first so that the medication can get down deep enough to work. After five treatments, check the ears weekly for signs of re-infestation. If needed, repeat the above steps for five more treatments.

There are various treatment protocols for mange on the body, depending on the severity of infestation. Injectables remain the drugs of choice since they can reach all parts of the pig, except perhaps the ears for the reasons previously stated. The following, as well as the earlier described ear treatment, is based on my personal experience. As with any health issue concerning your potbelly, always consult with your veterinarian and follow his/her recommendations. Before administering any medication to your potbelly you need an accurate weight. That doesn't mean a "guesstimate." That means actually weigh your pig. For any drug to be successful, the dosage must be accurate.

Prevention and Maintenance
A preventive maintenance regimen can be achieved by using a pour-on topical, an oral, or an injectable preparation designed to kill both internal and external parasites. Pour-ons work best on pigs with a fairly heavy coat since the medication needs the hair shaft to penetrate into the body. Frequent close examination of the pig's skin and ears should inform you of the presence or absence of mange. If no signs of mange are evident, periodic treatment according to your veterinarian's recommendations can keep your pig mange free. Spring and fall seem to be the two most common times of year when mange mites are prevalent. However, depending on climate and immediate environment, infestation can occur any time of the year. I have found the following to be effective. Ivomec® Pour-On for Cattle and Ivomec® Injection for Cattle and Swine are ivermectin-based drugs. Ivomec® is the Merial Ltd. registered trademark for ivermectin. Another injectable drug option is Dectomax®. This is the registered trademark of Pfizer Animal Health for the drug doramectin. Dectomax® lasts eighteen days in the pig's system. It is virtually pain free. Dectomax® may prove to be a good alternative to Ivomec® for this reason. Seek the advice of your veterinarian as to which product will produce the best result and the recommended treatment regimen.

The Occasional Outbreak
For the occasional outbreak brought on by exposure to another pig or simply environmental conditions, topical or injectable preparations can be quite effective in stopping the mite in its tracks, provided treatment commences early on. Use the product and protocol recommended by your vet, but increase the number of treatments to at least three and possibly four. Treatments must be given at proper intervals. In addition, the ears should be treated at the same time following the procedure outlined. If these protocols seem like "overkill", bear in mind that it is worth the expense and time spent, rather than dealing with re-infestation. In my experience, when mange becomes a chronic condition in a pig, the luxury of a topical preparation is no longer an option. A more aggressive approach will be needed. The most effective treatment is going to be the injection, and it will ultimately be the least stressful for the pig. Coupled with Tresaderm® in the ears, the injections will need to be timed to interrupt the life cycle of the mite.
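Since the dosage must track the pig's weight and the injections must land on schedule, here is a minimal sketch of that arithmetic. The 300 µg/kg rate and the 1% (10 mg/mL) concentration are assumptions taken from the commonly cited swine label for injectable ivermectin, and the 10-day spacing matches the chronic-mange protocol that follows; confirm drug, dose, and interval with your veterinarian before relying on any of it.

```python
# Minimal dose-and-schedule sketch for a weight-based injectable.
# The dose rate and concentration are assumptions based on the commonly
# cited swine label for 1% ivermectin; confirm with your veterinarian.
from datetime import date, timedelta

DOSE_UG_PER_KG = 300           # assumed label rate: 300 micrograms per kg
CONCENTRATION_MG_PER_ML = 10   # 1% solution = 10 mg/mL

def dose_ml(weight_lb: float) -> float:
    """Convert an accurate weight in pounds to an injection volume in mL."""
    weight_kg = weight_lb / 2.2046
    dose_mg = weight_kg * DOSE_UG_PER_KG / 1000
    return dose_mg / CONCENTRATION_MG_PER_ML

# Three injections at 10-day intervals (the spacing in the protocol below).
start = date(2024, 3, 1)       # example start date
for i in range(3):
    when = start + timedelta(days=10 * i)
    print(f"Injection {i + 1} on {when}: {dose_ml(120):.2f} mL for a 120 lb pig")
```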
The whole treatment protocol should be structured around the level of infestation, the possible exposure to re-infestation (as in a multi-pig situation), and the pig's general environment such as bedding, housing, etc. For chronic mange, my protocol is:
- Isolate the pig(s) in question, providing a clean, dry space either indoors or outdoors with fresh bedding.
- Begin with the first injection at the appropriate dose level, based on an accurate weight.
- Begin ear treatment as previously described simultaneously with the injection.
- Administer the second dose 10 days later.
- Continue ear treatments with Tresaderm®.
- While it is not necessary to treat the bedding, it is advisable to change the bedding at the same time each injection is given.
- Administer the third injection 10 days later.
- Discontinue treating the ears.
- Depending upon each pig's response, a fourth injection may be needed.
- Continue to check the ears frequently for any signs of re-infestation.

Occasionally, in severe and/or long-standing cases of mange, I have encountered an additional problem. Due to self-mutilation by constantly scratching and rubbing, some pigs can break open the skin enough to set off a gram-positive staph infection, which further complicates their condition. This infection impedes healing and promotes further hair loss; while mites may no longer be present, the overall condition of the pig does not improve. In these cases antibiotic therapy has been necessary.

Continue to examine the pig even after mange treatments are complete. Appreciable improvement in skin, hair coat, eyes, ears and even temperament can take as long as thirty days. Remember that mange is highly contagious from pig to pig through physical contact. If one animal in your group of pigs is showing signs of mite infestation, the chances are others can or will be plagued as well. In multiple-pig situations, a preventive mange control program should be followed. Treat all pigs on the premises simultaneously at least biannually. When considering any mange prevention or control program, check with your veterinarian as to the appropriate drug, dosage, and administration based on your individual needs.

Again, the mange mite can be a tough critter to eradicate. The best ammunition against infestation is as follows:
- Do frequent skin and ear checks.
- Get timely diagnosis by your veterinarian of the presence of mites.
- Have a good understanding of the life cycle of the mite.
- Obtain an accurate weight of your pig.
- Consult with your veterinarian on appropriate medication.
- There is no shortcut - all medication directions must be followed. Subsequent treatments must be given at proper intervals in order to break the life cycle of the mite.
- Treat the ears simultaneously with Tresaderm®.

Maybe we can conquer the mange mite dilemma before another 140 years go by! After all, we humans are WAY more mighty than that pesky, microscopic mite!

About the Author: Jenny Blaney has been monitoring potbellied pigs since 1989. Her special interests lie in genetics, medical and health issues related to these unique critters. If you have any questions please call Jenny Blaney 518-747-3494
When cuts, scratches, and burns demand quick treatment, the pharmacist can identify the best products and procedures to promote healing and prevent infection.

Pharmacists often encounter patients seeking guidance in selecting nonprescription products for the management and treatment of minor wounds and burns. Proper wound care is essential to the healing process and can reduce or prevent the possibility of scar formation and secondary bacterial skin infections.1 Wound healing begins immediately after an injury occurs; the 3 phases of the healing process are the inflammatory, proliferative, and maturation (remodeling) phases.1

Currently, a host of nonprescription first aid products are available for the self-treatment of minor wounds, such as scrapes, scratches, cuts, and burns. Pharmacists can provide guidance on the selection and proper use of these products. They can also ascertain the appropriateness of self-treatment and refer patients to seek medical care from their primary health care provider when self-care is not deemed appropriate. Because certain pharmacologic agents or medical conditions can hinder or impair healing, pharmacists should remind patients about the importance of adhering to proper protocols for minor wound care and encourage them to seek the advice of their primary care provider if needed.

Classification of Wounds and Burns
Both the type and severity of a wound or burn are critical in deciding the best care protocol. Typically, wounds are classified according to their acuity and depth.1 Common types of acute wounds include abrasions, punctures, and lacerations.1 Abrasions involve an injury of the epidermal portion of the skin, typically caused by rubbing or friction.1,2 Punctures occur when a sharp object has pierced the epidermal layer and lodged into the dermis or deeper tissues,1 and lacerations are caused by sharp objects that have pierced several layers of skin.1,2 In general, if acute wounds such as abrasions or puncture wounds do not extend beyond the dermis, self-care is warranted.1 Patients with chronic wounds should always be advised to immediately seek medical attention for proper wound care to prevent further complications, such as infection. With regard to burns, superficial burns—along with some superficial partial-thickness burn injuries—are the only types of burns that are suitable for self-treatment.1 Individuals with deeper burns should always be referred for medical evaluation.

Treating Minor Wounds and Burns
The overall goals of wound and burn treatment are to promote healing, prevent infection or further complications, provide physical protection, and minimize the effects of scarring.1 OTC products currently on the market for self-treatment of wounds include topical antibiotics (eg, bacitracin, neomycin, and polymyxin B sulfate), wound irrigants, wound antiseptics, various types of bandages including medicated bandages with topical antibiotics, and products that help reduce the appearance of scars. Bandages are available in waterproof form, liquid bandage form, and latex-free form for those with allergies. Nonprescription products available for minor burns include skin protectants, skin protectants with and without antiseptics, and local anesthetics.
In some cases, patients experiencing pain associated with a minor burn may benefit from taking OTC nonsteroidal anti-inflammatory drugs or acetaminophen on a short-term basis if appropriate and if no contraindications are present.7

Wound Irrigants and First Aid Antiseptics
Wound irrigation may be warranted to clean the wound surface if dirt or debris is present.1 In this case, a normal saline solution or a mild soap and water can be used. In addition, topical antiseptics can be used to disinfect the skin, but should only be applied to intact skin up to the edges of the wound.1 Examples of first aid antiseptics include ethyl alcohol (48%-95%), isopropyl alcohol (50%-91.3%), iodine topical solution USP, iodine tincture USP, povidone-iodine complex (5%-10%), camphorated phenol, quaternary ammonium compounds, and topical hydrogen peroxide solution (0.13%).1

Topical First Aid Antibiotics
Topical OTC antibiotics are indicated for the prevention of infection in minor cuts, wounds, scrapes, and burns. They should be applied after a wound has been cleansed, and prior to the application of a sterile dressing.1 Band-Aid (Johnson & Johnson) offers a medicated bandage product that contains a topical antibiotic for added convenience. Neosporin ointment (Johnson & Johnson) is available in a small, portable container or in a single-use dosage form.

Skin protectants such as allantoin and white petrolatum are recognized by the FDA as safe and effective for the temporary protection of minor burns and provide only symptomatic relief.3 Skin protectants can shield burns from mechanical irritation caused by friction, prevent the drying of the stratum corneum, and minimize the pain associated with minor burns.3 Topical anesthetics, which can help relieve pain associated with minor burns, work by inhibiting the transmission of signals from pain receptors. These products are typically applied no more than 3 to 4 times a day as needed.3 The 2 most common topical anesthetics found in nonprescription products are benzocaine, in strengths of 5% to 20%, and lidocaine (0.5%-4%).3

For wounds and burns to heal properly, proper care is essential. Studies have shown that with uncovered wounds, there are increased risks of scarring, possible infection, and reinjury. Covering the wound to create a moist healing environment is now considered the standard of care, because it appears to accelerate healing and may minimize scarring and reduce the incidence of infection.1,4,5 Typically, dressings should be changed every 3 to 5 days unless otherwise directed, as frequent dressing changes may remove resurfacing epithelial layers and may hinder or slow down the healing process.1,4,5 Patients should be counseled to continue using dressings until the wound shows signs of healing, and should always be advised to seek medical care for wounds that do not exhibit any signs of healing after 5 days of self-treatment, or wounds that show signs of infection.

Patients should be reminded that using ice on burns should be avoided, because it may cause vasoconstriction to the area and make the burn worse.3,6 If a burn does not show signs of healing or appears to worsen or show signs of infection after 7 days of treatment, patients should be advised to seek further medical care.3 During counseling, pharmacists can also suggest that patients have a first aid kit handy in case of emergencies, and be sure to routinely check that the items in the kit are not expired.
Patients can either purchase first aid kits that already include the essential items, or assemble their own kits. The American Red Cross Web site includes first aid tips and lists the 10 most common first aid myths. To access the site, go to www.redcross.org/email/safetynet/v1n9/firstaid.asp. Ms. Terrie is a clinical pharmacy writer based in Haymarket, Virginia. 1. Benard D. Minor wounds and secondary bacterial infections. In: Berardi R, Newton G, McDermott JH, et al, eds. Handbook of Nonprescription Drugs. 16th ed. Washington, DC: American Pharmacists Association; 2009:759-773. 2. Lacerations. Merck Manual for Healthcare Professionals Online. www.merckmanuals.com/professional/sec22/ch328/ch328a.html#v1110280. Accessed July 18, 2011. 3. Prince V. Minor burns and sunburns. In: Berardi R, Newton G, McDermott JH, et al, eds. Handbook of Nonprescription Drugs. 16th ed. Washington, DC: American Pharmacists Association; 2009:745-758. 4. Proper wound care: clean, treat, protect. Band Aid Web site. www.bandaid.com/proper-wound-care/clean-treat-protect. Accessed July 18, 2011. 5. Neosporin Web site. www.neosporin.com/firstaid/pdf/sciencefactsheet.pdf. Accessed July 18, 2011. 6. Basic burn care/first aid burn treatment. Massachusetts General Hospital Web site. www2.massgeneral.org/burns/patients/. Accessed July 20, 2011.
REHOVOT, Israel - October 22, 1998 - For many leukemia sufferers, bone marrow transplantation is their only hope. Unfortunately, in about 40 percent of terminal cases, patients fail to find a perfectly matched donor among relatives or in any of the donor registries. Now, scientists from Israel's Weizmann Institute of Science and Perugia University in Italy have shown that thanks to a method they developed, transplants using mismatched marrow can be as effective as those in which the donor and recipient are fully matched. The results of their latest study, reported in the October 22 issue of the New England Journal of Medicine, have raised hopes that one day a donor will be found for virtually every candidate for a bone marrow transplant. Normally, a donor and recipient are considered compatible when they are matched for all six immunological markers on their chromosomes - three inherited from the mother and three from the father. In the method developed by a team headed by Prof. Yair Reisner of Weizmann's Immunology Department and Prof. Massimo Martelli of Perugia's Policlinico Monteluce, the donor and the recipient need to be matched for only three markers. Such a partial match is always found between parents and children, and there is a 75-percent chance of finding it between siblings. Even among the extended family, the chances of finding a partially compatible donor are fairly good. A key element of the Weizmann-Perugia method is the use of extremely large doses of donor marrow. The donor is treated with hormone injections that release large numbers of stem cells from the bone marrow into the bloodstream. In a procedure known as leukapheresis, the stem cells are selectively removed from blood withdrawn from the body, and the remaining blood is re-infused to the donor. In another crucial step, donated stem cells are then "cleansed" to erase the characteristics that contribute to rejection in mismatched transplants. In the study, the Perugia-Weizmann team traces the results of dozens of such mismatched transplants performed on patients with high-risk acute myeloid leukemia or acute lymphoid leukemia between 1995 and 1997. Of the 43 patients treated, 16 were free of disease when the study results were summed up. To appreciate this figure, one must keep in mind that all patients had failed to respond to any other treatment, and without a transplant would have certainly died. The rest of the patients were alive but had a relapse of leukemia, or had died of the disease or of transplant-related complications. These results are similar to the success rate obtained in this category of patients with perfectly matched transplants from unrelated donors. According to the scientists, the study shows that their method overcomes the main obstacles limiting the use of mismatched transplants - namely, graft failure and an adverse immunological reaction called graft-versus-host disease. "Since most patients have a mismatched relative (who can serve as a bone marrow donor), advances in this area will greatly increase the availability of transplants as curative therapy," the researchers conclude in their report. Several hospitals in Israel, Germany, Austria and the United States have begun to introduce the Perugia-Weizmann transplantation method. In January 1999, Prof. Reisner and Prof. Martelli will host an international symposium in Eilat, Israel, for some 60 physicians interested in applying the new method. 
The participants, most of them heads of transplantation departments in their respective hospitals, will come from Austria, Denmark, France, Germany, Israel, Italy, Spain, Switzerland, The Netherlands, United Kingdom and United States. Prof. Reisner holds the Henry H. Drake Professorial Chair in Immunology at the Weizmann Institute. This research was funded in part by Rowland Schaefer, Miami, Florida; the Pauline Fried Estate, Los Angeles, California; the Concern Foundation, Los Angeles, California; the Israel Academy of Sciences and Humanities; Comitato per la Vita "Daniele Chianelli"; Associazione Italiana Ricerche sul Cancro (AIRC); Associazione Italiana Leucemie e Linfomi (AIL), and Istituto Superiore di Sanita, Italy-USA Program on Therapy of Tumors. The Weizmann Institute of Science, in Rehovot, Israel, is one of the world's foremost centers of scientific research and graduate study. Its 2,400 scientists, students, technicians, and engineers pursue basic research in the quest for knowledge and the enhancement of the human condition. New ways of fighting disease and hunger, protecting the environment, and harnessing alternative sources of energy are high priorities.
National Organization for Rare Disorders, Inc.

It is possible that the main title of the report Gastroparesis is not the name you expected. Please check the synonyms listing to find the alternate name(s) and disorder subdivision(s) covered by this report.

- gastroparesis diabeticorum
- gastrointestinal autonomic neuropathy
- gastric stasis
- gastric atony
- delayed gastric emptying
- gastric dysmotility
- severe functional dyspepsia

Gastroparesis (abbreviated GP) is a clinical syndrome characterized by sluggish emptying of solid food (and, more rarely, liquid nutrients) from the stomach, causing persistent digestive symptoms, especially nausea. It primarily affects young to middle-aged women, but is also known to affect younger children and males. Diagnosis is made based upon a radiographic gastric emptying test. Diabetics and those who acquire gastroparesis for unknown (idiopathic) causes represent the two largest groups of gastroparetic patients; however, numerous etiologies, both rare and common, can lead to a gastroparesis syndrome.

Gastroparesis is also known as delayed gastric emptying, an older term that does not adequately describe all the motor impairments that may occur within the gastroparetic stomach. Furthermore, there is no expert agreement on the use of the term gastroparesis: some specialists reserve it for grossly impaired emptying of the stomach, retaining the labels delayed gastric emptying or functional dyspepsia (non-ulcer dyspepsia) for less pronounced evidence of impaired emptying. These terms are all very subjective, and there is no scientific basis by which to separate functional dyspepsia from classical gastroparesis except symptom intensity. In both conditions there is significant overlap in treatment, symptomatology and underlying physiological disturbances of stomach function. For the most part, the finding of delayed emptying (gastric stasis) provides a "marker" for a gastric motility problem. Regardless, the symptoms generated by the stomach dysmotility greatly impair quality of life for the vast majority of patients, and disable about 1 in 10 patients with the condition.

While delayed emptying of the stomach is the defining clinical feature of gastroparesis, the degree of delay in emptying does not always match the intensity of digestive symptoms. For instance, some diabetics may exhibit pronounced gastric stasis yet suffer very little from the classical gastroparetic symptoms of nausea, vomiting, reflux, abdominal pain, bloating, fullness and loss of appetite; rather, erratic blood-glucose control and life-threatening hypoglycemic episodes may be the only indication of diabetic gastroparesis. In another subset of patients (diabetic and non-diabetic), whose nausea is so disabling that their ability to eat, sleep or carry out activities of daily living is disrupted, gastric emptying may be normal, near normal or only intermittently delayed. In such cases, a gastric neuro-electrical dysfunction, or gastric dysrhythmia (commonly found in association with gastroparesis syndrome), may be at fault. Functional dyspepsia, gastric dysrhythmias and gastroparesis are therefore descriptive labels sharing similar symptoms and perhaps representing a single entity of disordered gastric neuromuscular function. For this reason, a more encompassing term, gastropathy, can be used interchangeably with gastroparesis.
Resources:

- Association of Gastrointestinal Motility Disorders, Inc., 12 Roberts Drive, Bedford, MA 01730
- American Diabetes Association, 1701 N. Beauregard Street, Alexandria, VA 22311
- Digestive Disease National Coalition, 507 Capitol Court, NE, Washington, DC 20002
- NIH/National Institute of Diabetes, Digestive & Kidney Diseases, Office of Communications & Public Liaison, Bldg 31, Rm 9A06, 31 Center Drive, MSC 2560, Bethesda, MD 20892-2560
- International Foundation for Functional Gastrointestinal Disorders, 700 W. Virginia St., 201, Milwaukee, WI 53217
- Genetic and Rare Diseases (GARD) Information Center, PO Box 8126, Gaithersburg, MD 20898-8126
- International Scleroderma Network, 7455 France Ave So #266, Edina, MN 55435-4702
- Gastroparesis & Dysmotilities Association, 5520 Dalhart Hill N.W., Calgary, AB, T3A 1S9

For a Complete Report

This is an abstract of a report from the National Organization for Rare Disorders (NORD). A copy of the complete report can be downloaded free from the NORD website for registered users. The complete report contains additional information including symptoms, causes, affected population, related disorders, standard and investigational therapies (if available), and references from medical literature. For a full-text version of this topic, go to www.rarediseases.org and click on Rare Disease Database under "Rare Disease Information".

The information provided in this report is not intended for diagnostic purposes. It is provided for informational purposes only. NORD recommends that affected individuals seek the advice or counsel of their own personal physicians.

This disease entry is based upon medical information available through the date at the end of the topic. Since NORD's resources are limited, it is not possible to keep every entry in the Rare Disease Database completely current and accurate. Please check with the agencies listed in the Resources section for the most current information about this disorder.

For additional information and assistance about rare disorders, please contact the National Organization for Rare Disorders at P.O. Box 1968, Danbury, CT 06813-1968; phone (203) 744-0100; web site www.rarediseases.org; or email [email protected]

Last Updated: 3/16/2012
Copyright 2009, 2012 National Organization for Rare Disorders, Inc.
July 22, 2014

Vomiting blood is the forceful expulsion of stomach contents up through the esophagus (the swallowing tube) and out of the mouth, where the vomit contains blood. The blood may appear bright red or dark red, and it may come up alone or mixed with food.

Alternative Names

Hematemesis; Blood in the vomit

Causes

The upper GI tract includes the mouth, throat, esophagus, stomach, and the first part of the small intestine. Blood that is vomited may come from any one of these places. For example, vomiting that is very forceful or continues for a very long time may cause a tear in the small blood vessels of the throat or the esophagus, producing streaks of blood in the vomit. Swollen veins in the walls of the lower part of the esophagus, and sometimes the stomach, may begin to bleed; these veins are present in people with severe liver damage. Other causes may include:

- Bleeding ulcer in the stomach, first part of the small intestine, or esophagus
- Defects in the blood vessels of the GI tract
- Swelling, irritation, or inflammation of the esophagus lining (esophagitis) or the stomach lining (gastritis)
- Swallowing blood (for example, after a nosebleed)
- Tumors of the stomach or esophagus

Home Care

Although not all situations are the result of a major medical problem, this is difficult to know without a medical evaluation. Seek immediate medical attention.

When to Contact a Medical Professional

Call your doctor or go to the emergency room if vomiting of blood occurs -- this requires immediate medical evaluation.

What to Expect at Your Office Visit

The doctor will examine you and ask questions such as:

- When did the vomiting begin?
- Have you ever vomited blood before?
- How much blood was in the vomit?
- What color was the blood? (Bright red or like coffee grounds?)
- Have you had any recent nosebleeds, surgeries, dental work, vomiting, stomach problems, or severe coughing?
- What other symptoms do you have?
- What medical conditions do you have?
- What medicines do you take?
- Do you drink alcohol or smoke?

Tests that may be done include:

- Blood work, such as a complete blood count (CBC), blood chemistries, blood clotting tests, and liver function tests
- Esophagogastroduodenoscopy (EGD)
- Nuclear medicine scan
- Rectal examination
- Tube through the nose into the stomach to check for blood

If you have vomited a lot of blood, you may need emergency treatment, which may include:

- Blood transfusions
- Fluids through a vein
- Medications to decrease stomach acid
- Possible surgery if bleeding does not stop

References

Overton DT. Gastrointestinal bleeding. In: Tintinalli JE, Kelen GD, Stapczynski JS, Ma OJ, Cline DM, eds. Emergency Medicine: A Comprehensive Study Guide. 6th ed. Columbus, OH: McGraw-Hill; 2006: chap 74.

Henneman PL. Gastrointestinal bleeding. In: Marx JA, Hockberger RS, Walls RM, et al, eds. Rosen's Emergency Medicine: Concepts and Clinical Practice. 7th ed. Philadelphia, PA: Mosby Elsevier; 2009: chap 22.
In the new study, seniors who regularly used olive oil, a healthy monounsaturated fat, had a 41% lower risk of stroke compared to their counterparts who never used olive oil.

"This is the first study to suggest that greater consumption of olive oil may lower risk of stroke in older subjects, independently of other beneficial foods found in the Mediterranean diet," study author Cecilia Samieri, PhD, with the University of Bordeaux and the National Institute of Health and Medical Research in Bordeaux, France, says in an email.

So what exactly is it about olive oil that may lower stroke risk? There are several theories, she says. It may be that people choose olive oil over saturated, artery-clogging fats. "Moreover, previous research found that the polyphenols from virgin olive oil account specifically for its ability to lower oxidized low-density lipoprotein (LDL)," or bad cholesterol. High cholesterol levels are a known risk factor for stroke.

Researchers analyzed the medical records of 7,625 people aged 65 and older from three French cities who had no history of stroke. Participants were categorized based on their olive oil intake; they mainly chose extra virgin olive oil, which is widely available in France. During slightly more than five years of follow-up, there were 148 strokes.

It is too early to issue any broad public health recommendations about the use of olive oil for stroke protection. "These findings from an observational study should be confirmed by a randomized, controlled trial," Samieri says.

Researchers also looked at the blood levels of oleic acid in a subgroup of people and found that higher levels of oleic acid correlated with higher use of olive oil. Oleic acid, the main monounsaturated fat found in olive oil, is not a specific blood marker for olive oil use and could be elevated as a result of eating other foods such as butter and duck fat.

Too Early to Make Recommendations

This is one of very few studies that looks at olive oil intake and risk for neurologic diseases, including stroke, Nikolaos Scarmeas, MD, of Columbia University in New York City, says in an email. "Maybe olive oil improves vascular risk factors such as hypertension, dyslipidemia, diabetes, heart disease, obesity, which may in turn reduce stroke risk, or it may be that olive oil is anti-inflammatory or an antioxidant." Scarmeas writes an accompanying editorial. "We do not know for sure, and we do not know which particular aspect of olive oil is the most relevant to stroke," he says. "Following a 'healthy diet' emerges as an important strategy for prevention of neurological disease, but remains to be proved."

Cathy A. Sila, MD, the George M. Humphrey II Professor of Neurology and the director of the Stroke & Cerebrovascular Center at the Neurological Institute Case Medical Center of Case Western Reserve University School of Medicine in Cleveland, Ohio, says the benefits of diet and lifestyle choices in disease prevention are more important than ever, given the rising costs of health care. She agrees with the study authors and editorialist that it is too early to make any recommendations about olive oil intake and stroke risk. She calls the findings "intriguing" but says they "do not equate a randomized clinical trial and should be used with appropriate caution in making broad recommendations."

Suzanne Steinbaum, MD, director of women and heart disease at Lenox Hill Hospital in New York, says moderate use of olive oil in cooking and on bread may help protect against stroke in people older than 65.
“Olive oil is a healthy fat and it can reduce cholesterol and inflammation, and has been shown to help reduce the incidence of heart disease,” she says. “Now, we see it may reduce stroke risk in people older than 65."
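To put the reported numbers in rough context, here is a back-of-the-envelope reading of the cohort figures quoted above. This is an unadjusted illustration only: the study's own analysis controlled for many covariates, and the person-years calculation below ignores dropout and deaths.

```python
# Crude, unadjusted illustration using the figures reported above.
participants = 7625
strokes = 148
years_of_followup = 5.0          # "slightly more than five years"

person_years = participants * years_of_followup   # assumes full follow-up
crude_rate = strokes / person_years * 1000
print(f"Crude stroke incidence: {crude_rate:.1f} per 1,000 person-years")

# A 41% lower risk for regular users corresponds to a hazard ratio of 0.59.
print(f"Implied hazard ratio: {1 - 0.41:.2f}")
```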
Coping Skills Training for Adolescents With Fibromyalgia

Juvenile fibromyalgia is a chronic pain condition that can cause considerable suffering and difficulty in an adolescent's day-to-day activities. The purpose of this study is to determine whether coping skills training, when combined with usual medical care, can reduce pain and disability in adolescents with fibromyalgia.

Study hypotheses: 1) Adolescents who receive coping skills training combined with their usual medical care will show significantly greater reductions in functional disability, pain, and depressive symptoms at the end of the acute treatment phase than adolescents who receive fibromyalgia education with their usual medical care. 2) Adolescents who receive coping skills training with their usual medical care will show significantly lower levels of functional disability, pain, and depressive symptoms at the end of a six-month maintenance phase than adolescents who receive fibromyalgia education with their usual medical care.

Intervention: Behavioral: Coping Skills Training

Study Design: Allocation: Randomized; Endpoint Classification: Efficacy Study; Intervention Model: Parallel Assignment; Masking: Single Blind (Investigator); Primary Purpose: Treatment

Official Title: Randomized Clinical Trial in Juvenile Fibromyalgia

Outcome Measures:

- Change in FDI (Functional Disability Inventory) Scores at End of Study [Time Frame: Baseline and 6 months (end of study)] [Designated as safety issue: No]. Functional disability is measured by the Functional Disability Inventory (FDI), which assesses the ability to engage in usual physical, social and recreational activities. Scores range from 0 (no disability) to 60 (extreme disability) and are interpreted as No/Mild disability (0-12), Moderate disability (13-29), and Severe disability (30-60).
- Pain Intensity [Time Frame: 9 weeks and 6 months] [Designated as safety issue: No]
- Depressive Symptoms [Time Frame: 9 weeks and 6 months] [Designated as safety issue: No]

Study Start Date: July 2004
Study Completion Date: July 2010
Primary Completion Date: July 2010 (final data collection date for primary outcome measure)

Study Arms:

- Experimental: Coping Skills. Patients will receive 8 weeks of behavioral training in pain coping strategies. Intervention: Behavioral: Coping Skills Training, 8 weekly sessions of behavioral treatment (other name: cognitive-behavioral therapy).
- Active Comparator: Education. Patients will receive 8 weekly sessions of education about fibromyalgia syndrome. Intervention: 8 weekly sessions of fibromyalgia education.

Juvenile Primary Fibromyalgia Syndrome (JPFS) is a debilitating chronic pain condition that occurs in adolescence and is characterized by persistent pain, multiple tender points, sleep difficulty, and fatigue. The cause of JPFS is unknown and there is no known cure. Children and adolescents with JPFS have difficulty with daily functioning, miss a great deal of school, and experience increased emotional distress compared to their peers. Fibromyalgia syndrome appears to be resistant to treatment in adulthood, so an early behavioral treatment for JPFS with long-term beneficial effects would be useful. This study will evaluate the efficacy of coping skills training (CST), when combined with usual medical care, in reducing functional disability, pain intensity, and depressive symptoms in adolescents with JPFS. It will also determine whether improvements can be sustained long-term. The study will last 34 weeks. Participants will be recruited from three pediatric rheumatology clinics.
Patients will be randomly assigned to one of two groups: CST plus usual medical care or education plus usual medical care. There will be 6 medical visits, spaced 4 to 5 weeks apart. In addition, patients will attend 8 individual sessions of CST or education over the first 8 weeks of the study. CST sessions will include training in cognitive-behavioral techniques of pain management for the adolescent and behavioral management techniques for their parents. Education sessions will include education on fibromyalgia and discussion about lifestyle issues, but no training in pain management procedures. Patients will be evaluated at Week 9 and will be followed for an additional 6-month maintenance phase. During this maintenance phase, adolescents will continue to receive their usual medical care and will attend 2 additional sessions of CST or education. There will be one final evaluation at the end of the maintenance phase.

Please refer to this study by its ClinicalTrials.gov identifier: NCT00086047

Locations:

- Kosair Charities Pediatric Center, Louisville, Kentucky, United States, 40202
- Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, United States, 45229
- Cleveland Clinic Foundation, Division of Pediatrics, Cleveland, Ohio, United States, 44195

Principal Investigator: Susmita Kashikar-Zuck, PhD, Children's Hospital Medical Center, Cincinnati
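The FDI interpretation bands quoted in the outcome measure above map directly onto a simple categorization rule, sketched below for illustration. The function name and example scores are hypothetical; the trial itself published no code.

```python
def fdi_category(score: int) -> str:
    """Map a Functional Disability Inventory score (0-60) to the
    interpretation bands given in the outcome-measure description."""
    if not 0 <= score <= 60:
        raise ValueError("FDI scores range from 0 to 60")
    if score <= 12:
        return "No/Mild disability"
    if score <= 29:
        return "Moderate disability"
    return "Severe disability"

# Example: a drop from 31 at baseline to 10 at six months would move a
# participant from the severe band to the no/mild band.
print(fdi_category(31), "->", fdi_category(10))
```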
Scientists at USC have created a mathematical model that explains and predicts the biological process that creates antibody diversity - the phenomenon that keeps us healthy by generating robust immune systems through hypermutation. The work is a collaboration between Myron Goodman, professor of biological sciences and chemistry at the USC Dornsife College of Letters, Arts and Sciences, and Chi Mak, professor of chemistry at USC Dornsife.

"To me, it was the holy grail," Goodman said. "We can now predict the motion of a key enzyme that initiates hypermutations in immunoglobulin (Ig) genes."

Goodman first described the process that creates antibody diversity two years ago. In short, an enzyme called "activation-induced deoxycytidine deaminase" (or AID) moves up and down single-stranded DNA that encodes the pattern for antibodies and sporadically alters the strand by converting one nitrogen base to another, which is called "deamination." The change creates DNA with a different pattern - a mutation. These mutations, which AID generates a million times more often than they would otherwise occur, produce antibodies of all different sorts, giving you protection against germs that your body hasn't even seen yet. "It's why when I sneeze, you don't die," Goodman said.

In studying the seemingly random motion of AID up and down DNA, Goodman wanted to understand why it moved how it did, and why it deaminated in some places much more than others. "We looked at the raw data and asked what the enzyme was doing to create that," Goodman said. He and his team were able to develop statistical models whose probabilities roughly matched the data, and were even able to trace individual enzymes visually and watch them work. But these were all just approximations, albeit reasonable ones.

Collaborating with Mak, however, offered something better: a rigorous mathematical model that describes the enzyme's motion and interaction with the DNA, and an algorithm for directly reading out AID's dynamics from the mutation patterns. At the time, Mak was working on the mathematics of quantum mechanics. Using similar techniques, Mak was able to help generate the model, which has been shown through testing to be accurate. "Mathematics is the universal language behind physical science, but its central role in interpreting biology is just beginning to be recognized," Mak said.

Goodman and Mak collaborated on the research with Phuong Pham, assistant research professor, and Samir Afif, a graduate student at USC Dornsife. An article on their work, which will appear in print in the Journal of Biological Chemistry on October 11, was selected by the journal as a "paper of the week." Next, the team will generalize the mathematical model to study the "real life" action of AID as it initiates mutations during the transcription of Ig variable and constant regions, which is the process needed to generate immunodiversity in human B-cells.

This research was funded by the National Institutes of Health (grants ES013192 and GM21422) and by the National Science Foundation (grant CHE-0713981).
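The modeling idea can be caricatured as a random walk along single-stranded DNA with a small chance of catalysis at each C encountered. The toy sketch below illustrates only that general concept; the step rule and deamination probability are invented for demonstration and are not the parameters of the published model.

```python
import random

def aid_scan(dna: str, steps: int = 10_000, p_deaminate: float = 0.01,
             seed: int = 1) -> list[int]:
    """Toy model: an enzyme performs a 1D random walk along ssDNA and
    sporadically deaminates C -> U at the site it currently occupies."""
    rng = random.Random(seed)
    events = [0] * len(dna)
    pos = rng.randrange(len(dna))
    for _ in range(steps):
        pos = max(0, min(len(dna) - 1, pos + rng.choice((-1, 1))))
        if dna[pos] == "C" and rng.random() < p_deaminate:
            events[pos] += 1  # record a deamination (mutation) event
    return events

spectrum = aid_scan("ATGCCGTACCCAGTC" * 4)
# A mutation spectrum like this is the kind of data such a model
# inverts to infer the enzyme's motion along the strand.
print([(i, n) for i, n in enumerate(spectrum) if n])
```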
Scientists at the Karolinska Institute along with other colleagues in Sweden have provided new information about the EphA2 receptor, which is associated with cancer. The researchers relied on DNA origami, in which a DNA molecule is shaped into a nanostructure and then used to test theories about cell signaling. Their study ("Spatial control of membrane receptor function using ligand nanocalipers") appears in Nature Methods.

It was previously known that the EphA2 receptor played a part in several forms of cancer, including breast cancer. The ligand that communicates with the receptor is known as an ephrin molecule. Researchers have been working with the hypothesis that the distance between different ligands (in this case, the distance between ephrin molecules) affects the level of activity in the communicating receptor of the adjacent cells.

The Swedish scientists set out to test this hypothesis. They used DNA building blocks to form a stable rod. "We use DNA as the construction material for a tool that we can experiment with," said Björn Högberg, Ph.D., principal investigator in the department of neuroscience. "The genetic code of the DNA in these structures is less important in this case."

The team attached ephrins to the DNA rod at various intervals, e.g., 40 or 100 nanometers apart. The DNA rods were then placed in a solution containing breast cancer cells. In the next step, the researchers looked at how active EphA2 was in these cancer cells.

"Using these 'nanocalipers' to present ephrin ligands, we showed that the nanoscale spacing of ephrin-A5 directs the levels of EphA2 receptor activation in human breast cancer cells," wrote the investigators. "Furthermore, we found that the nanoscale distribution of ephrin-A5 regulates the invasive properties of breast cancer cells. Our ligand nanocaliper approach has the potential to provide insight into the roles of ligand nanoscale spatial distribution in membrane receptor-mediated signaling."

"For the very first time, we have been able to prove this hypothesis: The activity of EphA2 is influenced by how closely spaced the ligands are on the surrounding cells," noted Dr. Högberg. "This is an important result in itself, but the point of our study is also that we have developed a method for examining how cells react to nearby cells in a controlled environment, using our custom DNA nanocalipers."

The researchers describe the cell communication as a form of Braille, where the cells somehow sense the protein patterns of nearby cells, and where the important thing is not only the amount of proteins, but to a great extent the distance between them as well. This study found that a cluster of proteins would communicate more actively than sparsely spaced proteins, even if the concentration was the same.

"This is a model that can help us learn more about the importance of the spatial organization of proteins in the cell membrane to how cells communicate with each other, something that will hopefully pave the way for a brand new approach to pharmaceuticals in the long term," added Ana Teixeira, a principal investigator at the department of cell and molecular biology. "Today, the function of the pharmaceuticals is often to completely block proteins or receptors, but it is possible that we should rather look at the proteins in their biological context, where the clustering and placement of various proteins are relevant factors for the effect of a drug. This is probably an area where there is important knowledge to obtain, and this is a way of doing it."
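To give a flavor of the underlying geometry: B-form DNA rises roughly 0.34 nm per base pair, so a designed ligand spacing translates into a base-pair offset along the origami rod. The conversion sketch below uses that standard figure for illustration only; a real origami rod typically bundles many parallel helices, so placement in the published design is more involved than on a single duplex.

```python
RISE_NM_PER_BP = 0.34  # approximate axial rise of B-form DNA per base pair

def spacing_to_basepairs(spacing_nm: float) -> int:
    """Convert a desired ligand spacing in nanometers into an
    approximate base-pair offset along a DNA duplex."""
    return round(spacing_nm / RISE_NM_PER_BP)

for spacing_nm in (40, 100):  # the ephrin spacings tested in the study
    print(f"{spacing_nm} nm is roughly {spacing_to_basepairs(spacing_nm)} bp apart")
```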
This page provides information on some common nematodes; the list is not exhaustive.

1) Ascaris lumbricoides (Large Roundworm of Man)

The egg of this nematode has a relatively thick shell wall and is highly resistant to the external environment. In addition to unembryonated eggs, embryonated eggs containing the L1 larval worms may also be seen. Infection with this roundworm is extremely common, with estimates of the annual incidence of infection exceeding 1500 million cases, or around one quarter of the world's population. In addition to the species in man, Ascaris lumbricoides, a morphologically indistinguishable species, Ascaris suum, is found in the pig. Other related genera include Parascaris in equines and Toxascaris in a variety of domesticated animals.

The adult Ascaris lumbricoides are large white, or pinkish-white, cylindrical roundworms, slightly narrower at the head. The more slender males measure between 10 to 30cm long and have a curved tail with two spicules, but no copulatory bursa. The females are very similar, being slightly larger at between 20 to 35cm long, with a vulva approximately a third of the length of the body down from the head, and a blunt tail. Both are characterised by a smooth, finely striated cuticle and a mouth which, as is characteristic of all of the ascarids (e.g. Toxocara), bears three lips, each equipped with small papillae. Internally they follow the generalised body plan of all nematodes, and have a cylindrical oesophagus opening into a flattened, ribbon-like intestine. The eggs are highly characteristic, with shells that are thick for nematodes, consisting of a transparent inner shell covered in a warty, albuminous coat.

These parasites have a direct lifecycle, with no intermediate hosts. The adult parasite lives in the lumen of the small intestine of man, usually only feeding on the semi-digested contents of the gut, although there is some evidence that they can bite the intestinal mucous membrane and feed on blood and tissue fluids. The female parasite is highly prolific, laying an estimated 2 million eggs daily. In the intestine these contain only an unembryonated mass of cells; differentiation occurs outside the host. This requires a temperature less than 30°C, moisture and oxygen, with the young L1 larvae developing after approximately 14 days. Eggs containing the L2 larvae take another week to develop before they are infective to man, and may remain viable in the soil for many years if conditions are optimal.

Infection occurs on ingestion of raw food, such as fruit or vegetables, that is contaminated with these infective eggs. The eggs then hatch in the small intestine to release the L2 rhabditiform larvae (measuring approximately 250 by 15µm in size). These do not simply grow into the adult forms in the intestine, but must first undergo a migration through the body of their host. The L2 larvae penetrate the intestinal wall, entering the portal blood stream, and then migrate to the liver, then the heart, and then, after 1 to 7 days, the lungs. Here they moult twice on the way to form the L4 larvae (measuring approximately 1.5mm long), then burrow out of the blood vessels, entering the bronchioles. From here they migrate up through the air passages of the lungs to the trachea. They then enter the throat and are swallowed, finally ending up in the small intestine where they mature and mate, to complete their lifecycle.
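The reproductive figures above imply staggering environmental contamination, which is worth making concrete. The sketch below simply multiplies out the egg-laying estimate quoted in the text; the adult lifespan and worm burden used are assumptions chosen purely for illustration.

```python
# Rough contamination estimate based on the egg-laying figure above.
eggs_per_female_per_day = 2_000_000  # estimate quoted in the text
adult_lifespan_days = 365            # assumed ~1 year; illustrative only
female_worm_burden = 5               # hypothetical modest infection

lifetime_output = eggs_per_female_per_day * adult_lifespan_days * female_worm_burden
print(f"Eggs shed by one host: {lifetime_output:,}")
# Billions of long-lived, resistant eggs from a single host helps explain
# why contamination of raw fruit and vegetables is so common.
```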
Pathology of Infection.

The majority of infections (~85%) appear to be asymptomatic, in that there is no gross pathology seen. However, the presence of these parasites appears to be associated with the same general failure to thrive in their hosts seen with many of these intestinal nematodes. In terms of more easily identified pathology, this may be divided into three areas:

- Pathology Associated with the Ingestion and Migration of Larvae

Severe symptoms of Ascaris infection may be associated with the migrating larvae, particularly in the lungs. If large numbers of these larvae are migrating through the lungs simultaneously, this may give rise to a severe haemorrhagic pneumonia. More commonly, as is the case with most infections, the haemorrhages are smaller in scale, but may still lead to breathing difficulties, pneumonia and/or fever. A complication here is that many of the parasite's proteins are highly allergenic. Because of this, the presence of the migrating larvae in the lungs is often associated with allergic hypersensitivity reactions such as asthmatic attacks, pulmonary infiltration, and urticaria and oedema of the lips.

- Pathology Associated with Adult Parasites in the Intestine

The most common symptoms of infection are due to the adult parasite and consist of rather generalised digestive disorders, such as vague abdominal discomfort, nausea and colic. These symptoms depend to some extent on the parasite burden of the host, which in severe cases may consist of many hundreds or even thousands of parasites, although such burdens are extreme. In such heavy infections the presence of many of these large parasites may contribute to malnutrition in the host, especially if the hosts (often children) are undernourished anyway. A more serious, and potentially fatal, condition may arise in heavier infections, where the mass of worms may block the intestine and need to be surgically removed. This may also occur on treatment for other intestinal nematodes such as hookworms, where the curative drug dose for these parasites irritates the ascarids.

- Pathology due to "Wandering" Adults outside of the Intestine

Adult parasites often leave the small intestine to enter other organs (sometimes in response to anti-helminthic drugs used to treat other intestinal nematode infections), where they may cause various types of pathology, sometimes with severe consequences. For example, adult Ascaris worms may migrate to the bile duct, which may then become blocked, causing jaundice and a general interference in fat metabolism. Adult parasites may also migrate to the appendix, or through the intestinal wall; both conditions may cause a fatal peritonitis, as the worms may well carry intestinal bacteria to these sites. They may, alarmingly, sometimes migrate forwards up the intestinal tract, to be either vomited up or to emerge through the nose. More seriously, if they enter the trachea they may cause suffocation.

2) Enterobius vermicularis (The Human Pinworm)

[Figure: the adult stage]

The human pinworm Enterobius vermicularis is a ubiquitous parasite of man, it being estimated that over 200 million people are infected annually. It is more common in the temperate regions of Western Europe and North America (being relatively rare in the tropics) and is found particularly in children. Samples of Caucasian children in the U.S.A.
and Canada have shown incidences of infection of 30% to 80%, with similar levels in Europe, and although these regions are the parasite's strongholds, it may be found throughout the world, again often at high incidence. For example, in parts of South America the incidence in schoolchildren may be as high as 60%. Interestingly, non-Caucasians appear to be relatively resistant to infection with this nematode. As a species, and contrary to popular belief, E. vermicularis is entirely restricted to man, other animals harbouring related but distinct species that are non-infective to humans, although their fur may be contaminated by eggs from the human species if stroked by someone with eggs on their hands. In man, anywhere where large numbers of children are gathered together (such as nurseries, play groups, orphanages etc.), especially if conditions are insanitary, is a ready source of infection, as one child may rapidly transmit the parasite to his or her fellows.

These creamy-white coloured nematodes are relatively small, the female measuring only approximately 10mm long by 0.4mm wide. The females have a cuticular expansion at their anterior ends and a long pointed tail. The male parasites, which are much less numerous than the females, are much smaller, measuring only up to 5mm long, and have a curved tail with a small bursa-like expansion and a single spicule. The head has a mouth with three small lips.

The adult parasites live predominantly in the caecum.

[Figure: transverse section of the adult parasite in situ in the intestine]

The males and females mate, and the uteri of the females become filled with eggs. The gravid females (each containing up to 15 000 eggs) then migrate down the digestive tract to the anus. From here they make regular nocturnal migrations out of the anus to the perianal region, where air contact stimulates them to lay their eggs before retreating back into the rectum. Eventually the females die, their bodies disintegrating to release any remaining eggs. These eggs, which are clear and measure ~55 by 30µm, then mature to the infectious stage (containing an L1 larva) over 4 to 6 hours. To infect the host, these eggs must typically be ingested, hatching in the duodenum. The eggs themselves are sticky and have a characteristic shape, shared with all members of the group Oxyuridea: an asymmetrical form, flattened on one side.

The larvae then undergo a series of moults as they migrate down the digestive tract. The adult worms then mature in the caecum before copulating to complete the cycle (typically 6 weeks). Occasionally the eggs hatch in the perianal region itself, the resulting L1 larvae being fully infective, crawling back through the anus and then migrating up the intestine to the caecum (retroinfection).

Pathology of Infection.

The majority of infections with this nematode are asymptomatic, although in some cases the emerging females and the sticky masses of eggs that they lay may cause irritation of the perianal region, which in some cases may be severe. As the females emerge at night, this may give rise to sleep disturbances, and scratching of the affected perianal area transfers eggs to the fingers and under the fingernails. This in turn aids the transmission of the eggs, both back to the original host (autoinfection) and to other hosts.
3) Trichinella spiralis (The Trichina Worm)

[Figure: the larval nematode, encysted in muscle]

Trichinosis, caused by infection with this nematode, is a very cosmopolitan disease, more common in temperate than tropical regions. The epidemiology of trichinosis is highly complex, due to the very low host specificity exhibited by the parasite, resulting in many zoonotic infections. In fact, man is usually regarded as an accidental host, as under normal conditions the parasite reaches a dead end here: to complete the lifecycle, the flesh of the host containing the infective larvae must be ingested by another host. In addition, rates of infection within populations may be difficult to estimate, as the females are viviparous, are found within the intestinal mucosa, and the larvae produced are immediately carried via the circulatory and lymphatic systems to the muscle fibres, within which they quickly encyst. Helminth infections are usually detected by the presence of eggs in the faeces or, in the case of the filarial nematodes, migrating larvae in the skin or blood. Neither is seen in Trichinella infections, and unless the intensity of the infection is high enough to result in clinical disease, the infection may go undetected. In fact, estimates of rates of infection within populations are often based on autopsy surveys. These surveys have indicated a marked decrease in rates of infection over the last 40 years; worldwide infections were once estimated at over 27 million. For example, surveys carried out in the United States indicated prevalences of 15% to 25% in the 1940s, whilst now the rates have been reduced to ~2%. Similar decreases have been reported in Europe.

4) Necator americanus and Ancylostoma duodenale (The Human Hookworms)

The hookworms belong to the Order Strongylida, a very large order of great interest as it contains many important pathogens of man and domesticated animals. This order is further subdivided into three superfamilies: the Strongyloidea (the hookworms in man, discussed below on this page) and two related groups, the Superfamily Trichostrongyloidea, intestinal nematodes important in many domesticated animals (e.g. Haemonchus contortus in cattle and Nippostrongylus brasiliensis in rodents), and members of the Superfamily Metastrongyloidea (the lungworms, in domesticated animals).

[Figures: the buccal cavities of Necator americanus and Ancylostoma duodenale]

In man there are two species capable of causing intestinal infections: Ancylostoma duodenale, native to parts of Southern Europe, North Africa, Northern Asia and parts of Western South America, and Necator americanus, found in Central and Southern Africa, Southern Asia, Australia and the Pacific Islands. These are very important human pathogens, it being estimated that there are 1200 million cases of hookworm infection in man annually, ~100 million of which are symptomatic infections with accompanying anaemia (see below). In addition, the larvae of several species of hookworms infecting domesticated animals may penetrate human skin, causing pathology even though they do not develop into adult parasites in man (see below).

The adult parasites are small cylindrical worms, 0.5 - 1.5cm long (Ancylostoma duodenale being slightly larger than Necator americanus). The posterior end of the male worm is equipped with a characteristic copulatory bursa, used to catch and hold the female nematode during mating. The females themselves have a vulva situated near the center of the body, slightly anterior in Necator and slightly posterior in Ancylostoma.
The anterior ends of the parasites are formed into a buccal capsule, absent in members of the other Strongylida superfamilies, by which the different genera and species within the group may be differentiated. For example, members of the genus Necator have capsules equipped with cutting plates on the ventral margins and, within the capsule itself, small dorsal teeth. In contrast, members of the genus Ancylostoma have pairs of teeth on the ventral margin of the capsule. The number of teeth varies between different species of Ancylostoma, but is usually between one and four pairs. The eggs are bluntly rounded, thin shelled, and almost indistinguishable between the different species, measuring approximately 60 by 40 µm, the eggs of Ancylostoma being slightly larger than those of Necator.

The lifecycles of all the hookworms are very similar. The eggs are passed in the faeces; once exposed to air they mature rapidly if conditions are right, with both moisture and warmth essential for development. When mature they hatch to liberate a rhabditiform L1 larva (i.e. one having an oesophagus in which a thick anterior region is connected via a necklike region to a posterior bulb). These larval nematodes feed on bacteria and organic material in the soil, where they live and grow for about two days before undergoing the first moult. After about five days of further growth they moult again, to produce a much more slender L3 larva. The L3 larva has a much shorter oesophagus, is a non-feeding form, and is the infective stage of the parasite.

Infection takes place by penetration of the skin, for example when walking with bare feet over contaminated damp soil, followed by entry into the circulatory system. Here the larvae are carried to the heart, and then the lungs. Once in the lungs, they are too large to pass through the capillary bed there. Instead they become trapped, and they burrow through the capillary epithelium, entering the air spaces. They then migrate up through the bronchi and trachea, and are swallowed. Once swallowed they pass into the intestine and bury themselves between the intestinal villi. Here they moult to form the L4 larvae, equipped with a buccal capsule allowing adherence to the gut wall. At about thirteen days post-infection they moult for the final time, producing immature adult worms. These then mature over three to four weeks (i.e. five to six weeks after infection), then mate and commence egg laying to complete the lifecycle. These parasites show a very high fecundity, female Necator americanus producing up to 10 000 eggs daily, whilst female Ancylostoma duodenale produce up to 20 000 eggs daily.

Pathology of Infection.

The pathology associated with hookworm infections may be divided roughly into two areas: firstly, the pathology associated with the presence of the adult parasite in the intestine, and secondly, the pathology associated with the penetration of, and migration of the larval worms within, the skin.

The Pathology Associated with the Adult Parasite

The adult hookworms attach themselves to the intestinal wall using their buccal capsules. Their preferred site of infestation is the upper small intestine, but in very heavy infections (where many thousands of worms may be present) the parasites may spread down as far as the lower ileum. Once attached to the intestinal wall, the hookworms penetrate blood vessels with their mouthparts and obtain nutrition by sucking blood.
A single Necator americanus will take approximately 30 µl of blood daily, whilst the larger Ancylostoma duodenale will take up to 260 µl. The gross pathology of the disease is very dependent on the intensity of infection. Light infections appear asymptomatic, but in heavy infections the continuous loss of blood leads to a chronic anaemia, with as little as 2 g of haemoglobin per 100 ml of blood in extreme cases. Experiments carried out in the 1930s showed that dogs infected with 500 Ancylostoma caninum, a species similar to the human parasites, lost nearly a pint of blood a day. This leads to a permanent loss of iron and many blood proteins as well as blood cells, which in turn has consequences for the further production of erythrocytes; these have been shown to contain less haemoglobin, as well as being reduced in size and number. This form of anaemia may be directly fatal, but more often it induces more non-specific symptoms, the most noticeable being the severe retardation in growth and development, both physical and mental, in infected children, and a general weakness and lassitude, often wrongly interpreted as "laziness".

[Figure: 1, adult parasite attached to intestinal wall; 2, migrating larvae]
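The per-worm figures above make the anaemia straightforward to quantify. The sketch below estimates daily blood and iron loss for a given worm burden; the iron content of whole blood (roughly 0.5 mg per ml) is a textbook approximation assumed here, and the burden of 100 worms is hypothetical.

```python
BLOOD_UL_PER_WORM = {"Necator americanus": 30, "Ancylostoma duodenale": 260}
IRON_MG_PER_ML_BLOOD = 0.5  # assumed textbook value for whole blood

def daily_loss(species: str, worm_burden: int) -> tuple[float, float]:
    """Return (ml of blood, mg of iron) lost per day for a given burden,
    using the per-worm figures quoted in the text above."""
    ml = BLOOD_UL_PER_WORM[species] * worm_burden / 1000
    return ml, ml * IRON_MG_PER_ML_BLOOD

for species in BLOOD_UL_PER_WORM:
    ml, fe = daily_loss(species, 100)  # hypothetical burden of 100 worms
    print(f"{species}: {ml:.0f} ml blood and {fe:.1f} mg iron per day")
```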
- FEMUR. Areas of focus based on lecture: Head, Neck, Greater Trochanter, Lesser Trochanter, Linea Aspera (posterior), Lateral Condyle, Medial Condyle, Intercondylar Notch (Fossa) (posterior).
- Part 5 - Appendicular Skeleton. Part 5 of Part 8 - Thursday Laboratory Review.
- Ex. 18 Vertebrate Anatomy - Dr. Breslin Lecture: Appendicular Skeleton.
- Appendicular Osteosarcoma in Dogs. Large dogs are at risk for bone cancer. Bone cancer typically develops in middle-aged to older dogs, so it is often confused with arthritis and may not be caught in time.
- The Appendicular Skeleton. Labeling the bones of the appendicular skeleton.
- Ex. 18: Vertebrate Anatomy - Skeletal System of a Frog. Identify the bones and know which belong to the axial and appendicular skeletons.
- Axial + Appendicular Skeleton Part I. Recorded on April 24, 2009 using a Flip Video camcorder.
- AP1 F'09: Skeletal System: Anatomy (12). Dr. O's AP1 class, F'09. Questions: 1. How many bones are there in the axial/appendicular parts of the skeleton? 2. How many bones are found in each part of the vertebral column? 3. Identify the location of each of these bones: humerus/femur/clavicle/maxillae/pubis. 4. Compare the number of phalanges in the upper and lower appendages.
- BIO 165-I5 JWard Appendicular Skeleton Parts. This is for my BIO 165-I5 class.
- Appendicular and Axial. Recorded on October 29, 2010 using a Flip Video camcorder.
- Femur: appendicular skeleton.
- Coordination Exam: Cerebrocerebellum. The third subdivision of the cerebellum is the cerebrocerebellum. This system consists of connections from the cerebral cortex (predominantly motor) to the cerebellar hemispheres and then back to the cerebral cortex, and is important in motor planning. Dysfunction of the cerebellar hemispheres results in ataxia of speech (scanning dysarthria) and ataxia of the extremities (appendicular ataxia). It is important to remember that ataxia caused by disease of the cerebellar hemispheres will be ipsilateral to the dysfunctional hemisphere. The findings of appendicular ataxia are hypotonia, decomposition of movement, dysmetria, and difficulty with rapid alternating movements (dysdiadochokinesia).
- SACRUM and COCCYX. Areas of focus based on lecture: Sacrum: vertebral bones S1-S5 fuse together during development into one bone; body, sacral c***, posterior sacral foramen (plural: foramina), median sacral crest, sacral promontory. Coccyx: vertebral bones Co1-Co4 fuse together during development into one bone.
- TIBIA (Lower Leg, Large Bone). Areas of focus based on lecture: Lateral Condyle, Medial Condyle, Tibial Tuberosity (anterior), Medial Malleolus, Anterior Crest.
- Appendicular pt 1. The Appendicular Skeleton, pt 1.
- Pectoral Girdle and Upper Limb - Appendicular Skeleton.mp4. More videos, downloadable study guides, class notes, live online extra-help classes, online practice tests and more. It's FREE to join.
- Cat Skeleton ID - Appendicular Skeleton. How to identify bones in the forelimbs and hindlimbs of the cat.
- BIO165JCC - Appendicular skeleton. BIO 165, naming the appendicular skeletal parts.
- Part 4 - Appendicular Skeleton. Part 4 of Part 8 - Thursday Laboratory Review.
- Appendicular pt 3. The Appendicular Skeleton, pt 3.
- Biology B: Unit 8 - Assessment 2 Preparation. This 10-minute review of the key topics in unit 8 will help you prepare for this week's exam. Please take careful notes and use your text to review.
- Appendicular Muscles - Arm. Brief description of where the arm muscles are located.
- Parts of the Pelvic Girdle. Me & Ricky identifying the os coxa, and me & Chad identifying it on the appendicular skeleton. As follows: acetabulum, obturator foramen, ilium, iliac crest, iliac fossa, anterior superior iliac spine, articular surface, anterior inferior iliac spine, posterior superior iliac spine, posterior inferior iliac spine, sciatic notch, ischium, ischial tuberosity, ischial spine, pubis, pubic symphysis.
- Appendicular pt 2. The Appendicular Skeleton, pt 2.
- Parts of the Scapula. Me & Ricky identifying the scapula, and me & Chad identifying it on the appendicular skeleton. As follows: spine, acromion process, coracoid process, superior border, medial border, lateral border, supraspinous fossa, infraspinous fossa, acromioclavicular facet, subscapular fossa, glenoid cavity.
- Human Skeleton. An animation that helps in identifying the parts of the human skeleton, both axial and appendicular, and the joints of the body; useful for zoologists.
- appendicularskeletonforMrA. Naming parts of our appendicular skeleton.
- Axial + Appendicular Skeleton Part II. Recorded on April 24, 2009 using a Flip Video camcorder.
- Appendicular Muscles - Dixon. Appendicular muscle 2.
- Biology: The Appendicular Skeleton. (Excerpt; see source for the full video.)
- OS COXAE (INNOMINATE). The following are the areas of focus based on lecture notes: acetabulum, obturator foramen, iliac crest, anterior-superior iliac spine, posterior-superior iliac spine, greater sciatic notch, ischial tuberosity, symphysis pubis. Differences between male and female: Pubic Brim (Pubic Inlet), Pubic Angle, Pelvic Outlet.
- Part 3 - Axial Skeleton. Part 3 of Part 8 - Thursday Laboratory Review.
- Abnormal Coordination Exam; Finger-to-Nose. Undershooting (hypometria) and overshooting (hypermetria) of a target (dysmetria), and the decomposition of movement (the breakdown of the movement into its parts, with impaired timing and integration of muscle activity), are seen with appendicular ataxia.
- The Nervous & Skeletal Systems - High School Biology Videos (Video/DVD); Anatomy & Physiology Videos. Educational biology videos: learn about the axial skeleton and the appendicular skeleton, identify the bones, and cover the nervous system and neurons. Virtual Science University.
- 5 Skeleton 3 Appendicular. The appendicular skeleton.
The primitive heart tube is composed of an outer myocardial and an inner endocardial layer that will give rise to the cardiac valves and septa. Specification and differentiation of these two cell layers are among the earliest events in heart development, but the embryonic origins and genetic regulation of early endocardial development remain largely undefined. We have analyzed early endocardial development in the zebrafish using time-lapse confocal microscopy and show that the endocardium seems to originate from a region in the lateral plate mesoderm that will give rise to hematopoietic cells of the primitive myeloid lineage. Endocardial precursors appear to rapidly migrate to the site of heart tube formation, where they arrive prior to the bilateral myocardial primordia. Analysis of a newly discovered zebrafish Scl/Tal1 mutant showed an additional and previously undescribed role of this transcription factor during the development of the endocardium. In Scl/Tal1 mutant embryos, endocardial precursors are specified, but migration is severely defective and endocardial cells aggregate at the ventricular pole of the heart. We further show that the initial fusion of the bilateral myocardial precursor populations occurs independently of the endocardium and tal1 function. Our results suggest early separation of the two components of the primitive heart tube and imply Scl/Tal1 as an indispensable component of the molecular hierarchy that controls endocardium morphogenesis.

In its earliest functional form, the embryonic heart of all vertebrates is a simple linear tube consisting of two cell types. An outer muscular cell layer called the myocardium surrounds an inner vascular cell layer called the endocardium that connects the heart to the vascular system. The integration of both cell types is an important step during heart development, but the formation of the endocardial component of the heart tube is poorly understood. Here, we analyze the formation of the endocardium in the zebrafish embryo and show, using time-lapse imaging, that it is a highly dynamic structure. In addition, we have identified a zebrafish mutant with a specific defect during endocardial development. This defect is caused by a mutation in T cell acute leukemia 1, a gene that, when misexpressed, causes many cases of childhood leukemia. Here, we show an additional role for this gene during heart development. In mutant embryos, both endocardial and myocardial precursors are specified, but integration of both cell types does not occur properly due to defective migration of the endocardial precursors. Given the many interactions that occur between the endocardium and the myocardium, our results will provide a more comprehensive understanding of heart development.

Citation: Bussmann J, Bakkers J, Schulte-Merker S (2007) Early Endocardial Morphogenesis Requires Scl/Tal1. PLoS Genet 3(8): e140. doi:10.1371/journal.pgen.0030140

Editor: Mary Mullins, University of Pennsylvania School of Medicine, United States of America

Received: January 15, 2007; Accepted: July 9, 2007; Published: August 24, 2007

Copyright: © 2007 Bussmann et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: J. Bussmann is supported by a Boehringer Ingelheim Fonds Ph.D. scholarship.

Competing interests: The authors have declared that no competing interests exist.
Abbreviations: bHLH, basic helix-loop-helix; hpf, hours post fertilization; wt, wild type

The primitive heart tube is the first functional organ in the vertebrate embryo and is composed of a myocardial tube lined by an inner endothelial layer called the endocardium. Significant progress has been made towards elucidating the morphogenetic events and transcriptional control underlying patterning of the myocardium. However, the morphogenetic events and the transcription factors involved in early development of the endocardium remain largely undefined. In fact, the specific embryonic origin of the future endocardial cells and their relationship with the future myocardial cells is still unclear.

Results obtained using in vitro differentiation of embryonic stem cells and analysis of different mesodermal or cardiac cell lines [4,5] suggest the development of both cardiac lineages from bipotential progenitors during heart field formation, and indeed bipotential cells have been identified in the early mouse embryo at a single-cell level. However, lineage tracing experiments in the avian embryo have shown restricted myocardial or endocardial potential of precardiac cells, with both types of precursors intermingled during their migration towards the site of the primitive heart tube. A similar sequestration of endocardial and myocardial cells is observed in the zebrafish, where cells at late blastula stages give rise to both endocardial and myocardial cells. These cells also give rise to the head vasculature and an anterior population of hematopoietic cells of the primitive myeloid lineage, and become restricted in their potential to form either endocardium or myocardium during early gastrulation stages.

After endocardial and myocardial precursors have been specified, complex morphogenetic movements occur that shape the primitive heart tube. Again, most studies have focused on the morphogenesis of the myocardium. One of the early markers of myocardial differentiation is the transcription factor nkx2.5, expression of which is detected in the zebrafish in bilateral populations starting at the ten-somite stage. Subsequently, these bilateral myocardial fields merge in the midline at the 18-somite stage, with the first contacts occurring in a relatively posterior position. The myocardial precursors posterior to the first junction, as well as the most anterior portions, then come into contact with each other, creating a ring with a central circle devoid of myocardial cells that has been suggested to contain the endocardial precursors [10,11]. Around these stages, expression of cardiac contractile genes, such as cardiac myosin light chain (myl7), starts. Finally, myocardial cells move to the left and, during a complex and poorly understood process, convert the myocardial disc into the primitive heart tube.

The basic helix-loop-helix (bHLH) transcription factor Scl/Tal1, hereafter referred to as Tal1, an oncogene originally identified in childhood leukemias, is expressed in endothelial, endocardial and hematopoietic, but not myocardial, cells during early murine development. Tal1 forms a transcriptional complex containing Lmo2 to regulate expression of target genes. Gain of function of Tal1 in combination with Lmo2 leads to an expansion of endothelial and hematopoietic cells at the expense of other nonaxial mesodermal components, including myocardial precursors, suggesting an important role for these genes during endocardial/myocardial specification and differentiation.
Gene targeting studies in the mouse have revealed that Tal1 is essential for the formation of all blood lineages. Tal1 loss of function also leads to considerably less well-defined defects during endothelial development [15–17]. Knockdown of tal1 in zebrafish showed a conserved function in generating hematopoietic cells. In addition, tal1 knockdown embryos display defects during arterial versus venous differentiation [18–20].

Here, we analyze early endocardial morphogenesis in the zebrafish and show that the endocardium appears to arise from the anterior lateral plate mesoderm, from a region that will give rise to hematopoietic cells of the primitive myeloid lineage. From there, presumptive pre-endocardial cells seem to migrate rapidly posteriorly and towards the midline, where they are later joined by the bilateral myocardial primordia. The myocardial primordia then fuse over the endocardial layer to form the disc that subsequently gives rise to the primitive heart tube. We show that tal1 has an additional and previously undescribed role in endocardial development. In tal1 mutants, migration of endocardial but not myocardial precursors is defective, leading to severe outflow tract stenosis and defects in the formation of the primitive heart tube. Our results suggest a separate origin for the two components of the primitive heart tube and show an indispensable role for Scl/Tal1 during endocardial morphogenesis.

Materials and Methods

The tal1t21384 allele was isolated in a genetic screen for ENU-induced mutations affecting vascular development. The transgenic line TG(fli1a:gfp)y1 was obtained from Brant Weinstein (Bethesda, Maryland). The transgenic line TG(vegfr4:gfp)s843 was obtained from Didier Stainier (San Francisco, California).

Meiotic Mapping and Sequencing

The t21384 mutation was mapped to Chromosome 22 using standard simple sequence length polymorphism mapping. For sequencing of the tal1 gene, genomic DNA was extracted from 12 wild-type (wt) and 12 mutant tal1t21384 embryos. The three coding exons of the zebrafish tal1 gene were PCR amplified and sequenced on both strands. A mutation in the third coding exon was confirmed in a panel of 580 single mutant and 53 single sibling embryos using PCR with primers tal1_ex3_fw (5′-CAACATTAATGCACATCTTGG-3′) and tal1-ex3-rev (5′-TCTACCTGGTGGTCTTCCTC-3′) and sequenced using primer tal1_ex3_seq (5′-TGGGCGAACAATCAATTTAG-3′).

Full-Length Sequence of kdr, flt1, and flt4

We used a unidirectionally cloned, oligo dT–primed SMART cDNA library constructed from 2- and 3-d-old zebrafish larvae using Advantage2 DNA Polymerase Mix (Clontech, http://www.clontech.com/). Primers used for identification of kdr, flt1, and flt4 3′ and 5′ ends are available upon request.

Phylogenetic and Synteny Analysis

The MEGA3 package was used for phylogenetic analysis. Amino acid sequences were aligned with ClustalW (the resulting alignment is available upon request) and a phylogenetic tree was constructed using a neighbor-joining algorithm. The resulting tree was tested using 1,000 bootstrap resamplings. Pairwise distances were calculated with the PAM substitution matrix. Identification of vertebrate VEGF receptors and synteny analysis were performed using the Ensembl database (http://www.ensembl.org), release 44, April 2007.
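For readers who want to reproduce this kind of distance-based analysis without MEGA3, the same workflow (substitution-matrix distances, neighbor joining, bootstrap consensus) can be sketched with Biopython. The sequences below are short toy fragments standing in for the full-length receptor alignment, and the replicate count is reduced from the 1,000 resamplings used here; this is a minimal sketch, not the authors' pipeline.

```python
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

# Toy aligned protein fragments standing in for the VEGF receptor alignment.
aln = MultipleSeqAlignment([
    SeqRecord(Seq("MKVLAAGIVPLLA"), id="kdr"),
    SeqRecord(Seq("MKILAAGLVPLLA"), id="vegfr4"),
    SeqRecord(Seq("MRVLSAGIVQLLA"), id="flt1"),
    SeqRecord(Seq("MRALSTGIVQMLA"), id="flt4"),
])

# PAM-based pairwise distances and neighbor joining, as in the Methods.
calculator = DistanceCalculator("pam250")
constructor = DistanceTreeConstructor(calculator, "nj")

# Majority-rule consensus over bootstrap replicates (100 here; the paper
# used 1,000 resamplings).
tree = bootstrap_consensus(aln, 100, constructor, majority_consensus)
print(tree)
```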
Whole-Mount In Situ Hybridization and Immunohistochemistry

Single and double in situ hybridizations were performed essentially as described, except that labeled probes were purified using NucleoSpin RNA clean-up columns (Macherey-Nagel, https://www.macherey-nagel.com/) and embryos were transferred to 15-mm Costar Netwells (Corning, http://www.corning.com/) after hybridization. The substrate used in the second color reaction was INT ([4-iodophenyl]-3-[4-nitrophenyl]-5-phenyl-tetrazolium chloride) (Roche Applied Science, http://www.roche-applied-science.com/). Previously described probes used were tal1, gata1, spi1, runx1, hey2, flk1/kdra, ephrin B2a, dab2, nkx2.5, amhc, and cmlc2 [9,12,26–34]. Probes for flt4, kdr, and flt1 were transcribed from the 5′ part of the respective cDNAs that code for the extracellular domain of the proteins. Immunohistochemistry using anti-GFP (Torrey Pines Biolabs, http://www.chemokine.com/) and anti-tropomyosin (CH1; Sigma, http://www.sigmaaldrich.com/) antibodies was performed as described. Secondary antibodies were Alexa555-anti-rabbit and Alexa647-anti-mouse (Invitrogen, http://www.invitrogen.com/).

Full-length tal1 mRNA was transcribed from linearized plasmids using the mMessage-mMachine kit (Ambion, http://www.ambion.com/) as described. mRNA injections were done in a volume of 1 nl at the one-cell stage in Milli-Q (http://www.millipore.com/) water at a concentration of 20 pg/embryo.

Embryos were mounted in 0.25% agarose in a six-well culture plate with a cover slip on the bottom of the well and imaged with a Leica TCS SP confocal microscope (Leica Microsystems, http://www.leica-microsystems.com/) using a 40× dry objective (vegfr4:gfp) or a 10× dry objective with 2× digital zoom (fli1a:gfp). Maximal z-projections of 40–50 slices at 4 μm per slice were compiled using ImageJ software (http://rsb.info.nih.gov/ij/). Time points were recorded every 5 min (fli1a:gfp) or 7.5 min (vegfr4:gfp) for 6–8 h. A heated stage was employed to keep the embryos at approximately 28.5 °C.

Results

A Truncating Mutation in Zebrafish tal1

In a large-scale forward genetic screen, we identified a mutant (t21384) that showed a severe reduction in endothelial alkaline phosphatase activity at 4 d post fertilization, particularly in the region of the dorsal aorta. Until 26 h post fertilization (hpf), the general morphology of mutant embryos was indistinguishable from that of their wt siblings. At this time, however, the pericardium became edematous (Figure 1A), even though heartbeat was initiated normally. No erythrocytes were observed, and embryos consequently lacked circulation. To identify the molecular lesion responsible for the t21384 phenotype, we used simple sequence length polymorphism mapping to position the mutation on Chromosome 22. Single-embryo mapping using a limited number of mutant embryos (n = 96) positioned the mutation at 28.0 cM (+/− 0.5 cM), between simple sequence length polymorphism markers z21515 and z938, closely linked to the tal1 gene (27.89 cM). The combination of endothelial and hematopoietic defects seen in t21384 mutants had also been observed in mouse Tal1 knockouts, as well as in a morpholino knockdown of tal1 in zebrafish [18–20]. Therefore, tal1 was a very likely candidate gene. Sequencing of the three coding exons of the zebrafish tal1 gene in mutant and sibling embryos revealed an A→T transversion in the third and final coding exon (Figure 1B). The mutation resulted in a K→ochre nonsense mutation at position 183 of the protein.
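As a consistency check on the reported lesion: a single A→T change can convert a lysine codon into an ochre (TAA) stop only if the codon is AAA and the change hits its first position. A minimal sketch of that inference; the codon identity is deduced from the reported K→ochre change, not read from the published trace.

```python
# Standard genetic code entries relevant to the K183X call.
CODONS = {"AAA": "K", "AAG": "K", "TAA": "*", "TAG": "*"}

wt_codon = "AAA"                # the only Lys codon one A->T step from ochre
mut_codon = "T" + wt_codon[1:]  # A->T transversion at the first position

assert CODONS[wt_codon] == "K" and CODONS[mut_codon] == "*"
print(f"{wt_codon} -> {mut_codon}: Lys183 -> ochre stop (K183X)")
```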
The resulting putative protein had a deletion of the highly conserved bHLH domain, including the DNA-binding basic region (Figure 1C).

Figure 1. A Truncating Mutation in the Zebrafish tal1 Gene
(A) Live micrographs of wt and tal1t21384 mutant embryos at 56 hpf. Mutant embryos have slightly curved tails, smaller eyes, and pericardial edema.
(B) Electropherograms of wt (+/+), heterozygous (+/−), and tal1t21384 homozygous genomic DNA from bp 7–17 of the third tal1 coding exon. The A→T transversion (arrow) mutates lysine 183 to an ochre (TAA) stop codon (K183X).
(C) Schematic diagram of the primary Tal1 protein structure. The DNA-binding bHLH domain is indicated in yellow. The tal1t21384 mutation deletes the Tal1 protein C terminus, including the complete helix-loop-helix domain.
(D) The tal1t21384 mutation leads to severe defects in intersomitic vessel formation at 36 hpf, visualized in a fli1a:gfp transgenic background. Early endothelial patterning can be rescued by injecting 20 pg wt tal1 mRNA into mutant embryos.
(E) Loss of tal1 expression in tal1t21384 mutant embryos. In situ hybridization for tal1 shows a loss of tal1 expression in erythroid cells and a reduction of tal1 expression in the spinal cord. tal1 expression is retained at normal levels in some hematopoietic/endothelial progenitor cells in the tail (arrowheads) and in some mesenchymal cells of unknown nature just dorsal to the yolk extension (arrows). These bilateral cell populations lie ventral to the pronephric tubules and lateral to the gut endoderm (insets). Also note the loss of tal1 expression in the ventral wall of the dorsal aorta, which contains, in wt embryos, the definitive hematopoietic stem cells.
(F) Loss of hematopoiesis in tal1t21384 mutant embryos. In situ hybridization for hematopoietic markers at 26 hpf shows a loss of primitive erythroid cells (gata1), primitive myeloid cells (pu.1), and definitive hematopoietic stem cells (runx1).
doi:10.1371/journal.pgen.0030140.g001

Using this SNP, we also examined the degree of linkage and observed no recombinants for this mutation in 580 mutant embryos tested (genetic distance < 0.09 cM). In genotyping embryos after in situ hybridization or immunohistochemistry, we have never observed a mutant genotype in a sibling embryo. We conclude that the phenotype is fully penetrant (see also Figure 2). To further confirm that the mutation of tal1 was the defect underlying the t21384 mutant phenotype, we forced expression of tal1 by injecting 20 pg of capped wt mRNA into embryos from a cross between heterozygous individuals carrying the fli1a:gfp transgene to visualize endothelial cells. In uninjected mutant embryos, vascular patterning was severely affected, leading to a loss of intersomitic vessels. The defect in vascular patterning was rescued by injection of wt tal1 mRNA (Figure 1D), as were the hematopoietic and the endocardial defects (see below).

Figure 2. Defects in Endocardial Development in tal1t21384 Mutants
(A) Confocal images of the head region from embryos transgenic for fli1a:gfp at 32 hpf. Arrows indicate the location of the endocardium. In the wt endocardium, a lumen is visible, which is absent in tal1t21384 mutants.
(B) Transversal sections of wt and tal1t21384 mutant heart ventricles at 56 hpf stained with hematoxylin/eosin. In wt embryos, the endocardium forms an epithelium attached to the myocardium, and blood cells are visible within the heart tube. In the tal1t21384 mutant heart, the endocardium fails to form an epithelium and is only present in the heart ventricle.
(C) Rescue of endocardial defects. Maximum z-projection of a stack of confocal images of embryos carrying the vegfr4:gfp transgene. Embryos were divided into three classes based on their endocardial phenotype. Class 1: wt length of the endocardium with normal tube formation (wt phenotype). Class 2: wt length of the endocardium but no tube formation (intermediate phenotype). Class 3: short endocardium with no tube formed (mutant phenotype). All embryos homozygous or heterozygous for the wt allele are in class 1, whereas all embryos homozygous for the tal1t21384 allele are in class 3, showing the high penetrance of this phenotype. In 43 out of 54 embryos homozygous for the tal1t21384 allele (83%), the endocardial phenotype can be rescued to class 1 through injection of low amounts of wt tal1 mRNA (20 pg per embryo), whereas most of the remaining embryos (8/54, 15%) display an intermediate phenotype (class 2).
doi:10.1371/journal.pgen.0030140.g002

Although the t21384 mutation resided in the final coding exon of the tal1 gene, and the mRNA was therefore unlikely to be subjected to nonsense-mediated decay, we observed reduced expression of tal1 mRNA at 28 hpf, as revealed by in situ hybridization (Figure 1E). Expression of tal1 was retained in the ventral mesenchyme of the tail, a region that has been hypothesized to contain hematopoietic progenitors. In addition, we consistently observed a bilateral population of tal1-expressing cells above the yolk extension. These cells resided in the mesenchyme ventral to the pronephric tubule and lateral to the developing gut tube (Figure 1E, inset). To confirm the loss of hematopoietic lineages, we performed in situ hybridization for genes that are required for the formation of the two major primitive hematopoietic lineages in the early embryo, as well as those that are required for the formation of definitive hematopoietic stem cells. Consistent with data obtained by morpholino injection [18–20], we showed that development of the primitive erythroid lineage (gata1) as well as the formation of definitive hematopoietic stem cells (runx1) was lost. Similarly, development of the primitive myeloid lineage was severely affected (Figure 1F), although some spi1-expressing cells were observed in the head at 28 hpf (unpublished data). Based on the tight linkage, the stop codon in the tal1 gene, and the rescue of all phenotypic aspects in mutant embryos through injection of wt mRNA, we conclude that t21384 encodes tal1, and we hereafter refer to the mutant allele as tal1t21384.

Tal1 Mutants Display Defects in Early Endocardial Development

In addition to the defects observed in the hematopoietic and endothelial lineages (Figure 1D and see below), we observed in tal1t21384 mutants a strong defect in the morphogenesis of the heart that has not been previously described in the zebrafish or mouse. Although myocardial differentiation does not appear to be affected and heartbeat is initiated normally, the endocardial cells in tal1t21384 mutants do not form a single cell layer lining the myocardium and do not form atrial endocardium. Instead, endocardial cells aggregate at the arterial pole of the heart, leading to complete ventricular stenosis (Figure 2A and 2B). Later, concentric growth of the myocardium is defective and no heart valves are formed, consistent with an important role for the endocardium in these processes [37,38].
This phenotype was always found in combination with the loss of primitive hematopoiesis, and both aspects of the phenotype could be efficiently rescued by injecting wt tal1 mRNA, showing that both phenotypes are specific to the loss of tal1 function (Figure 2C).

Early Endocardial Precursors Express a VEGF Receptor Gene That Has Been Lost during Mammalian Evolution

To examine the early endocardial defects observed in tal1t21384 mutants, we aimed to develop several markers for endocardial morphogenesis. A previous study used the expression of a zebrafish vascular endothelial growth factor (VEGF) receptor homologue, proposed to be the zebrafish Flk1/KDR orthologue, to delineate early endocardial development in the zebrafish. However, the murine Flk1 gene is expressed in multiple nonendothelial cell types [39,40], and Flk1+ progenitors give rise to beating cardiomyocytes. The observed endothelial-specific expression of zebrafish flk1/kdra at early developmental stages is therefore surprising. This prompted us to reassess the phylogenetic relationships of the zebrafish VEGF receptors. We obtained the full-length sequence for all zebrafish VEGF receptors: flk1/kdra, flt1, flt4, and kdrb [31,41–43], of which flk1/kdra and kdrb were proposed to be the result of a whole-genome duplication event in teleost fish. However, we identified likely orthologues for all four zebrafish genes, including both flk1/kdra and kdrb, in the genomes of Xenopus tropicalis, chick, platypus, and opossum. Phylogenetic and synteny analyses of the three human VEGF receptors FLT1, KDR, and FLT4 and the four receptors of zebrafish, chick, and opossum both supported the hypothesis that zebrafish flk1/kdra and the novel chick and opossum VEGF receptor genes in fact represent a separate VEGF receptor class that was lost during mammalian evolution (Figure 3A and 3B). In addition, this analysis showed that the gene previously published as kdrb is in fact the KDR orthologue. Given these results, we propose the use of VEGF receptor 4 to designate the novel class of vertebrate VEGF receptors, and will use vegfr4 instead of flk1/kdra and kdr instead of kdrb for the remainder of this manuscript.

Figure 3. Zebrafish vegfr4(flk1) Represents a New Class of Nonmammalian VEGF Receptors
(A) Rooted neighbor-joining tree of vertebrate VEGF receptors. Different colors represent different classes of VEGF receptors. Note the clear separation of zebrafish and other teleost fish vegfr4, and of the novel frog, chick, opossum, and platypus VEGF receptor genes, from the three other classes of vertebrate VEGF receptors. The bootstrap value of the VEGFR4 node is indicated (1,000 times out of 1,000 iterations).
(B) Synteny analysis of vertebrate VEGF receptors using human, mouse, chick, and zebrafish genome assemblies. Colors used are similar to those in (A). Question marks represent missing orthologous genes, potentially due to gaps in the chick genome assembly. Clear syntenic relationships of all vertebrate VEGF receptors were observed, indicating duplication from a primordial gene cluster in a primitive chordate. Mammals have lost the zebrafish vegfr4 orthologue, coinciding with the emergence of the X-chromosome inactivation center XIST in the same genomic locus.
(C) Expression of zebrafish VEGF receptors in the heart region at 26 hpf, dorsal view, anterior to the top. Three VEGF receptors are expressed in the endocardium: vegfr4, kdr, and flt1 (arrowheads). Note the absence of flt4 expression in the endocardium (asterisk).
At this stage, both vegfr4 and kdr are expressed at high levels in all endothelial cells, whereas flt1 and flt4 have mutually exclusive expression patterns: flt1 is expressed in all arteries (AA1, first aortic arch; LDA, lateral dorsal aorta; and PICA, primitive internal carotid artery), whereas flt4 is expressed in all veins (MCeV, middle cerebral vein [which is largely out of the focal plane of this picture]). PHBC, primordial hindbrain channel; PMBC, primordial midbrain channel.
doi:10.1371/journal.pgen.0030140.g003

Expression analyses revealed that kdr was the first VEGF receptor expressed during development (see Figure S1). vegfr4, flt1, and kdr, but not flt4, are expressed in the endocardium of the heart at 26 hpf (Figure 3C). Importantly, and consistent with earlier results, the expression of vegfr4 was restricted to endothelial precursors and blood vessels during all stages examined and could be used as a marker for endocardial development.

Early Endocardial Development in the Zebrafish

To understand the endocardial defects in tal1t21384 mutants, we first characterized normal endocardial development in the zebrafish. We used time-lapse confocal microscopy in the vegfr4:gfp transgenic background to analyze the morphogenetic movements during early endocardial development. Consistent with vegfr4 mRNA expression, this transgene was expressed in all endothelial cells, including the endocardium, but not the myocardium, of the primitive heart tube, and was therefore used to follow endocardial precursors during their migration, although it is important to note here that higher-resolution tracking of single cells or fate mapping will be required to definitively address the origin and migratory path of endocardial precursors. The earliest time point at which fluorescence was detected was between the 10- and 12-somite stage (14–15 hpf). At this stage, the transgene was expressed in the anterior and posterior lateral plate mesoderm, representing the endothelial and hematopoietic precursors. Between the 12- and 14-somite stage (15–16 hpf), the anterior lateral plate mesoderm moved medially, with more gfp-positive cells in the posterior region of the anterior lateral plate mesoderm. This region later formed part of the head vasculature, the primitive myeloid cells, the anterior dorsal aortas and, importantly, the endocardium and aortic arches (see Video S1 and Figure 4A). The coexpression of vegfr4:gfp in both the endocardium and the head vasculature suggests a common origin of these cell populations within the anterior lateral plate mesoderm. Both our marker analysis and the time-lapse imaging are consistent with that notion; however, in the absence of single-cell tracking, it is not completely conclusive. Both of these lineages seem to arise as bilateral populations at the 14-somite stage (dashed lines in Figure 4A). The presumed endocardial precursors then rapidly migrated posteriorly and fused between the 15- and 18-somite stage (16.5–18 hpf). Fusion of these endocardial precursor populations initiated at the anterior side, progressed in a posterior direction, and was finished by 18 hpf. At the same time, further posterior migration of endocardial cells occurred. Finally, a complex leftward movement of the endocardial primordium occurred to position the endocardial component of the primary heart tube at the left side of the embryo between the 22- and 26-somite stages.
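As an aside on staging: the stage/time pairs quoted in this section (10–12 somites at 14–15 hpf, 15–18 somites at 16.5–18 hpf) fit a simple linear rule of roughly two somites forming per hour at 28.5 °C. A small helper encoding that approximation; the linear fit is ours, offered for convenience, not a staging standard taken from this paper.

```python
def somites_to_hpf(somites: float) -> float:
    """Approximate hours post fertilization for embryos past the 10-somite
    stage, anchored at 10 somites ~ 14 hpf with ~2 somites forming per hour
    at 28.5 C (a linear fit to the stage/time pairs quoted in the text)."""
    return 14.0 + (somites - 10.0) / 2.0

for stage in (10, 14, 18, 22, 26):
    print(f"{stage}-somite stage ~ {somites_to_hpf(stage):.1f} hpf")
```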
Although most endocardial cells moved slowly and as an epithelial sheet at these stages, additional single vegfr4:gfp-positive cells that were separate from the endocardium moved rapidly laterally and anteriorly (Figure 4A, red dashed line).

Figure 4. Migration of Endocardial Precursors in wt Embryos
(A) Embryos transgenic for vegfr4:gfp were subjected to time-lapse confocal microscopy, revealing rapid endocardial migration prior to heart tube formation. A movie demonstrating this process can be viewed in Video S1. Twelve individual frames from this movie at the indicated stages and time points are shown in (A), with white dashed lines indicating the position of (pre-)endocardial cells. Frames 1–5 show the fusion of the bilateral endocardial precursors between the 14- and 16-somite stages. Frames 5–8 show the posterior migration of endocardial cells to cover the lateral and posterior regions of the cardiac disc; note the posterior migration of the paired lateral dorsal aortas between the 18- and 22-somite stages (white arrows). The apex of the endocardial disc (pink dashed line) appears to be constricted below the aortic arches between the 18- and 22-somite stages (frames 7–10). A leftward movement of the endocardium is visible between the 20- and 26-somite stages (frames 9–12) and coincides with the appearance of single vegfr4:gfp-positive cells lateral to the remaining endocardium (red dashed line). Also note the migration of the venous posterior hindbrain channels (red arrows) between the 22- and 26-somite stages (frames 10–12).
(B) Relative locations of endocardial and myocardial precursors during fusion of the endocardial precursor populations, revealed by two-color in situ hybridization showing cdh5 (blue, endocardium) and nkx2.5 (red, myocardium) expression. The bilateral populations of endocardial precursors (arrows) are located anterior to the myocardial precursors until the 14-somite stage, then migrate medially and posteriorly to assume a position between the myocardial precursors at the 18-somite stage.
(C) Embryos transgenic for fli1a:gfp were subjected to time-lapse confocal microscopy, revealing slow medial movement of gfp-positive cells between the six- and 12-somite stages (frames 1–4) and rapid migration starting at the 14-somite stage. A movie demonstrating this process can be viewed in Video S2.
doi:10.1371/journal.pgen.0030140.g004

To assess the relative positions of endocardial and myocardial precursors during fusion of the endocardial precursor populations, we performed two-color in situ hybridization using precisely staged embryos and small time intervals. The endocardial precursors, as well as precursors of the head vasculature, were marked by expression of cadherin 5 (cdh5), and the myocardium by expression of nkx2.5 (Figure 4B). In this way, we showed that VE-cadherin-expressing cells, which include the endocardial precursors, do not express nkx2.5 and are found immediately anterior to the myocardial precursors in the lateral plate mesoderm at the 14-somite stage, when migration begins. In addition, these data confirmed that endocardial, and not myocardial, cells are the first to arrive at the midline. Finally, to confirm the results obtained with the vegfr4:gfp transgenic line and to analyze the movements of endocardial precursors before the 14-somite stage, we used the fli1a:gfp transgenic line, which also expresses a transgene specifically in endothelial cells, including the endocardium (Figure 4C and Video S2).
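Movies such as Videos S1 and S2 lend themselves to simple quantification: collapse each time point's z-stack to a maximum projection (as done for the still frames) and measure frame-to-frame displacement of a tracked cell. A minimal NumPy sketch; the stack dimensions and 7.5-min interval follow the imaging settings described in the Methods, while the movie data and centroid coordinates are synthetic placeholders, with the tracking itself assumed to be done by hand or with dedicated software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 4-D movie (time, z, y, x), standing in for ~45-slice confocal stacks.
movie = rng.poisson(3.0, size=(4, 45, 64, 64)).astype(np.float32)

# Step 1: maximum-intensity projection along z for each time point.
projections = movie.max(axis=1)          # shape: (4, 64, 64)

# Step 2: migration speed from a hand-tracked centroid (x, y in micrometers).
track = np.array([[120.0, 80.0], [118.5, 86.0], [117.0, 93.5], [116.0, 101.0]])
dt_min = 7.5                             # vegfr4:gfp frame interval in minutes
step_lengths = np.linalg.norm(np.diff(track, axis=0), axis=1)
print(f"mean speed: {(step_lengths / dt_min).mean():.2f} um/min")
```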
GFP fluorescence in this line can be detected from before the six-somite stage and can thus be used to follow tissue movements prior to endocardial migration. In addition to the endocardium, this transgene is expressed in the developing primitive myeloid population, as well as in the pharyngeal mesoderm. Time-lapse imaging of this line confirmed the results obtained with the vegfr4:gfp transgenic line between the 14- and 20-somite stage. Before this stage, the anterior lateral plate mesoderm gradually moved medially. This analysis also suggests a close association of the endocardial and primitive myeloid cell populations, as macrophages (identified based on their motility and on being fli1a:gfp positive) start migrating from the 12-somite stage from a region that also includes the endocardial precursors.

Early Endocardial Development in the tal1t21384 Mutant

To assess the timing of endocardial defects in tal1t21384 mutants, we analyzed mutant embryos carrying the vegfr4:gfp transgene and performed time-lapse confocal microscopy, as for wt embryos (Figure 5A and Video S3). In mutant embryos, fluorescence was first detected between the ten- and 12-somite stage, indistinguishable from wt embryos. The bilateral presumptive endocardial precursors initiated migration at the 14-somite stage normally, but there was a severe defect in the continued posterior migration of endocardial precursors. Whereas wt endocardial precursors migrated rapidly and formed a single cell layer in the embryonic midline, endocardial precursors in tal1t21384 mutants aggregated in an anterior position (Figure 5A and Video S3).

Figure 5. Migration of Endocardial Precursors and Heart Tube Formation in wt and tal1t21384 Mutant Embryos
(A) tal1t21384 mutant embryos transgenic for vegfr4:gfp were subjected to time-lapse confocal microscopy, revealing defects during early endocardial precursor migration. A movie demonstrating this process can be viewed in Video S3. Six individual frames from this movie are shown in (A). Whereas the initial formation of bilateral endocardial precursors is not affected, posterior migration is disturbed, and endocardial precursors remain attached in a relatively anterior position. Note that migration of the paired lateral dorsal aortae (arrows) proceeds normally.
(B) tal1 expression in endocardial precursors. Two-color in situ hybridization revealing tal1 (blue) and nkx2.5 (red) expression in wt embryos at the 14- and 18-somite stages. tal1 expression is observed in endocardial but not myocardial precursors during their posterior migration (arrowheads).
(C) Expression of the arterial markers flt1 and hey2 is retained in tal1t21384 mutant endocardium but severely reduced in the endothelium, as revealed by in situ hybridization. Dorsal view of flt1 and hey2 expression in the head, anterior to the top, and lateral view of flt1 and hey2 in the tail (28 hpf). In wt embryos, flt1 expression is observed in all head arteries, the aortic arches, and the endocardium (arrow). In tal1t21384 mutant embryos, expression of flt1 is observed in a few remaining head arteries and the aortic arches. High levels are seen in the endocardium (arrow). In wt embryos, flt1 expression is observed in the dorsal aorta and the developing intersegmental vessels. In the tail of tal1t21384 mutant embryos, expression of flt1 is abolished, except for a few remaining cells that express flt1 at low levels (arrowheads).
In wt embryos, hey2 expression is observed in the endocardium and the aortic arches, and in some parts of the brain and spinal cord. In tal1t21384 mutant embryos, expression in the endocardium (arrow) is increased. In wt embryos, hey2 expression is observed in the dorsal aorta and the developing intersegmental vessels, in spinal cord neurons, and in ventral and dorsal cells of the somites. In tal1t21384 mutant embryos, expression in the dorsal aorta and intersegmental vessels is severely reduced, although some anterior intersegmental vessels and aortic cells retain low levels of hey2 expression (arrowheads).
doi:10.1371/journal.pgen.0030140.g005

Tal1 Expression in Zebrafish Endocardial Precursors

Previous reports have indicated Tal1 expression in the endocardium of the mouse. However, a previous study did not identify tal1 expression in endocardial cells in the zebrafish. We analyzed tal1 expression during early endocardial development using two-color in situ hybridization and detected tal1 expression during all stages of endocardial cell migration (10–20-somite stage) (Figure 5B), consistent with a cell-autonomous role for tal1 in this process. In the trunk, tal1 expression was detected in angioblasts and primitive erythrocytes. Expression is downregulated during endothelial differentiation and only maintained in erythrocytes. Similarly, expression of tal1 in the endocardium was downregulated during early migration and maintained in the primitive myeloid lineage.

Endocardial Differentiation in tal1t21384 Mutants

Results obtained using morpholino knockdown have shown an important role for Tal1 in the differentiation of arterial and venous endothelial cells [18–20]. This suggested that failure of endocardial differentiation could be the primary defect in tal1t21384 mutant hearts. Using the genetic mutant, we reassessed the role of tal1 during arteriovenous differentiation and showed that indeed most arterial gene expression was lost and venous gene expression expanded (Figures 5C and S2). In addition, we resolved a difference between previous data sets [18,19] and showed that migration of angioblasts to the region of the dorsal aorta is not affected. Endothelial cells were present in their correct location ventral to the notochord (Figure S3). Many arterial markers, such as flt1, hey2, and dll4, but not venous markers such as flt4 and dab2, are also expressed in the endocardium at 24–28 hpf, suggesting common regulation of gene expression. Importantly, we observed that tal1 differentially regulates arterial and endocardial gene expression, as expression of flt1 and hey2 is severely reduced in the head and trunk arteries but maintained in the endocardium (Figure 5C). In the absence of more specific endocardial marker genes, these results suggest that defects during migration, rather than during endocardial differentiation, are the cause of the heart defects observed in tal1t21384 mutant embryos.

Assembly of Endocardium and Myocardium during Early Heart Tube Formation

Heartbeat initiated normally in tal1t21384 mutant embryos, indicating normal myocardial differentiation in the absence of an endocardial lining. Indeed, differentiation of atrial and ventricular myocardium was observed at 28 hpf (Figure 6A). However, at this stage, the atrial side of the heart appeared abnormally enlarged (bracket in Figure 6A), suggesting early defects during heart tube morphogenesis.
Therefore, we performed two-color immunohistochemistry, labeling the myocardium (tropomyosin) and endocardium (vegfr4:gfp) in single embryos. Using this method, we confirmed the observation that fusion of the myocardial primordia is initiated in a relatively posterior position to form a butterfly-like shape that changes to a horseshoe-shaped myocardium through fusion of the most posterior part of the myocardial primordium between the 18- and 20-somite stages (Figure 6B). At the onset of myocardial fusion, most of the endocardium is positioned medially to the bilateral myocardial precursors. The connection between the endocardium and the lateral dorsal aortae persists throughout development and occurs through the developing first aortic arches. These cells represent the endocardial cells located dorsal to the myocardium as described by Trinh et al. (brackets in Figure 6B). Fusion of the myocardium occurs dorsal to the endocardium and is initiated just anterior to the most posterior endocardial cells (arrow in Figure 6B). Indeed, around this stage, endocardial cells move ventrally relative to the myocardial cells, and by the 20-somite stage, most endocardial cells are located ventral to the myocardium, especially in the lateral and posterior regions of the heart (Figure 6C). However, the region of the endocardium that connects to the aortic arches is still positioned medially to the myocardium.

Figure 6. Defective Heart Tube Formation in tal1t21384 Mutant Embryos Despite Normal Fusion of the Bilateral Myocardial Precursor Populations
(A) Chamber differentiation in wt and tal1t21384 mutant embryos revealed by two-color in situ hybridization showing amhc (atrium) and cmlc2 (atrium and ventricle) at 28 hpf. Chamber differentiation proceeds normally in mutant embryos, but the primary heart tube is malformed, with an enlarged atrial inflow region (brackets).
(B–D) Two-color immunohistochemistry using anti-tropomyosin (red, myocardium) and anti-gfp (green, vegfr4, endocardium) antibodies. Images were generated as maximum projections of confocal z-stacks (ventral views, anterior to the top). Some yolk platelets show intense autofluorescence at the wavelength used for anti-tropomyosin detection (647 nm).
(B) Heart morphogenesis during heart field fusion in wt embryos. The bilateral heart fields fuse dorsal to the endocardium between the 18- and 20-somite stages. Fusion is initiated in the posterior region of the heart. At the 18-somite stage, tropomyosin-positive cells are located ventral to the first aortic arches.
(C) Endocardial precursors lie ventral to the myocardium in the lateral and posterior regions of the heart. Endocardial and myocardial sheets are closely associated, as their relative positions were only revealed after deconvolution of the confocal stacks. Four deconvolved images of the confocal image stack in (B) are shown in (C).
(D) In tal1t21384 mutant embryos, initial fusion of the myocardial precursor populations occurs normally, although endocardial precursors are absent from the posterior region of the heart field.
(E) Primary heart tube formation from myocardial and endocardial precursors in wt and tal1t21384 mutant embryos revealed by two-color in situ hybridization showing cdh5 (endocardium, blue) and cmlc2 (myocardium, red) expression. Dorsal views, anterior to the top. At the 22-somite stage, wt embryos have formed a cardiac disc, with endocardial cells underlying the circular myocardial primordium.
The medial endocardium within the ring of myocardium forms the connection between the endocardium and the aortic arches. In tal1t21384 mutant embryos, anterior closure of the myocardial primordium is defective due to the aggregation of endocardial precursors. At the 26-somite stage, wt embryos have formed the primary heart tube and rhythmical contractions begin. In tal1t21384 mutant embryos, heart tube formation is delayed. By this stage, myocardial cells have completed fusion at the anterior side of the cardiac disc.
(F) Schematic overview of the fusion of myocardial precursor regions and heart tube formation in wt and tal1t21384 mutant embryos. Endocardium and aortic arches are in green, myocardium in red.
doi:10.1371/journal.pgen.0030140.g006

In tal1t21384 mutant embryos, fusion of the bilateral myocardial precursors is initiated normally and thus appears independent of the endocardium and of tal1 function. Endocardial cells at the 22-somite stage remain located anterior to the myocardium, leading to defects in the anterior fusion of the myocardium, which in wt embryos has occurred by this stage (Figure 6D and 6E). Anterior fusion of the myocardium is delayed until the 26-somite stage (Figure 6E). The medial region of the myocardium at the 22-somite stage gives rise to the ventricle of the heart, whereas the lateral and posterior myocardium gives rise to the atrium. Therefore, the absence of atrial endocardial cells can be explained by an early migration defect of the endocardial precursors. A summary of heart tube assembly from endocardial and myocardial precursors in wt and tal1t21384 mutant embryos is provided in Figure 6F.

Discussion

A Second Genetic Vertebrate Model for Loss of tal1 Function

We have identified a truncating mutation in the zebrafish tal1 gene, making this the second species for which a genetic mutation in this important hematopoietic transcription factor is available. The tal1t21384 allele contains a truncating mutation that deletes the conserved bHLH domain of the protein. Functional interaction of Tal1 and Lmo2 is required for the early stages of the vascular and hematopoietic lineages in the zebrafish, and this interaction occurs at the second helix, a region that is deleted by the tal1t21384 mutation. Moreover, expression of only the helix-loop-helix domain was sufficient to rescue hematopoietic and endothelial development in tal1 morphants, indicating that the N-terminal part of the Tal1 protein is dispensable for early hematopoietic and endothelial development. Therefore, we conclude that the tal1t21384 mutation leads to a complete loss of tal1 function during early cardiovascular and hematopoietic development.

Loss of a Fourth VEGF Receptor Gene during Mammalian Evolution

We used a transgenic line under the control of the zebrafish vegfr4 promoter to follow endocardial development. Surprisingly, vegfr4 represents a fourth VEGF receptor class, with orthologues in all vertebrates for which sufficient genome information is available, except placental mammals. This fourth VEGF receptor class most likely arose as a consequence of the two proposed whole-genome duplication events that occurred before vertebrate divergence. This is evidenced by the phylogenetic tree of the vertebrate VEGF receptors, which places the emergence of this fourth class prior to the divergence of teleost fish and terrestrial vertebrates.
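The synteny argument developed here and in Figure 3B boils down to asking whether a focal gene keeps the same chromosomal neighbours across genomes. A toy sketch of that check; the gene orders below are illustrative placeholders (including the hypothetical genea/geneb entries), not actual Ensembl coordinates.

```python
# Hypothetical gene orders around the focal receptor in two genomes.
zebrafish_locus = ["lnx3", "cdx4", "vegfr4", "genea"]
opossum_locus = ["cdx4", "vegfr4", "geneb", "lnx3"]

def neighbours(order, gene, window=1):
    """Genes within `window` positions of `gene` in an ordered locus."""
    i = order.index(gene)
    return set(order[max(0, i - window):i] + order[i + 1:i + 1 + window])

shared = neighbours(zebrafish_locus, "vegfr4") & neighbours(opossum_locus, "vegfr4")
print(shared)  # {'cdx4'}: the vegfr4-cdx4 linkage is conserved
```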
Linkage of the class III receptor tyrosine kinases (which include the VEGF receptors) to a caudal-type homeobox gene is conserved in the basal chordate amphioxus, suggesting that this configuration represents the ancestral state. Our data strongly suggest the loss of a fourth VEGF receptor within the mammalian lineage, as we identified an orthologue of vegfr4 in the genome of the opossum Monodelphis domestica, tightly linked to the Cdx4 gene. This implies that the loss of a mammalian vegfr4 orthologue occurred relatively recently: after the divergence of the placental (eutherian) and marsupial mammals ~180 million years ago, but before the mammalian radiation. Interestingly, the human and mouse Cdx4 genes are found adjacent to the XIST noncoding RNA that regulates X-chromosome inactivation. XIST originated through pseudogenization of the LNX3 gene, an orthologue of which is also found in the zebrafish vegfr4–cdx4 locus. The differences between marsupial and placental mammals in the mechanism of X-chromosome inactivation, together with the observation that Cdx4 has a role during placental development and the absence in placental mammals of a fourth VEGF receptor present in other vertebrate species (this study), suggest that the cdx4–vegfr4 locus has been an important hotspot during mammalian evolution.

The observation that noneutherian vertebrates have a fourth VEGF receptor has consequences for our understanding of the relationship between tal1 and the VEGF receptors. The murine KDR orthologue, Flk1, was found to function upstream of Tal1, and Tal1 expression was not detected in Flk1−/− embryos. However, the putative zebrafish flk1 orthologue was found to act downstream of tal1 during zebrafish hematopoietic development, leading to controversy over the understanding of their interactions. Here, we redefine the orthology among the different VEGF receptor classes and, importantly, identify the true Flk1/KDR orthologue in the zebrafish (kdr). We find that early nonendothelial kdr expression, which starts 3–4 h before the onset of tal1 expression, is not affected in tal1t21384 mutants. Therefore, we conclude that tal1 does not function as an essential factor for the initiation of kdr expression. Rather, maintenance of high-level endothelial expression of this gene is dependent on tal1.

Morphogenesis and Embryonic Origins of the Endocardium

Lough and Sugi reviewed data on endocardial morphogenesis in higher vertebrates and proposed that endocardial and myocardial precursors are both derived from the same anatomic location: the cardiogenic mesoderm. From there, endocardial precursors migrate as single cells between the myocardium and the underlying endoderm. Subsequently, endocardial precursors assemble into two bilateral vascular cords that later fuse to form the inner lining of the primitive heart tube. Our data show that endocardial morphogenesis in the zebrafish embryo differs in at least two important aspects from that found in higher vertebrates. First, endocardial precursors appear to be derived from a distinct anterior region of the cardiogenic mesoderm and require posterior migration to the site of heart tube formation. This difference is potentially due to the lack of dynamic observations of endocardial precursor migration in higher vertebrates. Indeed, some data indicate that in higher vertebrates the endocardium arises from a specific region of the cardiogenic mesoderm. Our results give a first look at the origin of endocardial cells during early development.
However, to obtain conclusive evidence regarding the cell movements and origins of the endocardium, higher-resolution fate mapping and cell tracing will be required. Second, zebrafish endocardial precursors do not assemble into bilateral vascular cords, but form a sheet medial to the myocardial precursors. The bilayered disc that is formed through fusion of the myocardial primordia over the endocardium is then converted into the primitive heart tube. This last difference is potentially due to differences in developmental timing between fusion of the bilateral primordia and heart tube morphogenesis in zebrafish and higher vertebrates.

Using the vegfr4:gfp and fli1a:gfp lines and two-color in situ hybridization, we suggest that the endocardium of the primitive heart tube forms from two bilateral precursor populations that are located immediately anterior to the bilateral myocardial precursor populations that express nkx2.5. The finding that the endocardium might arise from a region anterior to the myocardial primordia is surprising, given the lineage-tracing experiments performed by Serbedzija and coworkers. In their study, cells were labeled anterior to the nkx2.5-expressing myocardial primordia at the ten- to 12-somite stage. At 33 hpf, these cells were found to contribute to the "head mesenchyme", and no cells were found in the heart. One explanation for this finding is that only cells immediately anterior to the nkx2.5-expressing region were labeled. Our time-lapse imaging indicates that these cells give rise to the aortic arches and the head vasculature, whereas endocardial precursors originate from a position within the part of the anterior lateral plate mesoderm that is slightly more anterior and ventral. Therefore, Serbedzija and coworkers most likely did not label the cells that we show here constitute the endocardial precursors.

Recently, Kattman et al. used the differentiation of murine embryonic stem cells to show the existence of a cell population expressing Flk1 with the potential to form both myocardial and endocardial cells in vitro. Our data indicate that in the zebrafish embryo, there are anatomically separate populations of endocardial and myocardial precursor cells during early developmental stages. We show that during normal development both lineages undergo different morphogenetic movements that are suggestive of early lineage separation of endocardium and myocardium. However, this does not imply that these precursors are restricted to one particular differentiation fate at this stage; given alternative (ex vivo) cues, they may still contribute to both lineages. Our data also indicate that, anatomically, the endocardial precursors are found in close association with a particular subset of the hematopoietic lineage in the zebrafish: the anterior population of primitive myeloid cells. Lineage tracing experiments will identify whether there exists a lineage relationship through a common endocardial–myeloid progenitor, or whether endocardial and myeloid precursors are simply intermingled during one stage of their development. While resolving this specific question is beyond the scope of this paper, it is worth noting that we observed migrating cells with macrophage morphology and low levels of vegfr4:gfp expression originating from the same bilateral populations of cells as those giving rise to the endocardium and the head vasculature (Figure 4A).
This transgene is not expressed in differentiated primitive myeloid cells, and so the signal potentially represents residual gfp protein from a previous stage of their development.

How the endocardial lining of the primitive heart tube becomes established is one of the least-understood aspects of cardiac morphogenesis. In recent years, studies on zebrafish embryos have provided significant insight into the genetic regulation of myocardial development. We show here that a similar approach can be taken to study endocardial development. Given the many interactions between the myocardium and endocardium, both during development and in adult physiology and disease, our findings will provide a more comprehensive understanding of the morphogenesis and genetic regulation of the heart.

Figure S1. kdr Expression during Gastrulation
At the tailbud (TB) stage, kdr but not vegfr4 is expressed. Expression of kdr is diffuse, but highest around the tailbud and near the embryonic axis.
(4.3 MB EPS)

Figure S2. Endothelial Gene Expression in the Trunk
Trunk expression of different markers in wt and tal1t21384 mutant embryos at 24–30 hpf for general endothelial (cdh5, vegfr4), venous (flt4, dab2), and arterial (notch3, ephrinB2) differentiation. In mutant embryos, expression of arterial marker genes is reduced or absent, whereas the expression domain of venous marker genes is expanded to the dorsal aorta.
(8.1 MB EPS)

Figure S3. Presence of Endothelial Cells in the Region of the Dorsal Aorta
Immunohistochemistry using an anti-GFP antibody in the fli1a:gfp background; brown labeling indicates gfp-positive endothelial cells. Although no lumenized vessels form in the trunk of tal1t21384 mutants, endothelial cells are present in the location of the dorsal aorta (arrows), and some intersegmental vessels form (arrowhead).
(7.2 MB EPS)

Video S1. Time-Lapse Movie of Normal Endocardial Development in the vegfr4:gfp Transgenic Background
(4.9 MB MOV)

Video S2. Time-Lapse Movie of Normal Endocardial Development in the fli1a:gfp Transgenic Background
(2.8 MB MOV)

Video S3. Time-Lapse Movie of Endocardial Development in tal1t21384 Mutants in the vegfr4:gfp Transgenic Background
(2.2 MB MOV)

The National Center for Biotechnology Information (NCBI) GenBank (http://www.ncbi.nlm.nih.gov/sites/entrez?db=Nucleotide) accession numbers for the nucleotide sequences of the zebrafish VEGF receptors are kdr, AY523999; flt1, AY524000; and flt4, AY5234001.
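The GenBank entries listed above can also be retrieved programmatically; a sketch using Biopython's Entrez interface (requires network access, and NCBI asks for a contact e-mail; the address below is a placeholder).

```python
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # placeholder; NCBI requires a real contact

# Fetch the zebrafish kdr nucleotide record listed above.
with Entrez.efetch(db="nucleotide", id="AY523999",
                   rettype="gb", retmode="text") as handle:
    record = SeqIO.read(handle, "genbank")

print(record.id, len(record.seq), record.description)
```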
The NCBI GenBank and Ensembl (http://www.ensembl.org/) accession numbers for the genes and gene products (other than the zebrafish VEGF receptors noted above) used for phylogeny reconstruction are Flk1 (mouse/Mus musculus), NP_034742; FLT1 (chick/Gallus gallus), NP_989583; FLT1 (human/Homo sapiens), NP_002010; Flt1 (mouse), NP_034358; FLT4 (chick), XP_414600; FLT4 (human), NP_002011; Flt4 (mouse), NP_032055; KDR (frog/Xenopus tropicalis), ENSXETG00000021061; KDR (human), NP_002244; kdr (medaka/Oryzias latipes), GENSCAN00000045289; KDR (opossum/Monodelphis domestica), encoded by ENSMODG00000020673; KDR (platypus/Ornithorhynchus anatinus), ENSOANG00000003802; kdr (stickleback/Gasterosteus aculeatus), ENSGACG00000014277; kdr (tetraodon/Tetraodon nigroviridis), GSTENG00032761001; KDR (chick), NP_001004368; VEGFR4 (chick), XP_420292; VEGFR4 (frog), GENSCAN00000039479; vegfr4 (fugu/Takifugu rubripes), SINFRUG00000131563; vegfr4 (medaka), ENSORLG00000001940; VEGFR4 (opossum), ENSMODG00000020842; VEGFR4 (platypus), ENSOANG00000002222; vegfr4 (stickleback), ENSGACG00000017117; vegfr4 (tetraodon), GSTENG00031225001; and vegfr4 (zebrafish), NP_571547.

We thank Christine Mummery and Ben Hogan for editorial review, Youri Adolfs and Jeroen Korving for immunohistochemistry and tissue processing, Josi Peterson-Maduro and Hinrich Habeck for cloning the zebrafish VEGF receptors, and the Tübingen 2000 screen consortium for identifying the t21384 allele. The allele was identified while S. Schulte-Merker was working at Exelixis Deutschland GmbH. We also thank members of the Schulte-Merker laboratory for comments and scientific discussions.

J. Bussmann conceived and performed experiments and wrote the paper. J. Bakkers conceived the experiments in Figure 5. S. Schulte-Merker conceived experiments and wrote the paper.

- 1. Olson EN (2006) Gene regulatory networks in the evolution and development of the heart. Science 313: 1922–1927.
- 2. Lough J, Sugi Y (2000) Endoderm and heart development. Dev Dyn 217: 327–342.
- 3. Kattman SJ, Huber TL, Keller GM (2006) Multipotent flk-1(+) cardiovascular progenitor cells give rise to the cardiomyocyte, endothelial, and vascular smooth muscle lineages. Dev Cell 11: 723–732.
- 4. Schultheiss TM, Burch JB, Lassar AB (1997) A role for bone morphogenetic proteins in the induction of cardiac myogenesis. Genes Dev 11: 451–462.
- 5. Nemer G, Nemer M (2002) Cooperative interaction between GATA5 and NF-ATc regulates endothelial-endocardial differentiation of cardiogenic cells. Development 129: 4045–4055.
- 6. Cohen-Gould L, Mikawa T (1996) The fate diversity of mesodermal cells within the heart field during chicken early embryogenesis. Dev Biol 177: 265–273.
- 7. Lee RK, Stainier DY, Weinstein BM, Fishman MC (1994) Cardiovascular development in the zebrafish. II. Endocardial progenitors are sequestered within the heart field. Development 120: 3361–3366.
- 8. Keegan BR, Meyer D, Yelon D (2004) Organization of cardiac chamber progenitors in the zebrafish blastula. Development 131: 3081–3091.
- 9. Chen JN, Fishman MC (1996) Zebrafish tinman homolog demarcates the heart field and initiates myocardial differentiation. Development 122: 3809–3816.
- 10. Glickman NS, Yelon D (2002) Cardiac development in zebrafish: Coordination of form and function. Semin Cell Dev Biol 13: 507–513.
- 11. Stainier DY, Lee RK, Fishman MC (1993) Cardiovascular development in the zebrafish. I. Myocardial fate map and heart tube formation. Development 119: 31–40.
- 12. Yelon D, Horne SA, Stainier DY (1999) Restricted expression of cardiac myosin genes reveals regulated aspects of heart tube assembly in zebrafish. Dev Biol 214: 23–37.
- 13. Drake CJ, Fleming PA (2000) Vasculogenesis in the day 6.5 to 9.5 mouse embryo. Blood 95: 1671–1679.
- 14. Gering M, Yamada Y, Rabbitts TH, Patient RK (2003) Lmo2 and Scl/Tal1 convert non-axial mesoderm into haemangioblasts which differentiate into endothelial cells in the absence of Gata1. Development 130: 6187–6199.
- 15. Shivdasani RA, Mayer EL, Orkin SH (1995) Absence of blood formation in mice lacking the T-cell leukaemia oncoprotein tal-1/SCL. Nature 373: 432–434.
- 16. Robb L, Lyons I, Li R, Hartley L, Kontgen F, et al. (1995) Absence of yolk sac hematopoiesis from mice with a targeted disruption of the scl gene. Proc Natl Acad Sci U S A 92: 7075–7079.
- 17. Visvader JE, Fujiwara Y, Orkin SH (1998) Unsuspected role for the T-cell leukemia protein SCL/tal-1 in vascular development. Genes Dev 12: 473–479.
- 18. Patterson LJ, Gering M, Patient R (2005) Scl is required for dorsal aorta as well as blood formation in zebrafish embryos. Blood 105: 3502–3511.
- 19. Dooley KA, Davidson AJ, Zon LI (2005) Zebrafish scl functions independently in hematopoietic and endothelial development. Dev Biol 277: 522–536.
- 20. Juarez MA, Su F, Chun S, Kiel MJ, Lyons SE (2005) Distinct roles for SCL in erythroid specification and maturation in zebrafish. J Biol Chem 280: 41636–41644.
- 21. Habeck H, Odenthal J, Walderich B, Maischein H, Schulte-Merker S (2002) Analysis of a zebrafish VEGF receptor mutant reveals specific disruption of angiogenesis. Curr Biol 12: 1405–1412.
- 22. Lawson ND, Weinstein BM (2002) In vivo imaging of embryonic vascular development using transgenic zebrafish. Dev Biol 248: 307–318.
- 23. Jin SW, Beis D, Mitchell T, Chen JN, Stainier DY (2005) Cellular and molecular analyses of vascular tube and lumen formation in zebrafish. Development 132: 5199–5209.
- 24. Kumar S, Tamura K, Nei M (2004) MEGA3: Integrated software for molecular evolutionary genetics analysis and sequence alignment. Brief Bioinform 5: 150–163.
- 25. Prince VE, Moens CB, Kimmel CB, Ho RK (1998) Zebrafish hox genes: Expression in the hindbrain region of wild-type and mutants of the segmentation gene, valentino. Development 125: 393–406.
- 26. Liao EC, Paw BH, Oates AC, Pratt SJ, Postlethwait JH, et al. (1998) SCL/Tal-1 transcription factor acts downstream of cloche to specify hematopoietic and vascular progenitors in zebrafish. Genes Dev 12: 621–626.
- 27. Detrich HW 3rd, Kieran MW, Chan FY, Barone LM, Yee K, et al. (1995) Intraembryonic hematopoietic cell migration during vertebrate development. Proc Natl Acad Sci U S A 92: 10713–10717.
- 28. Lieschke GJ, Oates AC, Paw BH, Thompson MA, Hall NE, et al. (2002) Zebrafish SPI-1 (PU.1) marks a site of myeloid development independent of primitive erythropoiesis: implications for axial patterning. Dev Biol 246: 274–295.
- 29. Kalev-Zylinska ML, Horsfield JA, Flores MV, Postlethwait JH, Vitas MR, et al. (2002) Runx1 is required for zebrafish blood and vessel development and expression of a human RUNX1-CBF2T1 transgene advances a model for studies of leukemogenesis. Development 129: 2015–2030.
- 30. Zhong TP, Rosenberg M, Mohideen MA, Weinstein B, Fishman MC (2000) gridlock, an HLH gene required for assembly of the aorta in zebrafish. Science 287: 1820–1824.
- 31. Liao W, Bisgrove BW, Sawyer H, Hug B, Bell B, et al. (1997) The zebrafish gene cloche acts upstream of a flk-1 homologue to regulate endothelial cell differentiation. Development 124: 381–389.
- 32. Lawson ND, Scheer N, Pham VN, Kim CH, Chitnis AB, et al. (2001) Notch signaling is required for arterial-venous differentiation during embryonic vascular development. Development 128: 3675–3683.
- 33. Song HD, Sun XJ, Deng M, Zhang GW, Zhou Y, et al. (2004) Hematopoietic gene expression profile in zebrafish kidney marrow. Proc Natl Acad Sci U S A 101: 16240–16245.
- 34. Berdougo E, Coleman H, Lee DH, Stainier DY, Yelon D (2003) Mutation of weak atrium/atrial myosin heavy chain disrupts atrial function and influences ventricular morphogenesis in zebrafish. Development 130: 6121–6129.
- 35. Dong PD, Munson CA, Norton W, Crosnier C, Pan X, et al. (2007) Fgf10 regulates hepatopancreatic ductal system patterning and differentiation. Nat Genet 39: 397–402.
- 36. Lejeune F, Maquat LE (2005) Mechanistic links between nonsense-mediated mRNA decay and pre-mRNA splicing in mammalian cells. Curr Opin Cell Biol 17: 309–315.
- 37. Mably JD, Mohideen MA, Burns CG, Chen JN, Fishman MC (2003) heart of glass regulates the concentric growth of the heart in zebrafish. Curr Biol 13: 2138–2147.
- 38. Chang CP, Neilson JR, Bayle JH, Gestwicki JE, Kuo A, et al. (2004) A field of myocardial-endocardial NFAT signaling underlies heart valve morphogenesis. Cell 118: 649–663.
- 39. Ema M, Takahashi S, Rossant J (2006) Deletion of the selection cassette, but not cis-acting elements, in targeted Flk1-lacZ allele reveals Flk1 expression in multipotent mesodermal progenitors. Blood 107: 111–117.
- 40. Motoike T, Markham DW, Rossant J, Sato TN (2003) Evidence for novel fate of Flk1+ progenitor: contribution to muscle lineage. Genesis 35: 153–159.
- 41. Rottbauer W, Just S, Wessels G, Trano N, Most P, et al. (2005) VEGF-PLCgamma1 pathway controls cardiac contractility in the embryonic heart. Genes Dev 19: 1624–1634.
- 42. Thompson MA, Ransom DG, Pratt SJ, MacLennan H, Kieran MW, et al. (1998) The cloche and spadetail genes differentially affect hematopoiesis and vasculogenesis. Dev Biol 197: 248–269.
- 43. Covassin LD, Villefranc JA, Kacergis MC, Weinstein BM, Lawson ND (2006) Distinct genetic interactions between multiple Vegf receptors are required for development of different blood vessel types in zebrafish. Proc Natl Acad Sci U S A 103: 6554–6559.
- 44. Trinh LA, Stainier DY (2004) Fibronectin regulates epithelial organization during myocardial migration in zebrafish. Dev Cell 6: 371–382.
- 45. Patterson LJ, Gering M, Eckfeldt CE, Green AR, Verfaillie CM, et al. (2006) The transcription factors, Scl and Lmo2, act together during development of the haemangioblast in zebrafish. Blood 109: 2389–2398.
- 46. Dehal P, Boore JL (2005) Two rounds of whole genome duplication in the ancestral vertebrate. PLoS Biol 3: e314. doi:10.1371/journal.pbio.0030314.
- 47. Ferrier DE, Dewar K, Cook A, Chang JL, Hill-Force A, et al. (2005) The chordate ParaHox cluster. Curr Biol 15: R820–R822.
- 48. Duret L, Chureau C, Samain S, Weissenbach J, Avner P (2006) The Xist RNA gene evolved in eutherians by pseudogenization of a protein-coding gene. Science 312: 1653–1655.
- 49. Cooper DW (1993) X-inactivation in marsupials and monotremes. Semin Dev Biol 4: 117–128.
- 50. van Nes J, de Graaff W, Lebrin F, Gerhard M, Beck F, et al. (2006) The Cdx4 mutation affects axial development and reveals an essential role of Cdx genes in the ontogenesis of the placental labyrinth in mice. Development 133: 419–428.
- 51. Ema M, Faloon P, Zhang WJ, Hirashima M, Reid T, et al. (2003) Combinatorial effects of Flk1 and Tal1 on vascular and hematopoietic development in the mouse. Genes Dev 17: 380–393.
- 52. Sugi Y, Markwald RR (1996) Formation and early morphogenesis of endocardial endothelial precursor cells and the role of endoderm. Dev Biol 175: 66–83.
- 53. Serbedzija GN, Chen JN, Fishman MC (1998) Regulation in the heart field of zebrafish. Development 125: 1095–1101.
- 54. Hsieh PC, Davis ME, Lisowski LK, Lee RT (2006) Endothelial-cardiomyocyte interactions in cardiac development and repair. Annu Rev Physiol 68: 51–66.
Gonadotropins may be used in women with ovulation disorders or in women undergoing ART procedures, such as IVF. Their role during these treatments is to stimulate the ovaries to produce multiple eggs.

Follicle stimulating hormone (FSH) and luteinizing hormone (LH) are known as gonadotropins. They are the primary hormones responsible for regulating the female reproductive cycle, and both are produced and released by the pituitary gland in the brain. Some women have a hard time conceiving because the pituitary gland either fails to produce the amount of gonadotropins needed for ovulation, or fails to release gonadotropins at the appropriate time during the reproductive cycle. If the right amounts of gonadotropins are not released at the right time, mature follicles may not develop and ovulation may not occur.

Commercially produced gonadotropins are derived from two sources: biological (urine-based) or manufactured (recombinant). Urine-based products have been used to enhance fertility for over thirty years. These products, often referred to as human menopausal gonadotropins (hMG), are purified medications extracted from the urine of postmenopausal women through a complex biotechnical engineering process. Recombinant products contain only FSH.

Factors to Consider

Important factors to know when considering gonadotropin therapy include:
- Route of administration. Recombinant products are administered by subcutaneous (under the skin) injection. With the exception of one medication, urine-based products are administered by intramuscular (within a muscle) injection. Subcutaneous injections allow a woman to self-inject the medication, whereas intramuscular injections require the assistance of a health care provider or partner.
- Dose. The purity of recombinant gonadotropins may allow women to use lower doses for a shorter duration of time compared to urine-based products.
- Variability. Both urine-based and recombinant products are carefully manufactured to ensure that they do not contain harmful contaminants. However, it is difficult to separate FSH and LH from the other urinary proteins in urine-based products, so these products contain lower amounts of FSH and LH. In contrast, the recombinant manufacturing process produces purer FSH.
- Side effects. The frequency of side effects is similar in patients receiving recombinant and urine-based gonadotropins. The side effects include headache, breast pain, and OHSS, characterized by abdominal pain, pelvic pain, bloating, nausea, vomiting, and weight gain. However, because of their purity, recombinant products may be associated with fewer injection-site side effects, such as pain, redness, itching, or swelling.
- Effectiveness. Both urine-based and recombinant products have been proven safe and effective for women who fail to ovulate because of pituitary failure and for women undergoing ART procedures, such as IVF.

Gonadotropin Plus Intrauterine Insemination

Gonadotropin therapy combined with an intrauterine insemination cycle has a 15 to 20 percent success rate per cycle for unexplained infertility of three years' duration, and a 15 percent chance of multiples (including higher-order multiples). The medication, depending on the amount used in one cycle and the number of ultrasounds required, usually costs $2,000 to $3,000 per cycle.
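To put the quoted per-cycle figures in perspective, here is a minimal sketch of how per-cycle rates compound over repeated attempts. It assumes, hypothetically, that cycles are independent and that the per-cycle success rate stays constant; neither assumption comes from the text above, and real-world rates vary with age and diagnosis.

```python
# Illustrative only: cumulative chance of success over repeated
# gonadotropin-plus-IUI cycles, assuming (hypothetically) independent
# cycles with a constant per-cycle success rate.

def cumulative_success(per_cycle_rate: float, n_cycles: int) -> float:
    """Probability of at least one success in n independent cycles."""
    return 1.0 - (1.0 - per_cycle_rate) ** n_cycles

for rate in (0.15, 0.20):        # the 15-20% range quoted above
    for n in (1, 3, 6):
        est_cost = n * 2500      # midpoint of the $2,000-$3,000 per-cycle cost
        print(f"{rate:.0%}/cycle, {n} cycle(s): "
              f"{cumulative_success(rate, n):.0%} cumulative, ~${est_cost:,}")
```

The only point of the sketch is that rates compound rather than add: under these assumptions, three cycles at 15 percent per cycle give roughly a 39 percent cumulative chance, not 45 percent.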
Revista Panamericana de Salud Pública
Print version ISSN 1020-4989

MERTENS, T. E. and LOW-BEER, D. HIV and AIDS: where is the epidemic going? Rev Panam Salud Publica [online]. 1997, vol. 1, n. 3, pp. 220-229. ISSN 1020-4989. http://dx.doi.org/10.1590/S1020-49891997000300009.

Routine surveillance of HIV (human immunodeficiency virus) infection and AIDS has been established over the past decade in many countries around the world. HIV estimates derived from empirical data are essential to the assessment of the HIV situation in different parts of the world, and trends are used in tracking the development of regional epidemics, thereby keeping intervention activities focused on realities. As of the end of 1995, and following an extensive country-by-country review of HIV/AIDS data, a cumulative total of 6 million AIDS cases were estimated to have occurred in adults and children worldwide, and currently 20.1 million adults are estimated to be alive and infected with HIV or have AIDS. Of the total prevalent HIV infections, the majority remain concentrated in eastern, central, and southern Africa, but the epidemic is evolving, with spread of infection from urban to rural areas, as well as to West and South Africa, India, and Southeast Asia, and to a lesser extent (with proportional shifts to heterosexual infections) in North America, Western Europe, and Latin America. While the longer-term dimensions of the HIV epidemic at the global level cannot be forecast with confidence, WHO currently projects a cumulative total of close to 40 million HIV infections in men, women, and children by the year 2000. By that time, the male:female ratio of new infections will be close to 1:1. Recent trends indicate that HIV prevalence levels may be stabilizing or even decreasing among pregnant women in southern Zaire and parts of Uganda, among military recruits aged 21 in Thailand, and in some populations of northern Europe and the USA. While these changes may take place as part of the intrinsic dynamic of the epidemic, there is some evidence that declines in HIV prevalence are related to declines in HIV incidence, which are, at least partly, due to prevention efforts. The challenge for surveillance and evaluation methods is now to identify the ingredients of success, which may reveal a glimmer of hope.
Moderate drinking has long been associated with a number of health benefits, including a lower risk of cardiovascular problems, dementia, and Type 2 diabetes. But due to the design of most studies on drinking, there has been doubt about whether alcohol is actually the cause of these outcomes: no large randomized, controlled study of drinking has ever been conducted, so preexisting differences might account for both the decision to drink and the better health outcomes seen in moderate drinkers. However, a new study on alcohol and Type 2 diabetes prevention seems to show not only that alcohol is associated with a lower diabetes risk, but also that this link cannot be explained by the healthier lifestyles of moderate drinkers.

The study, published on the Web site of the American Journal of Clinical Nutrition, examined the drinking habits of over 35,000 Dutch adults between the ages of 20 and 70, along with four other characteristics: body weight, physical activity, smoking status, and diet. A low-risk category was defined for each of these traits. After an average follow-up period of just over 10 years, it was found that moderate drinkers in each of the four low-risk categories had a lower risk of developing Type 2 diabetes than did nondrinkers (moderate drinking was defined as between 5 and 15 grams of alcohol daily for women and between 5 and 30 for men, estimated based on self-reporting). After controlling for baseline factors such as age and sex, this risk was 65% lower for those with normal body weight, 35% lower for the physically active (at least 30 minutes of exercise a day, on average), 46% lower for nonsmokers, and 43% lower for those with a healthy diet (based on the DASH diet). Among study subjects who fit into at least three of the low-risk groups, the risk of developing Type 2 diabetes was 44% lower for moderate drinkers. (A rough illustration of what these relative figures could mean in absolute terms appears at the end of this post.)

While this study appears to dispel the claim that moderate drinkers only have better health outcomes because of their all-around healthy behavior, it still cannot rule out the possibility that study subjects who chose not to drink shared some common, unknown trait that increased their risk of developing diabetes. Only the traits controlled for in the study can be ruled out as causes of differences in diabetes risk, which means that alcohol might not actually be the reason, or not the only reason, for the lower risk seen in moderate drinkers.

When deciding whether or not to drink, it is also important to remember that alcohol has known risks. In addition to its potential for addiction and abuse, in some people alcohol has a significant effect on the blood triglyceride level.

What are the most important factors in your decision to drink or not to drink? Although most experts currently do not recommend starting to drink alcohol for its potential health benefits, if this advice changed, would you start drinking if you do not? If medical opinion turned against alcohol, would you stop drinking if you do? Should the US government support a randomized, controlled study of moderate alcohol consumption to learn more conclusively about its effects? Leave a comment below!
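As flagged above, here is a rough, hypothetical illustration of what the study's relative reductions mean in absolute terms. The 5% baseline incidence is invented for illustration and does not come from the study.

```python
# Illustrative only: converting the study's relative risk reductions
# into absolute risks for a hypothetical baseline incidence.
baseline_risk = 0.05  # assumed 10-year diabetes incidence in nondrinkers (hypothetical)

relative_reductions = {
    "normal body weight": 0.65,
    "physically active":  0.35,
    "nonsmoker":          0.46,
    "healthy diet":       0.43,
    "3+ low-risk traits": 0.44,
}

for group, rrr in relative_reductions.items():
    absolute_risk = baseline_risk * (1.0 - rrr)
    print(f"{group}: {absolute_risk:.1%} vs. {baseline_risk:.1%} baseline")
```

Under that assumed baseline, "65% lower" means roughly 1.8% instead of 5% over 10 years; the absolute gap shrinks or grows with the true baseline, which the relative figures alone do not reveal.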
For patients with moderate to severe chronic kidney disease (CKD), treatment with activated vitamin D may reduce the risk of death by approximately one-fourth, suggests a study in the August Journal of the American Society of Nephrology.

Many patients with advanced CKD take the drug calcitriol, an oral form of activated vitamin D, to treat elevated levels of parathyroid hormone. "Although activated vitamin D is known to influence many biological processes, previous clinical knowledge is limited to its effect on parathyroid hormone levels," explains Dr. Bryan Kestenbaum of the University of Washington in Seattle, one of the study authors.

The study included 1,418 patients who had stage 3 to 4 CKD, meaning moderately to severely reduced kidney function. All patients also had high parathyroid hormone levels (hyperparathyroidism), which can contribute to weakening of the bones in CKD. The researchers identified one group of patients who were being treated with calcitriol to lower their parathyroid hormone levels and another group who were not receiving calcitriol. During a two-year follow-up period, mortality rates were compared for patients who were and were not taking calcitriol. "We then adjusted for differences in age, kidney function, parathyroid hormone levels, other illnesses, and other medications," says Dr. Kestenbaum.

In the adjusted analysis, the overall risk of death was about 26 percent lower for patients taking calcitriol. Patients on calcitriol were also less likely to develop end-stage renal disease, which requires dialysis to replace lost kidney function. Overall, treatment with calcitriol was associated with a 20 percent reduction in the risk of either death or dialysis. The reduction in mortality with calcitriol was unrelated to its effect on parathyroid hormone levels. (An illustrative calculation of what a 26 percent relative reduction can mean in practice appears at the end of this article.)

"Recently, there has been an increased focus on the effects of vitamin D beyond those on bone health," Dr. Kestenbaum comments. "Vitamin D deficiency has been associated with risk factors for cardiovascular disease, such as high blood pressure, diabetes, and inflammation." Previous studies have suggested that treatment with intravenous vitamin D can improve survival in patients on hemodialysis. The new results suggest that treatment with oral activated vitamin D may also improve survival in patients with CKD who do not yet require dialysis. "Randomized clinical trials are needed to test the hypothesis that vitamin D therapy can improve cardiovascular health and survival in CKD," Dr. Kestenbaum adds. "Future studies should also examine the role of non-activated vitamin D, which is less expensive and less toxic."

The study has some important limitations, including a lack of data on other factors that may have affected survival in patients taking calcitriol. Also, since the study included mainly older, white men, the results may not apply to younger, more ethnically diverse populations with CKD.

Source: American Society of Nephrology
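As flagged above, one way to read a 26 percent relative reduction is as a number needed to treat (NNT). The sketch below shows the arithmetic; the two-year baseline mortality it uses is an assumption for illustration, not a figure reported in the study.

```python
# Illustrative only: number needed to treat (NNT) implied by a relative
# risk reduction, given an assumed baseline event rate.

def nnt(baseline_risk: float, relative_reduction: float) -> float:
    """NNT = 1 / absolute risk reduction."""
    absolute_risk_reduction = baseline_risk * relative_reduction
    return 1.0 / absolute_risk_reduction

# Hypothetical 10% two-year mortality in untreated stage 3-4 CKD:
print(f"NNT ~ {nnt(0.10, 0.26):.0f}")  # ~38 patients treated to avert one death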
The SARS virus set alarm bells ringing across the world when it first appeared in 2002, but now a review of the effectiveness of the treatments used against it has found no evidence that any of them worked. The review was commissioned by the World Health Organization and has been published in PLoS Medicine.

Severe acute respiratory syndrome (SARS) is caused by a virus; the main symptoms are pneumonia and fever. The virus is passed on when people sneeze or cough. In 2003 there were over 8,000 cases and 774 deaths worldwide. The situation was alarming because the first cases had appeared only in 2002, in China, so the best way to treat this new disease was unknown. Few drugs are effective against viruses, and all doctors can usually do with a viral disease is treat symptoms like fever and inflammation and rely on the body's own immune system to fight off the virus. However, in recent years a number of antiviral drugs have been developed (for example, several are in use against HIV/AIDS), and there was hope that some of them might be active against SARS. Steroids have also been used in SARS treatment to try to reduce the inflammation of the lungs.

To find out which, if any, of the potential treatments were effective, a number of research studies were carried out, both during and since the outbreak. The World Health Organization (WHO) established an International SARS Treatment Study Group, which recommended that a systematic review of potential SARS treatments be carried out. In particular, it was considered important to bring together all the available evidence on the use of certain antiviral drugs (ribavirin, lopinavir, and ritonavir), steroids, and proteins called immunoglobulins, which are found naturally in human blood. The WHO group wanted to know how these treatments affected the virus outside the body (in vitro) and whether they helped the condition of patients and reduced the death rate, especially in those patients who developed a dangerous complication called acute respiratory distress syndrome.

Researchers conducted a comprehensive search for information from research studies that met carefully predefined selection criteria. They found 54 SARS treatment studies, 15 in-vitro studies, and three acute respiratory distress syndrome studies. Some of the in-vitro studies of the antiviral drugs found that a particular drug reduced the reproduction rate of the viruses, but most of the studies of these drugs in patients were inconclusive. Of 29 studies on steroid use, 25 were inconclusive and four found that the treatment caused possible harm.

From the published studies, it is not possible to say whether any of the treatments used against SARS were effective. It is now many months since any new cases have been reported, but it is possible that the same or a similar virus might cause outbreaks in the future. It is disappointing that none of the research on SARS so far is likely to be useful in helping to decide on the best treatments to use in such an outbreak. The authors examined the weaknesses of the studies they found and urge that more effective research methods be applied in any future outbreaks. Their recommendations mean that researchers should be better prepared to learn from potential future outbreaks.

Citation: Stockman LJ, Bellamy R, Garner P (2006) SARS: Systematic review of treatment effects. PLoS Med 3(9): e343. http://dx.doi.org/10.1371/journal.pmed.0030343
THE FOLLOWING RESEARCH ARTICLES WILL ALSO BE PUBLISHED ONLINE:

Surprising results from major Asian study

Diarrhoea remains one of the leading causes of death among children in developing countries. A major study in six countries has found that one microorganism that causes diarrhoea, Shigella, appears to be more common in the poorest communities of Asia than previously thought. Antibiotic-resistant strains were also found to have emerged. Based on their findings, the Korea-based researchers, writing in PLoS Medicine, call for more efforts to reduce the number of cases.

Citation: von Seidlein L, Kim DR, Ali M, Lee H, Wang XY, et al. (2006) A multicentre study of Shigella diarrhoea in six Asian countries: Disease burden, clinical manifestations, and microbiology. PLoS Med 3(9): e353. http://dx.doi.org/10.1371/journal.pmed.0030353

Is DOTS-Plus a cost-effective strategy for treating multidrug-resistant tuberculosis?

A research paper published this week in the international open access journal PLoS Medicine evaluated the feasibility, effectiveness, cost, and cost-effectiveness of a DOTS-Plus pilot project established at Makati Medical Center in Manila, the Philippines, in 1999. The authors, led by Katherine Floyd of the World Health Organization, studied 117 patients and showed that in this setting the average cost per patient was US$3,355 to the health system and US$837 to the patient. The authors calculated that the mean cost per disability-adjusted life year (DALY) gained by the DOTS-Plus project was US$242 (range US$85 to US$426); a worked example of the cost-per-DALY arithmetic appears at the end of this release. In a related Perspective, Paul Garner and colleagues from the Liverpool School of Tropical Medicine assess the implications of these findings further, especially whether they can be extrapolated outside this specific setting.

Citation: Tupasi TE, Gupta R, Quelapio MID, Quelapio RB, Mira NR, et al. (2006) Feasibility and cost-effectiveness of treating multidrug-resistant tuberculosis: A cohort study in the Philippines. PLoS Med 3(9): e352. http://dx.doi.org/10.1371/journal.pmed.0030352

A novel strategy for treatment of rheumatoid arthritis

A research paper published this week in the international open access journal PLoS Medicine describes a possible new experimental therapy for rheumatoid arthritis and other autoimmune inflammatory diseases. The researchers, led by Rikard Holmdahl from Lund University, exploit a new finding in a mouse model of arthritis: lack of reactive oxygen species makes the arthritis worse rather than, as had been expected, better.
Reactive oxygen species are thought to be important in defense against pathogens. The researchers tested phytol, which increased the oxidative burst in vivo, and found that it improved the arthritis in the mouse model. In testing, also in mice, phytol showed effectiveness comparable to standard arthritis therapies such as anti-tumour necrosis factor-α and methotrexate. The authors conclude that these results "suggest a novel pathway of autoimmune inflammatory disease and possibly a novel therapeutic strategy." A Perspective article by Andrew Cope, from Imperial College London, discusses the study's findings further.

Citation: Hultqvist M, Olofsson P, Gelderman KA, Holmberg J, Holmdahl R (2006) A new arthritis therapy with oxidative burst inducers. PLoS Med 3(9): e348. http://dx.doi.org/10.1371/journal.pmed.0030348
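As noted in the DOTS-Plus summary above, a cost-per-DALY figure is a simple ratio of cost to health gain. The sketch below shows that arithmetic; the per-patient costs are taken from the summary, while the DALYs-gained figure is a hypothetical input chosen only to reproduce the reported US$242 ratio, since the underlying totals are not given in the release.

```python
# Illustrative only: cost per DALY gained for the DOTS-Plus project.
# Costs come from the summary above; DALYs gained per patient is a
# hypothetical value chosen to reproduce the reported ~US$242/DALY.
health_system_cost = 3355   # US$ per patient (from the summary)
patient_cost = 837          # US$ per patient (from the summary)
dalys_gained = 17.3         # per patient, hypothetical

cost_per_daly = (health_system_cost + patient_cost) / dalys_gained
print(f"US${cost_per_daly:.0f} per DALY gained")  # -> US$242
```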
WHAT YOU SHOULD KNOW:

- A liver abscess is a collection of pus in the liver caused by bacteria, fungi, or parasites. It may occur as a single lesion or as multiple lesions of different sizes. It is commonly caused by an infection with bacteria (germs) or ameba (a parasite that causes diarrhea). A bacterial liver abscess often happens after an abdominal (stomach) infection, such as an infection of the bile ducts caused by gallstones, an infection in the intestines, or appendicitis. Pain in the right upper part of your abdomen, fever, and night sweats are common signs and symptoms. You may also have loss of appetite, nausea (upset stomach), vomiting (throwing up), or unplanned weight loss. Sometimes cough, trouble breathing, or yellowing of the skin and whites of the eyes may be present.
- A liver abscess may be diagnosed by blood tests or by imaging tests that take pictures of your abdomen. These may include x-rays, a liver scan, ultrasound, computerized tomography (CT) scan, and magnetic resonance imaging (MRI). Treatment will depend on the cause, size, and location of your liver abscess, and on whether you have a single abscess or multiple abscesses. Medicines may be given to kill the infection and ease your symptoms. Caregivers may drain the abscess or do surgery to remove the pus in your liver. With treatment and care, your abscess may be cured and serious problems may be prevented.

AFTER YOU LEAVE:

Take your medicine as directed. Call your primary healthcare provider if you think your medicine is not helping or if you have side effects. Tell him if you are allergic to any medicine. Keep a list of the medicines, vitamins, and herbs you take. Include the amounts, and when and why you take them. Bring the list or the pill bottles to follow-up visits. Carry your medicine list with you in case of an emergency.

Ask for information about where and when to go for follow-up visits. For continuing care, treatments, or home services, ask for more information.

- Do not drink alcohol: Some people should not drink alcohol. These people include those with certain medical conditions or who take medicine that interacts with alcohol. Alcohol includes beer, wine, and liquor. Tell your caregiver if you drink alcohol. Ask him to help you stop drinking.
- Eat a variety of healthy foods: This may help you have more energy and heal faster. Healthy foods include fruit, vegetables, whole-grain breads, low-fat dairy products, beans, lean meat, and fish. Ask if you need to be on a special diet.
- Drink liquids as directed: Adults should drink between 9 and 13 eight-ounce cups of liquid every day. Ask what amount is best for you. For most people, good liquids to drink are water, juice, and milk.
- Get plenty of exercise: Talk to your caregiver about the best exercise plan for you. Exercise can decrease your blood pressure and improve your health.
- Do not smoke: If you smoke, it is never too late to quit. You are more likely to have heart disease, lung disease, cancer, and other health problems if you smoke. Quitting smoking will improve your health and the health of those around you. If you smoke, ask for information about how to stop.
- Manage stress: Stress may slow healing and cause illness. Learn new ways to relax, such as deep breathing.

For more information: A liver abscess may be a life-changing disease for you and your family. Accepting that you have a liver abscess may be hard. You and those close to you may feel angry, sad, or frightened. These feelings are normal.
Talk to your caregivers, family, or friends about your feelings. Contact the following for more information:

- American College of Gastroenterology
  6400 Goldsboro Rd., Ste 450
  Bethesda, MD 20817
  Phone: 1-301-263-9000
  Web Address: http://www.gi.org
- American Liver Foundation
  39 Broadway, Suite 2700
  New York, NY 10006
  Phone: 1-212-668-1000
  Phone: 1-800-465-4837
  Web Address: http://www.liverfoundation.org
- National Digestive Diseases Information Clearinghouse (NDDIC)
  2 Information Way
  Bethesda, MD 20892-3570
  Phone: 1-800-891-5389
  Web Address: http://www.digestive.niddk.nih.gov

CONTACT A CAREGIVER IF:

- You have a fever.
- You have a cough, red or swollen skin, or feel weak and achy.
- Your skin has a rash.
- Your wound is tender, swollen, or has pus coming from it.
- You have questions or concerns about your liver abscess, medicine, or care.

SEEK CARE IMMEDIATELY IF:

- You have vomiting or seizures (convulsions).
- You have pain in your abdomen (stomach) or it feels fuller than normal.
- You have trouble breathing all of a sudden.