<urn:uuid:2a092e3f-e0c2-42f6-8a53-3afb1ad6ae90>
seed
Volume 13, Number 1—January 2007
Death Rates from Malaria Epidemics, Burundi and Ethiopia

Death rates exceeded emergency thresholds at 4 sites during epidemics of Plasmodium falciparum malaria in Burundi (2000–2001) and in Ethiopia (2003–2004). Deaths likely from malaria ranged from 1,000 to 8,900, depending on site, and accounted for 52% to 78% of total deaths. Earlier detection of malaria and better case management are needed.

Plasmodium falciparum malaria epidemics are poorly documented, partly because they occur in remote, underresourced areas where proper data collection is difficult. Although the public health problems from these epidemics are well recognized (1,2), quantitative evidence of their effect on death rates is scarce (3). Hospital-based death data, when available, provide a grossly incomplete picture because most malaria patients do not seek healthcare and, thus, these cases are not reported (4). Current estimates (2) therefore rely on extrapolations of limited site-specific or empirical observations. Accurate information is needed not only to improve our knowledge of malaria epidemics, but also to assess progress of malaria control initiatives that aim to decrease deaths from malaria worldwide by 50% by 2010 (5). We report community-based death rates from 2 P. falciparum malaria epidemics (Burundi, 2000–2001; Ethiopia, 2003–2004) in which Médecins Sans Frontières intervened. Detailed information about these epidemics, their determinants, and their evolution is provided elsewhere (6).

Briefly, the inhabitants of the Kayanza, Karuzi, and Ngozi provinces (population 1,415,900) of Burundi, which borders Rwanda, live in small farming villages, most at an altitude >1,500 m. Before the 2000–2001 epidemic, these areas were considered to have low malaria transmission. Rapid surveys of febrile outpatients confirmed the epidemic (>75% had P. falciparum infections; Médecins Sans Frontières, unpub. data). For all 3 provinces, 1,488,519 malaria cases were reported (attack rate 109.0%). Figure 1 shows the number of cases each month. In Kayanza, 462,454 cases were reported from September 2000 through May 2001 (attack rate 95.9%, average cases/month 51,383) (7); case counts peaked in January. In Karuzi, 625,751 cases were reported from October 2000 through March 2001 (attack rate 202.8%, average cases/month 104,292); case counts peaked in December (7). Ngozi reported 400,314 malaria cases from October 2000 through April 2001 (attack rate 67.7%, average cases/month 57,187); case counts peaked in November (7).

Damot Gale district (286,600 inhabitants, altitude 1,600–2,100 m), considered a low-transmission area, is located in Wolayita Zone, Southern Nations Nationalities and Peoples Region, central Ethiopia. The malaria epidemic was confirmed locally by a sharp increase in P. falciparum–positive results among children treated in Médecins Sans Frontières feeding centers; the increase started in July 2003 (6). Reported caseload decreased in August and September, probably because of drug shortages and subsequent untreated and unreported patients; caseload rose sharply in October, November, and December (Figure 2). During these 3 months in 2003, 10,308 cases were reported by the 8 district health facilities (attack rate 3.6%, average no. cases/month 3,436), more than 10-fold the corresponding total in 2002 (n = 744) (Médecins Sans Frontières, unpub. data).

During the epidemics, a retrospective survey of deaths was conducted at each site.
Surveys were approved by local authorities, and respondents gave oral consent. Thirty clusters of 30 households were selected by using 2- or 3-stage sampling (8). Households were defined as groups of persons who slept under the same roof under 1 family head at the time of the survey; occasional visitors were excluded. Selection within each cluster followed a standard rule of proximity (9). Information collected included the number, age, and sex of persons living in the household; the number of deaths (age, sex, and date of death) since the beginning of the recall period; and the cause of death. Malaria was defined as the probable cause if a decedent’s household reported “presence of fever” (Burundi) or “fever and shivering without severe diarrhea or severe respiratory infection” (Ethiopia). Recall periods were defined by easily recognizable starting dates (Table 1). Data were analyzed by using Epi Info (Centers for Disease Control and Prevention, Atlanta, GA, USA). Death rates were expressed as deaths/10,000 persons/day, and 95% confidence intervals (CIs) were adjusted for design effects. Mortality rates were compared with standard emergency thresholds of 1 death/10,000/day (crude mortality rate [CMR]) and 2 deaths/10,000/day (under-5 mortality rate [U5MR]) (10). The excess number of deaths probably due to malaria was estimated by applying the specific death rates due to self-reported malaria to the population and time period covered by each survey.

CMR and U5MR exceeded their respective emergency thresholds (Table 1). In the total population, the proportion of deaths probably due to malaria varied from 51.7% (Karuzi) to 78.3% (Kayanza); among children <5 years of age, it varied from 53.0% (Ngozi) to 64.3% (Kayanza) (Table 1). Deaths probably due to malaria ranged from 1,000 in Kayanza to 8,900 in Ngozi; >50% were among children <5 years (Table 2). Estimates reflect only portions of the epidemic periods (Table 2). When surveys covered most of the epidemic duration (74% in Ngozi, 85% in Karuzi, 83% in Damot Gale), malaria was the probable cause of death for a comparable proportion of the population (1.5% [8,900/574,400] in Ngozi, 0.9% [2,800/308,400] in Karuzi, and 1.9% [5,400/286,600] in Damot Gale).

We provide novel data based on representative population sampling, rather than health facility–based reporting. P. falciparum epidemics seem responsible for high death rates: the estimated number of deaths probably due to malaria at our sites (≈18,000) represents about 10% of the worldwide total estimated annual deaths due to epidemic malaria (2). The limitations of retrospective mortality surveys are well known (11); hence, results should be interpreted with caution. Reporting bias was minimized by defining a limited recall period and by training interviewers extensively. In Kayanza, the survey was conducted before the epidemic peak, so the estimated death rate may understate the average for the entire epidemic and thus the true death rate. Generally, postmortem diagnosis of malaria at the household level is difficult, and even advanced verbal autopsy techniques (not used in these surveys because of a lack of skilled human resources) are of limited accuracy (12). Decedents’ next of kin may underreport or overreport certain signs and symptoms. Malaria deaths may thus have been overestimated, particularly in Burundi, where fever was the sole criterion of probable malaria; use of this single criterion may have masked other causes, such as acute respiratory infection.
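The rate and excess-death arithmetic used in these surveys, including the ≥5-year mortality rate derivation quoted in the discussion below, is simple to reproduce. The following Python sketch illustrates the calculations with placeholder inputs; the numeric values are illustrative assumptions, not figures from the actual surveys.

```python
# Illustrative sketch of the survey calculations described in the text.
# All numeric inputs below are placeholders, not data from the Burundi or Ethiopia surveys.

def death_rate_per_10000_day(deaths, population, period_days):
    """Death rate expressed as deaths/10,000 persons/day."""
    return deaths / (population * period_days) * 10_000

def excess_malaria_deaths(malaria_rate_per_10000_day, population, period_days):
    """Apply a survey-derived malaria-specific death rate to the population
    and time period covered by the survey (the approach used in the article)."""
    return malaria_rate_per_10000_day / 10_000 * population * period_days

def over5_mortality_rate(cmr, u5mr, prop_under5):
    """Mortality rate among persons >=5 years of age:
    (CMR - U5MR * p) / (1 - p), where p is the proportion of children <5 in the sample."""
    return (cmr - u5mr * prop_under5) / (1 - prop_under5)

# Hypothetical example: 300,000 people followed for 120 days, 504 malaria deaths reported.
malaria_rate = death_rate_per_10000_day(deaths=504, population=300_000, period_days=120)
print(round(malaria_rate, 2))                                               # 0.14 deaths/10,000/day
print(round(excess_malaria_deaths(0.8, 300_000, 120)))                      # ~2,880 excess deaths
print(round(over5_mortality_rate(cmr=1.4, u5mr=3.0, prop_under5=0.18), 2))  # ~1.05 deaths/10,000/day
```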
Furthermore, in 3 of the areas surveyed (Kayanza excepted), the epidemics occurred concurrently with nutritional crises. Malnutrition as a cause of death could not be assessed because of its implication in various infectious diseases, but a high prevalence of malnutrition is usually associated with excess U5MR (13). Nevertheless, mortality rates among persons ≥5 years of age ([CMR – U5MR × proportion of children <5 years in survey sample]/[1 – proportion of children <5 years in survey sample]) were also elevated. Rates ranged from 0.5 deaths/10,000/day in Kayanza to 1.7 deaths/10,000/day in Damot Gale, higher than the expected rate of 0.27 deaths/10,000/day in sub-Saharan Africa (14). In the absence of other specific causes of acute death for adults, we speculate that malaria was largely responsible for these excess deaths.

At all sites, early warning systems were not operational and surveillance was ineffective, which led to substantial delays in epidemic detection (6). First-line treatment regimens (chloroquine in Burundi, sulfadoxine/pyrimethamine in Ethiopia) were not very effective. In Damot Gale, access to treatment was poor (data not shown), probably because of the dearth of health facilities. All these factors may have exacerbated the epidemics and contributed to excessive death rates. Early diagnosis and prompt treatment of malaria remain cornerstones of the global malaria control strategy (15). The degree to which these interventions are made available will largely determine death rates in future epidemics.

Dr Guthmann is a physician and senior epidemiologist who has worked at Epicentre since January 2000. Although his main interest is the epidemiology of malaria, he has also conducted research on other topics such as leishmaniasis and measles.

We are grateful to Médecins Sans Frontières personnel at headquarters and field staff who actively contributed to the studies. Each survey was supervised by an Epicentre epidemiologist. The work was done in collaboration with National Ministries of Health, which authorized inspection of records and provided the necessary information when appropriate. All surveys, as well as this review, were financed by Médecins Sans Frontières.

- Najera JA. Prevention and control of malaria epidemics. Parassitologia. 1999;41:339–47.
- Worrall E, Rietveld A, Delacollette C. The burden of malaria epidemics and cost-effectiveness of interventions in epidemic situations in Africa. Am J Trop Med Hyg. 2004;71(Suppl):136–40.
- Snow RW, Craig M, Deichmann U, Marsh K. Estimating mortality, morbidity and disability due to malaria among Africa’s non-pregnant population. Bull World Health Organ. 1999;77:624–40.
- Malakooti MA, Biomndo K, Shanks GD. Reemergence of epidemic malaria in the highlands of western Kenya. Emerg Infect Dis. 1998;4:671–6.
- Nabarro DN, Tayler EM. The Roll Back Malaria campaign. Science. 1998;280:2067–8.
- Checchi F, Cox J, Balkan S, Tamrat A, Priotto G, Alberti KP, et al. Malaria epidemics and interventions, Kenya, Burundi, southern Sudan, and Ethiopia, 1999–2004. Emerg Infect Dis. 2006;12:1477–85.
- Legros D, Dantoine F. Epidémie de paludisme du Burundi, Septembre 2000–Mai 2001. Paris: Epicentre; 2001.
- Henderson RH, Sundaresan T. Cluster sampling to assess immunisation coverage: a review of experience with simplified sampling method. Bull World Health Organ. 1982;60:253–60.
- Grein T, Checchi F, Escriba JM, Tamrat A, Karunakara U, Stokes C, et al. Mortality among displaced former UNITA members and their families in Angola: a retrospective cluster survey. BMJ. 2003;327:650–4.
- Salama P, Spiegel P, Talley L, Waldman R. Lessons learned from complex emergencies over past decade. Lancet. 2004;364:1801–13.
- Checchi F, Roberts L. Interpreting and using mortality data in humanitarian emergencies: a primer for non-epidemiologists. HPN Network Paper 52. London: Overseas Development Institute; 2005.
- Snow RW, Armstrong JR, Forster D, Winstanley MT, Marsh VM, Newton CR, et al. Childhood deaths in Africa: uses and limitations of verbal autopsies. Lancet. 1992;340:351–5.
- Standardized Monitoring and Assessment of Relief and Transitions. SMART Methodology, version 1, April 2006 [cited 2006 16 Nov]. Available from http://www.smartindicators.org/
- The Sphere Project. Sphere handbook (revised 2004) [cited 2006 16 Nov]. Available from http://www.sphereproject.org
- World Health Organization. Implementation of the global malaria control strategy. Geneva: The Organization; 1993. WHO Technical Report Series 839.

Suggested citation for this article: Guthmann J-P, Bonnet M, Ahoua L, Dantoine F, Balkan S, van Herp M, et al. Death rates from malaria epidemics, Burundi and Ethiopia. Emerg Infect Dis [serial on the Internet]. 2007 Jan [date cited]. Available from http://wwwnc.cdc.gov/eid/article/13/1/06-0546.htm
<urn:uuid:0ec6d86a-0aa9-4564-b734-830ccceffd7d>
seed
Volume 14, Number 6—June 2008
Persistence of Yersinia pestis in Soil Under Natural Conditions

As part of a fatal human plague case investigation, we showed that the plague bacterium, Yersinia pestis, can survive for at least 24 days in contaminated soil under natural conditions. These results have implications for defining plague foci, persistence, transmission, and bioremediation after a natural or intentional exposure to Y. pestis.

Plague is a rare, but highly virulent, zoonotic disease characterized by quiescent and epizootic periods (1). Although the etiologic agent, Yersinia pestis, can be transmitted through direct contact with an infectious source or inhalation of infectious respiratory droplets, flea-borne transmission is the most common mechanism of exposure (1). Most human cases are believed to occur during epizootic periods when highly susceptible hosts die in large numbers and their fleas are forced to parasitize hosts upon which they would not ordinarily feed, including humans (2). Despite over a century of research, we lack a clear understanding of how Y. pestis is able to rapidly disseminate in host populations during epizootics or how it persists during interepizootic periods (2–6). What limits the geographic distribution of the organism is also unclear. For example, why is the plague bacterium endemic west of the 100th meridian in the United States, but not in eastern states despite several known introductions (7)? Persistence of Y. pestis in soil has been suggested as a possible mechanism of interepizootic persistence, epizootic spread, and as a factor defining plague foci (2,3,5,7,8). Although Y. pestis recently evolved from an enteric bacterium, Y. pseudotuberculosis, that can survive for long periods in soil and water, studies have shown that selection for vector-borne transmission has resulted in the loss of many of these survival mechanisms. This suggests that long-term persistence outside of the host or vector is unlikely (9–11). Previous studies have demonstrated survival of Y. pestis in soil under artificial conditions (2,3,12–14). However, survival of Y. pestis in soil under natural exposure conditions has not been examined in North America.

As part of an environmental investigation of a fatal human plague case in Grand Canyon National Park, Arizona, in 2007, we tested the viability of Y. pestis in naturally contaminated soil. The case-patient, a wildlife biologist, was infected through direct contact with a mountain lion carcass, which was subsequently confirmed to be positive for Y. pestis based on direct fluorescent antibody (DFA) testing (which targets the Y. pestis–specific F1 antigen), culture isolation, and lysis with a Y. pestis temperature-specific bacteriophage (15). The animal was wearing a radio collar, and we determined the date of its death (October 26, 2007) on the basis of its lack of movement. The case-patient had recorded the location at which he encountered the carcass and had taken photographs of the remains, which showed a large pool of blood in the soil under the animal’s mouth and nose. During our field investigation, ≈3 weeks after the mountain lion’s death, we used global positioning satellite coordinates and photographs to identify the exact location of the blood-contaminated soil. We collected ≈200 mL of soil from this location at depths of up to ≈15 cm from the surface. After collection, the soil was shipped for analysis to the Bacterial Diseases Branch of the Centers for Disease Control and Prevention in Fort Collins, Colorado.
Four soil samples of ≈5 mL each were suspended in a total volume of 20 mL of sterile physiologic saline (0.85% NaCl). Samples were vortexed briefly and allowed to settle for ≈2 min before aliquots of 0.5 mL were drawn into individual syringes and injected subcutaneously into 4 Swiss-Webster strain mice (ACUC Protocol 00–06–018-MUS). Within 12 hours of inoculation, 1 mouse became moribund, and liver and spleen samples were cultured on cefsulodin-Irgasan-novobiocin agar. Colonies consistent with Y. pestis morphology were subcultured on sheep blood agar. A DFA test of this isolate was positive, demonstrating the presence of F1 antigen, which is unique to Y. pestis. The isolate was confirmed as Y. pestis by lysis with a Y. pestis temperature–specific bacteriophage (15). Additionally, the isolate was urease negative. Biotyping (glycerol fermentation and nitrate reduction) of the soil and mountain lion isolates indicated biovar orientalis. Of the 3 remaining mice, 1 became moribund after 7 days and was euthanized; 2 did not become moribund and were euthanized 21 days postexposure. Culture of the necropsied tissues yielded no additional isolates of Y. pestis. Pulsed-field gel electrophoresis (PFGE) typing with AscI was performed with the soil isolate, the isolate recovered from the mountain lion, and the isolate obtained from the case-patient (16). The PFGE patterns were indistinguishable, showing that the Y. pestis in the soil originated through contamination by this animal (Figure). Although direct plating of the soil followed by quantification of CFU would have been useful for assessing the abundance of Y. pestis in the soil, this was not possible because numerous contaminants were present in the soil. It is unclear by what mechanism Y. pestis was able to persist in the soil. Perhaps the infected animal’s blood created a nutrient-enriched environment in which the bacteria could survive. Alternatively, adherence to soil invertebrates may have prolonged bacterial viability (17). The contamination occurred within a protected rock outcrop that had limited exposure to UV light and during late October, when ambient temperatures were low. These microclimatic conditions, which are similar to those of burrows used by epizootic hosts such as prairie dogs, could have contributed to survival of the bacteria. These results are preliminary and do not address 1) the maximum time that plague bacteria can persist in soil under natural conditions, 2) possible mechanisms by which the bacteria are able to persist in the soil, or 3) whether the contaminated soil is infectious to susceptible hosts that might come into contact with the soil. Answers to these questions might shed light on the intriguing, long-standing mysteries of how Y. pestis persists during interepizootic periods and whether soil type could limit its geographic distribution. From a public health or bioterrorism preparedness perspective, answers to these questions are necessary for evidence-based recommendations on bioremediation after natural or intentional contamination of soil by Y. pestis. Previous studies evaluating viability of Y. pestis on manufactured surfaces (e.g., steel, glass) have shown that survival is typically <72 hours (18). Our data emphasize the need to reevaluate the duration of persistence in soil and other natural media. Dr Eisen is a service fellow in the Division of Vector-Borne Infectious Diseases, Centers for Disease Control and Prevention, Fort Collins. Her primary interest is in the ecology of vector-borne diseases. 
We thank L. Chalcraft, A. Janusz, R. Palarino, S. Urich, and J. Young for technical and logistic support.

- Barnes AM. Conference proceedings: surveillance and control of bubonic plague in the United States. Symposium of the Zoological Society of London. 1982;50:237–70.
- Gage KL, Kosoy MY. Natural history of plague: perspectives from more than a century of research. Annu Rev Entomol. 2005;50:505–28.
- Drancourt M, Houhamdi L, Raoult D. Yersinia pestis as a telluric, human ectoparasite-borne organism. Lancet Infect Dis. 2006;6:234–41.
- Eisen RJ, Bearden SW, Wilder AP, Montenieri JA, Antolin MF, Gage KL. Early-phase transmission of Yersinia pestis by unblocked fleas as a mechanism explaining rapidly spreading plague epizootics. Proc Natl Acad Sci U S A. 2006;103:15380–5.
- Webb CT, Brooks CP, Gage KL, Antolin MF. Classic flea-borne transmission does not drive plague epizootics in prairie dogs. Proc Natl Acad Sci U S A. 2006;103:6236–41.
- Cherchenko II, Dyatlov AI. Broader investigation into the external environment of the specific antigen of the infectious agent in epizootiological observation and study of the structure of natural foci of plague. J Hyg Epidemiol Microbiol Immunol. 1976;20:221–8.
- Pollitzer R. Plague. World Health Organization Monograph Series No. 22. Geneva: The Organization; 1954.
- Bazanova LP, Maevskii MP, Khabarov AV. An experimental study of the possibility for the preservation of the causative agent of plague in the nest substrate of the long-tailed suslik. Med Parazitol (Mosk). 1997.
- Achtman M, Zurth K, Morelli G, Torrea G, Guiyoule A, Carniel E. Yersinia pestis, the cause of plague, is a recently emerged clone of Yersinia pseudotuberculosis. Proc Natl Acad Sci U S A. 1999;96:14043–8.
- Brubaker RR. Factors promoting acute and chronic diseases caused by yersiniae. Clin Microbiol Rev. 1991;4:309–24.
- Perry RD, Fetherston JD. Yersinia pestis—etiologic agent of plague. Clin Microbiol Rev. 1997;10:35–66.
- Baltazard M, Karimi Y, Eftekhari M, Chamsa M, Mollaret HH. La conservation interepizootique de la peste en foyer invetere hypotheses de travail. Bull Soc Pathol Exot. 1963;56:1230–41.
- Mollaret H. Conservation du bacille de la peste durant 28 mois en terrier artificiel: demonstration experimentale de la conservation interepizootique de la peste dans ses foyers inveteres. CR Acad Sci Paris. 1968;267:972–3.
- Mollaret HH. Experimental preservation of plague in soil [in French]. Bull Soc Pathol Exot Filiales. 1963;56:1168–82.
- Chu MC. Laboratory manual of plague diagnostics. Geneva: US Centers for Disease Control and Prevention and World Health Organization; 2000.
- Centers for Disease Control and Prevention. Imported plague—New York City, 2002. MMWR Morb Mortal Wkly Rep. 2003;52:725–8.
- Darby C, Hsu JW, Ghori N, Falkow S. Caenorhabditis elegans: plague bacteria biofilm blocks food intake. Nature. 2002;417:243–4.
- Rose LJ, Donlan R, Banerjee SN, Arduino MJ. Survival of Yersinia pestis on environmental surfaces. Appl Environ Microbiol. 2003;69:2166–71.

Suggested citation for this article: Eisen RJ, Petersen JM, Higgins MS, Wong D, Levy CE, Mead PS, et al. Persistence of Yersinia pestis in soil under natural conditions. Emerg Infect Dis [serial on the Internet]. 2008 Jun [date cited]. Available from http://wwwnc.cdc.gov/eid/article/14/6/08-0029
<urn:uuid:fc748d0e-4d8d-49db-b3fd-5cf3a227561e>
seed
Volume 6, Number 6—December 2000 International Editor's Update For 20 years after the end of World War II, infectious diseases were endemic throughout Japan, which served during this first postwar phase almost as a museum of communicable diseases. Improvements in socioeconomic conditions, infrastructure (especially water and sewerage systems), and nutrition, brought about a rapid reduction in rates of acute enteric bacterial and parasitic infections. The development and clinical application of antibiotics also contributed to this decrease. During the second postwar period (1965-1985), further advancement in the use of antibiotics led to control of acute enteric bacterial diseases. However, medical advances such as cancer chemotherapy and organ transplantation, along with an increasing elderly population, created a large immunocompromised population and widespread opportunistic infections. The development of new antibiotics was followed by the emergence of pathogens resistant to drugs. Since 1975, chemicals used in agriculture have been reevaluated to exclude toxic substances; however, decreased use of chemicals in agriculture has led to the reappearance or emergence of ticks and the rickettsia they transmit. In the third postwar period (1985-present), increased international travel has led to an increase in imported infectious diseases. Travelers returning from other Asian countries and other continents have become ill with foodborne and insect-borne infections, including shigellosis, cholera, and typhoid fever; several thousand cases are reported each year. In addition, contaminated imported foods have been responsible for sporadic illnesses or small outbreaks. Misuse or overuse of antibiotics has led to the emergence of methicillin-resistant Staphylococcus aureus, penicillin-resistant Streptococcus pneumoniae, fluoroquinolone-resistant Pseudomonas aeruginosa, and vancomycin-resistant enterococci. All hospitals in Japan must now be alert to nosocomial infections caused by these drug-resistant pathogens. The most important public health problems in modern Japan are massive outbreaks of acute enteric bacterial diseases. These outbreaks are caused by foods prepared commercially on a large scale for school lunches and chain stores. Contamination in a single aspect of preparation has resulted in large single-source foodborne outbreaks. More than 20,000 cases of infections caused by vibrios, Staphylococcus, pathogenic Escherichia coli, and Campylobacter have been reported in the past 5 years. Concerning viral diseases, immunization programs against measles, rubella, and mumps have been mounted, in addition to the successful campaign against polio in the mid-1970s. However, except for polio, the coverage rate for individual vaccines is lower than rates in the United States and Europe, and vaccine-preventable viral illnesses remain at unsatisfactory levels. Viral diarrheal enteritis transmitted through foods such as oysters has also been increasing. Trends in infectious diseases have changed rapidly in Japan during the past 50 years. Three reports are included in this issue that update the status of tuberculosis, flavivirus infection, and antibiotic resistance in Japan. Suggested citation: Kurata T. International Editor's Update. Emerg Infect Dis [serial on the Internet]. 2000, Dec [date cited]. 
Available from http://wwwnc.cdc.gov/eid/article/6/6/00-0601
<urn:uuid:efc8dc1b-c028-4dd0-aa44-9aaf7aa11624>
seed
August 22, 2014 - Ebola is the cause of a viral hemorrhagic fever disease. - Currently, there are no FDA-approved vaccines or drugs to prevent or treat Ebola. - Ebola does not pose a significant risk to the U.S. public. - Treatment: CDC recommends supportive therapy for patients as the primary treatment for Ebola. This includes balancing the patient’s fluids and electrolytes, maintaining their oxygen status and blood pressure and treating them for any complicating infections. - Investigational Products: While there are experimental Ebola vaccines and treatments under development, these investigational products are in the earliest stages of product development and have not yet been fully tested for safety or effectiveness. Small amounts of some of these experimental products have been manufactured for testing. Thus, very few courses of these experimental products are available for clinical use. The FDA hopes that these investigational products will one day serve to improve outcomes for Ebola patients. However, we expect that most, if not all, of the products in development will require administration in a carefully monitored healthcare setting, in addition to supportive care and rigorous infection control. - Fraudulent Products: Unfortunately, during outbreak situations, fraudulent products claiming to prevent, treat or cure a disease almost always appear. The FDA monitors for fraudulent products and false product claims related to the Ebola virus and takes appropriate action to protect consumers. Consumers who have seen these fraudulent products or false claims are encouraged to report them to the FDA. Information from FDA - August 20, 2014 – Responding to Ebola: The View From the FDA – As part of FDA’s expert commentary and interview series, Medscape spoke with FDA Acting Deputy Chief Scientist and Assistant Commissioner for Counterterrorism Policy Luciana Borio, MD, about the issue of compassionate use and FDA efforts to respond to the Ebola outbreak. - August 14, 2014 – FDA statement: FDA is advising consumers to be aware of products sold online claiming to prevent or treat the Ebola virus. Since the outbreak of the Ebola virus in West Africa, the FDA has seen and received consumer complaints about a variety of products claiming to either prevent the Ebola virus or treat the infection. - August 5, 2014 – FDA authorized the use of a diagnostic test developed by the U.S. Department of Defense (DoD) to detect the Ebola Zaire virus in laboratories designated by the DoD to help facilitate effective response to the ongoing Ebola outbreak in West Africa. The test is designed for use in individuals, including DoD personnel and responders, who may be at risk of infection as a result of the outbreak. Specifically, the test is intended for use in individuals with signs and symptoms of infection with Ebola Zaire virus, who are at risk for exposure to the virus or who may have been exposed to the virus. 
(See also: August 12, 2014 Federal Register notice from HHS: Declaration Regarding Emergency Use of In Vitro Diagnostics for Detection of Ebola Virus)

Frequently Requested Links
- Ebola Hemorrhagic Fever information from CDC (includes information on the outbreak, symptoms, transmission, prevention, diagnosis, and treatment)
- HHS FAQ: Ebola Experimental Treatments and Vaccines (August 8, 2014)
- Access to Investigational Drugs Outside of a Clinical Trial (Expanded Access, sometimes called “compassionate use”)
- About Emergency Use Authorization
- The FDA’s Drug Review Process: Ensuring Drugs Are Safe and Effective

- The FDA’s role during situations like this involves sharing information about medical products in development as well as communicating our assessment of product readiness and clarifying regulatory pathways for development.
- The FDA works with U.S. government agencies that fund medical product development, international partners and companies to help speed the development of medical products that could potentially be used to mitigate the Ebola outbreak. For example, the FDA is involved in an inter-agency working group led by the Assistant Secretary for Preparedness and Response (ASPR) / Biomedical Advanced Research and Development Authority (BARDA) to facilitate and accelerate development of potential investigational treatments for Ebola.
- The FDA also works directly with medical product sponsors to clarify regulatory and data requirements necessary to move products forward in development as quickly as possible. While the FDA cannot comment on the development of specific medical products, it’s important to note that every FDA regulatory decision is based on a risk-benefit assessment of scientific data that includes the context of use for the product and the patient population being studied.
- Under the FDA’s Emergency Use Authorization (EUA) mechanism, the agency can enable the use of an unapproved medical product, or the unapproved use of an approved medical product during emergencies, when, among other circumstances, there are no adequate, approved and available alternatives. An EUA is an important mechanism that allows broader access to available medical products.
- Under certain circumstances, the FDA can also enable access for individuals to investigational products through mechanisms outside of a clinical trial, such as through an emergency Investigational New Drug (EIND) application under the FDA’s Expanded Access program. In order for an experimental treatment to be administered in the United States, a request must be submitted to and authorized by the FDA. The FDA stands ready to work with companies and investigators treating Ebola patients who are in dire need of treatment to enable access to an experimental product where appropriate.
- Unfortunately, during outbreak situations, fraudulent products claiming to prevent, treat or cure a disease almost always appear. The FDA monitors for fraudulent products and false product claims related to the Ebola virus and takes appropriate action to protect consumers.

Related: August 14, 2014 statement
<urn:uuid:a8250f48-9ab2-4a93-a264-a0f9a1b417c1>
seed
On measures of executive functioning, processing speed, verbal fluency, and verbal memory, individuals diagnosed with depression or bipolar disorder were found to perform worse than control subjects (Neurology Reviews, April 2010).

In a study of 1,880 elderly people living in Northern Manhattan, New York, individuals who were physically active and followed the Mediterranean diet showed a sixty percent lower risk for Alzheimer’s dementia during a five-year period; consuming a diet high in fruits, vegetables, legumes, cereal, and fish was found to have a positive impact. The independent benefits of diet and remaining physically active were still present after adjustments for age, gender, ethnicity, genetic risk factors, caloric intake, body mass index, other diseases, smoking, depression, and cognitive and social activities. The diet pattern, while not fully explaining the better health of individuals who adhere to it, likely has some type of positive impact in combination with other favorable factors (Neurology Today, September).

The proportion of young stroke patients, those younger than 45 years, is going up significantly according to a population study based upon more than one million people over a 12-year period. The average age at the time of stroke dropped from 71.3 years in 1993–1994 to 70.9 years in 1999 and to 68.4 years in 2005. Over the same time period, the percentage of stroke patients younger than 45 years rose from 4.5 percent in 1993/1994 to 5.5 percent in 1999 and 7.3 percent in 2005. Risk factors were identified: among those age 20 to 44 years, diabetes and coronary heart disease significantly increased between 1995 and 2005, with similar although not significant increases in hypertension and high cholesterol (Cerebrovascular & Critical Care, March 2010).

In the US, 1.5 to 2.0 million civilians sustain a traumatic brain injury each year. The use of progesterone demonstrated a fifty percent reduction in mortality in treated patients compared to placebo. Functional outcomes were improved and disability was reduced in patients suffering from moderate traumatic brain injury. The earlier patients received the drug, the better the outcome, the aim being to prevent the brain from swelling immediately after the injury and to interrupt the cascade of injury that occurs after that time (Neurosurgery and Trauma, March).

Anticonvulsant drugs used to treat seizures, bipolar disorder, mania, neuralgia, migraine, and neuropathic pain, and often used off label, carry an increased risk of suicidal ideation and behaviors. Risk was higher for younger and older individuals (Neurology Today, June 3, 2010).

Exposure to the degreasing solvent TCE (trichloroethylene) has been significantly associated with an increased risk of Parkinson’s disease (PD). Men exposed to this substance had more than six times the rate of PD compared with their twins who did not have this exposure (Neurology Today, June 5, 2010).

There is a potential role for smoking as an inducing factor in thrombus formation. The median age at stroke presentation for the smoking population was 65.5 years, compared with 68 years for ex-smokers and 67.6 years for non-smokers. The median age at TIA presentation was 56.7 years for smokers, 72.2 for ex-smokers, and 69.1 for non-smokers. Ex-smokers had higher rates of hypertension and dyslipidemia than current or non-smokers (Neurology Review, April 2010).

Longer sleep duration, when obtained on a habitual basis, was associated with better performance on intellectual measures of perceptual reasoning and overall IQ. There were no significant associations found for working memory or processing speed IQ factors (Gruber et al., 2010, Sleep Medicine, 11).
As a result of CBT, patients exhibited significant decreases in the time it took to fall asleep, decreased wake periods after sleep onset, a decreased number of awakenings, and increased sleep efficiency (the amount of time spent sleeping after initial sleep onset). Significant improvements were seen using behavioral treatment for insomnia in a pain population, resulting in improvement in pain or less interference of pain with daily functioning (Jungquist et al., 2010, Sleep Medicine, 11).

Sleep problems of children between the ages of five and ten years were assessed by their parents. Reduced sleep as reported by the parents was found to be predictive of more delinquent behavior and concentration problems in their children. When parents reported that children were awake after initially falling asleep, this was also predictive of more pronounced daytime sleepiness. Greater daytime sleepiness was seen as related to the presence of social problems in the children. Consequently, two factors were seen as affecting the daytime behavior of children: the total sleep time as well as the amount of time spent awake after initially falling asleep (Velten-Schurian et al., 2010, Sleep Medicine, 11).
<urn:uuid:d116c0e7-471c-4f69-bc05-781761f2bcf4>
seed
Superclasses: Biosynthesis → Secondary Metabolites Biosynthesis → Phenylpropanoid Derivatives Biosynthesis → Coumarins Biosynthesis
Some taxa known to possess this pathway include: Melilotus albus
Expected Taxonomic Range: Magnoliophyta

A widespread group of phenolics in plants termed coumarins constitute lactones of phenylpropanoids with a 2H-benzopyran-2-one nucleus [Brown86a] [Seigler98]. At least 1,000 naturally occurring coumarins, among them about 300 simple coumarins, have been found in many families of higher plants [Berenbaum91], with an especially high number of structural variations encountered in the Apiaceae [Seigler98]. The biosynthesis of the simplest member, coumarin, is described in this pathway; coumarin is both a specific compound and the eponym of the entire compound class. Coumarin is among the most common coumarins in plants. The numerous pharmacological and physiological effects of coumarin and its more complex derivatives, such as the furanocoumarins and prenylated coumarins, have drawn significant interest from researchers across different scientific areas. Coumarins are known to exhibit anti-inflammatory as well as antioxidant activities and often serve as model compounds for synthetic drugs [Fylaktakidou04] [Curini06]. Moreover, extensive research into their pharmacological and therapeutic properties over many years has resulted in the acknowledgment of their therapeutic role in the treatment of cancer [Lacy04].

About This Pathway
In contrast to most of the coumarins, which are biosynthesized through 4-coumaric acid and umbelliferone, the formation of coumarin occurs via 2-coumaric acid [Gestetner74]. In general, phenylalanine and trans-cinnamic acid are considered the precursors of coumarin biosynthesis, but Stoker [Stoker62] also reported the formation of coumarin from cis-cinnamic acid. Although free coumarin is found in small amounts in plants, its β-glucoside is the predominant accumulating compound. The corresponding glucosyltransferase has been partially purified from, and characterized in, Melilotus albus [Kleinhofs67] [Poulton80]. Interestingly, the formed trans-2-coumarate β-D-glucoside was not accepted as a substrate for the subsequent β-glucosidase reaction. The enzyme only catalyzed the cis-isomer, i.e., coumarinic acid β-D-glucoside (also referred to as bound coumarin [Kosuge61a]), forming coumarinate [Kosuge61]. The way the isomerization occurs is not entirely resolved. While there is strong evidence that the trans-cis isomerization occurs spontaneously by means of UV light [Kleinhofs66] [Haskins64], the existence of a light-induced isomerase enzyme system has not been ruled out. Stoker [Stoker64] presented evidence for the involvement of an isomerase system in this process and found that plants kept in daylight or in the dark did not differ significantly with regard to the amount of coumarin. The last step of the pathway is the spontaneous lactonization of coumarinate, forming coumarin. The typical 'hay' smell of coumarin is only found when plants are injured. It has been established that the glucosylated coumarins accumulate in the vacuole while the β-glucosidase is located in the extraplasmatic space [Oba81].
Hence, the physical contact of the enzyme and its substrate (coumarin glucosides) only occurs after the breakup of the cell and its organelles. Coumarin itself is not a dead end product but is rather readily further metabolized [Kosuge59].

Brown86a: Brown SA (1986). "Biochemistry of plant coumarins." In: Recent advances in phytochemistry, Volume 20: The shikimic acid pathway. Conn EE (ed.), Plenum Press, New York and London, 1986, 287-316.
Fylaktakidou04: Fylaktakidou KC, Hadjipavlou-Litina DJ, Litinas KE, Nicolaides DN (2004). "Natural and synthetic coumarin derivatives with anti-inflammatory/antioxidant activities." Curr Pharm Des 10(30);3813-33. PMID: 15579073
Haskins64: Haskins FA, Williams LG, Gorz HJ (1964). "Light-Induced Trans to Cis Conversion of beta-d-Glucosyl o-Hydroxycinnamic Acid in Melilotus alba Leaves." Plant Physiol 39(5);777-781. PMID: 16656000
Kosuge61: Kosuge T, Conn EE (1961). "The metabolism of aromatic compounds in higher plants. III. The beta-glucosides of o-coumaric, coumarinic, and melilotic acids." J Biol Chem 236;1617-21. PMID: 13753452
Lacy04: Lacy A, O'Kennedy R (2004). "Studies on coumarins and coumarin-related compounds to determine their therapeutic role in the treatment of cancer." Curr Pharm Des 10(30);3797-811. PMID: 15579072
Oba81: Oba K, Conn EE, Canut H, Boudet AM (1981). "Subcellular Localization of 2-(beta-d-Glucosyloxy)-Cinnamic Acids and the Related beta-glucosidase in Leaves of Melilotus alba Desr." Plant Physiol 68(6);1359-1363. PMID: 16662108
FraissinetTache98: Fraissinet-Tachet L, Baltz R, Chong J, Kauffmann S, Fritig B, Saindrenan P (1998). "Two tobacco genes induced by infection, elicitor and salicylic acid encode glucosyltransferases acting on phenylpropanoids and benzoic acid derivatives, including salicylic acid." FEBS Lett 437(3);319-23. PMID: 9824316
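As a reading aid, the order of reactions described in the entry above can be collected into a short, ordered structure. This Python sketch simply restates the sequence given in the prose, with step labels inferred from the text; it is not an excerpt from the pathway database itself.

```python
# Coumarin biosynthesis steps as described in the text above (illustrative summary only).
coumarin_pathway = [
    ("phenylalanine", "trans-cinnamic acid", "general phenylpropanoid metabolism"),
    ("trans-cinnamic acid", "2-coumaric acid (trans-2-coumarate)", "ortho-hydroxylation"),
    ("trans-2-coumarate", "trans-2-coumarate beta-D-glucoside", "glucosyltransferase"),
    ("trans-2-coumarate beta-D-glucoside", "coumarinic acid beta-D-glucoside",
     "trans-to-cis isomerization (UV light and/or a putative isomerase)"),
    ("coumarinic acid beta-D-glucoside", "coumarinate", "beta-glucosidase (after cell damage)"),
    ("coumarinate", "coumarin", "spontaneous lactonization"),
]

for substrate, product, step in coumarin_pathway:
    print(f"{substrate} -> {product}  [{step}]")
```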
<urn:uuid:70268cda-32b6-4f85-9117-9fac4853ea3c>
seed
Controversy still exists over the exact contributions of the allantois and the ventral cloaca to urachal formation. Four anatomic urachal variants have been described, depending on the degree of urachal tubularization and the status of associated umbilical vessels. 1. Patent Urachus Failure of complete urachal lumen closure results in free communication between the bladder and the urachus, and urine leaks from the umbilicus. Lower urinary tract obstruction may also be a contributing factor. The diagnosis can be made at birth or soon thereafter, when the umbilical cord is ligated and urine drains from the umbilicus. A tumor-like protrusion from the umbilicus is frequently seen, and occasionally an umbilical hernia may also be present. A confirmation of the diagnosis can be obtained by analyzing the fluid for urea and creatinine or injecting methylene blue via a catheter into the bladder. Conversely, indigo carmine can be injected into the fistulous tract to look for a color change in the urine. A voiding cystourethrogram is important to rule out any lower tract obstruction, and it may also demonstrate the communication. Early treatment is recommended because umbilical excoriation, recurrent urinary infection, septicemia and stone formation may develop. Neither cauterization of the umbilical lumen alone nor simple ligation has yielded satisfactory results. Complete excision of the urachus and umbilicus with a cuff of bladder by an extraperitoneal approach is a standard method of treatment. In addition, if there is any lower urinary tract obstruction, this also requires treatment. 2. Urachal Cyst When the urachal lumen incompletely obliterates, there exists the potential for cystic development within this epithelial lined space. Most cysts develop in the lower third of the urachus. The cyst generally remains small and silent. Occasionally, it is felt as a midline lower abdominal mass. More often, symptoms are related to size or secondary infection. Septic cysts most commonly present in adults, but some have been reported in infants. These cysts then produce localized pain and sometimes inflammation in association with systemic symptoms. If left untreated, the abscess will often drain from the umbilicus or into the bladder. The differential diagnosis of a palpable uninfected cyst includes a bladder diverticulum, umbilical hernia and ovarian cyst. An infected cyst may be difficult to differentiate from acute appendicitis. Lower abdominal ultrasound and especially CT scans are excellent methods of confirming the diagnosis. An infected cyst is best treated by incision and drainage with subsequent excision. 3. Urachal Sinus A urachal sinus is probably the sequela of a small urachal cyst that became infected and dissected to the umbilicus. Rarely, it may drain into the bladder; the cyst position probably dictates the primary direction of drainage. The symptoms and treatment are similar to the other urachal anomalies already described. The diagnosis of a draining urachal sinus may be difficult to differentiate from an umbilical granuloma or umbilical sinus. A fistulogram may be helpful. 4. Vesicourachal Diverticulum Complete obliteration of the urachus at the umbilicus and incomplete closure at the bladder level may result in a vesicourachal diverticulum. Lower urinary tract obstruction may or may not be a related factor. This problem is usually discovered during radiologic evaluation for a urinary tract infection via a VCUG. Occasionally, stones have been detected within the diverticulum. 
RELATED UMBILICAL DISORDERS These result from incomplete closure of the omphalomesenteric duct. 1. Patent omphalomesenteric duct. This is extremely rare and may be recognized by fecal drainage noted from the umbilicus. It is more common in boys than in girls, and differentiation from urachal anomalies is important for the surgical approach. Confirmation is done through a fistulogram. 2. Partially patent omphalomesenteric duct. A. Omphalomesenteric duct sinus. B. Omphalomesenteric duct cyst. These can be diagnosed with fistulograms and require excision. 3. Meckel’s diverticulum. Persistence of the proximal portion of the omphalomesenteric duct as a diverticulum opening into the ileum is called a Meckel’s diverticulum. It may be associated with an umbilical polyp. 4. Umbilical polyp. Persistence of intestinal mucosa at the umbilicus can develop into an umbilical polyp. Probing and possibly a fistulogram are important. A simple polyp can be treated superficially with silver nitrate or local excision. It is important, however, to make sure that it is not associated with a duct remnant. 5. Omphalocele. Failure of the intestines to recede into the abdominal cavity by the end of the tenth week of gestation results in an omphalocele. About 50% of infants with an omphalocele have other congenital anomalies. 6. Umbilical hernia. This is usually congenital and relates to the incomplete closure of the anterior abdominal wall fascia after the intestines have returned to the abdominal cavity. A. Inflammation of the umbilicus. B. Single umbilical artery. Controversy still exists over the single umbilical artery as a barometer of other congenital anomalies. Certainly, the incidence of urinary tract abnormalities is not significantly increased in newborns with a single umbilical artery.
<urn:uuid:23fe57f7-0901-4205-9c4a-5b03f4d96cfe>
seed
PUTTING A FACE ON A CLASS OF VIRAL DEUBIQUITINATING ENZYMES
March 12th, 2007
(l to r) Authors Wilhelm Weihofen, Rachelle Gaudet and Christian Schlieker

Herpesviruses (members of the Herpesviridae family) are widespread pathogens, causing disease in humans and animals. The family name stems from the Greek herpein ("to creep"), referring to the latent, re-occurring infections caused by herpesviruses. Although half of us are infected by herpes simplex virus alone, the lucky majority will never experience any symptoms. During acute infection, viral pathogens commandeer host cells for their propagation. Accordingly, they have evolved to disable, or subvert to their own advantage, the cellular enzymatic machinery that could otherwise be deployed against them to mount an antiviral immune response. For example, several laboratories including the Ploegh lab (Whitehead Institute at MIT and affiliated with MCB) have recently discovered that many viruses feature proteins that subvert the host’s ubiquitin system, which controls protein fate by means of mono- and poly-ubiquitination. While poly-ubiquitination is most commonly employed to target a protein for degradation by the proteasome, mono-ubiquitinated proteins are very often bound by proteins containing ubiquitin-binding domains to initiate cell signaling. Deubiquitination, on the other hand, can be used to reverse both processes. Ubiquitination and deubiquitination are tightly controlled by a collection of target-specific host proteins. In this study, members of the Gaudet and Ploegh labs teamed up and showed that some herpesvirus-encoded cysteine proteases are not as picky as cellular deubiquitinating enzymes, since they indiscriminately cleave most ubiquitin molecules attached to host proteins (C. Schlieker, W. Weihofen, E. Frijns, L. Kattenhorn, R. Gaudet and H. Ploegh. Structure of a herpesvirus-encoded cysteine protease reveals a new class of deubiquitinating enzymes. Mol. Cell 2007). To reveal how these enzymes recognize and cleave ubiquitin from proteins, the murine cytomegalovirus cysteine protease was crystallized in complex with a ubiquitin-based suicide inhibitor, and the structure of the complex was determined by x-ray crystallography. The structure of the protease features a unique fold and mode of ubiquitin recognition when compared to known cellular deubiquitinating enzymes. The observed differences, and the fact that the deubiquitinating activity of this protein is essential for the virus to sustain a productive infection, could lead to the development of drugs targeted against herpesviruses. Furthermore, because this enzyme is specific for ubiquitinated substrates while being indifferent to the nature of the substrate protein, it might become a useful lab tool as a "ubiquitin razor".
<urn:uuid:f81e69c8-9d0f-42b8-b017-8b57a6f7b487>
seed
Afterhyperpolarization, or AHP, describes the hyperpolarizing phase of a neuron's action potential where the cell's membrane potential falls below the normal resting potential. This is also commonly referred to as an action potential's undershoot phase. AHPs have been segregated into "fast", "medium", and "slow" components that appear to have distinct ionic mechanisms and durations. While fast and medium AHPs can be generated by single action potentials, slow AHPs generally develop only during trains of multiple action potentials.

During single action potentials, transient depolarization of the membrane opens more voltage-gated K+ channels than are open in the resting state, many of which do not close immediately when the membrane returns to its normal resting voltage. This can lead to an "undershoot" of the membrane potential to values that are more polarized ("hyperpolarized") than was the original resting membrane potential. Ca2+-activated K+ channels that open in response to the influx of Ca2+ during the action potential carry much of the K+ current as the membrane potential becomes more negative. The K+ permeability of the membrane is transiently unusually high, driving the membrane voltage VM even closer to the K+ equilibrium voltage EK. Hence, hyperpolarization persists until the membrane K+ permeability returns to its usual value.

Medium and slow AHP currents also occur in neurons. The ionic mechanisms underlying medium and slow AHPs are not yet well understood, but may also involve M current and HCN channels for medium AHPs, and ion-dependent currents and/or ionic pumps for slow AHPs.

- Purves et al., p. 37; Bullock, Orkand, and Grinnell, p. 152.
- M. Shah and D. G. Haylett. Ca2+ channels involved in the generation of the slow afterhyperpolarization in cultured rat hippocampal pyramidal neurons. J Neurophysiol 83:2554-2561, 2000.
- N. Gu, K. Vervaeke, H. Hu, and J.F. Storm. Kv7/KCNQ/M and HCN/h, but not KCa2/SK channels, contribute to the somatic medium afterhyperpolarization and excitability control in CA1 hippocampal pyramidal cells. Journal of Physiology 566:689-715 (2005).
- R. Andrade, R.C. Foehring, and A.V. Tzingounis. Essential role for phosphatidylinositol 4,5-bisphosphate in the expression, regulation, and gating of the slow afterhyperpolarization current in the cerebral cortex. Frontiers in Cellular Neuroscience 6:47 (2012).
- J.H. Kim, I. Sizov, M. Dobretsov, and H. Von Gersdorff. Presynaptic Ca2+ buffers control the strength of a fast post-tetanic hyperpolarization mediated by the α3 Na+/K+-ATPase. Nature Neuroscience 10:196-205 (2007).

This page uses Creative Commons licensed content from Wikipedia.
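The driving force described above, with VM pulled toward EK while K+ permeability is elevated, can be made concrete with the Nernst equation. The Python sketch below uses typical textbook ion concentrations, which are assumptions for illustration and not values taken from this article.

```python
import math

def nernst_potential_mv(conc_out_mm, conc_in_mm, temp_c=37.0, z=1):
    """Equilibrium (Nernst) potential in millivolts for an ion of valence z."""
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    T = temp_c + 273.15
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out_mm / conc_in_mm)

# Assumed textbook concentrations for a mammalian neuron: [K+]out = 5 mM, [K+]in = 140 mM.
e_k = nernst_potential_mv(5.0, 140.0)
print(f"E_K is roughly {e_k:.0f} mV")  # about -89 mV, well below a typical resting potential near -65 mV
```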
<urn:uuid:c6f16d16-d16e-4711-877d-9d3cbcf9690b>
seed
Tesmer Lab Research Summary

G protein-coupled receptors (GPCRs) are responsible for the sensations of sight and smell, for regulation of blood pressure and heart rate, and for many other cellular events. Signals impinging upon the exterior of the cell induce a conformational change in these GPCRs that allows them to activate heterotrimeric G proteins within the cell. The activated G proteins then bind to various effectors that initiate downstream cascades, leading to profound physiological change. We study the molecular basis of GPCR-mediated signal transduction, principally via the technique of X-ray crystallography. By determining atomic structures of signaling proteins alone and in complex with their various targets, we can provide important insights into the molecular basis of signal transduction and the disease states that emerge as a result of dysfunctional regulation of these pathways.

G protein-coupled receptor kinases (GRKs)
G protein-coupled receptor kinases (GRKs) are responsible for homologous desensitization of GPCRs, an adaptive process by which activated receptors are rapidly uncoupled from G proteins. The best-characterized member of this family is GRK2, also known as β-adrenergic receptor kinase 1. GRK2 is not only important for myocardiogenesis and regulation of heart contractility but also implicated in the progression of congestive heart failure. In 2003, we reported the atomic structure of GRK2 in a peripheral membrane complex with the heterotrimeric G protein Gβγ. This was the first structure of a GRK and the first of Gβγ in complex with a downstream effector enzyme. Subsequently, we have described the structure of GRK2 alone and the Gαq-GRK2-Gβγ complex (2005). The latter structure was the first of Gαq and a Gαq-effector complex. Gαq is a heterotrimeric G protein involved in smooth muscle function, in regulation of blood pressure and in maladaptive cardiac hypertrophy. The Gαq-GRK2-Gβγ structure also revealed the first glimpse of how activated heterotrimeric G proteins can be arranged at the membrane during active signal transduction and how Gα and Gβγ subunits can simultaneously interact with a single effector target.

We are also interested in the molecular and biochemical differences between different classes of GRKs. The seven GRKs found in the human genome are classified into three families: GRK2/3, which are ubiquitously expressed; GRK1/7, which play specific roles in phototransduction; and GRK4/5/6, which are all ubiquitously expressed except for GRK4. A distinguishing feature of these families is the structure of their C-terminal domains. We have determined the atomic structure of GRK6 in complex with AMPPNP, a non-hydrolyzable nucleotide analog, as a representative of the GRK4/5/6 family. GRK6 is involved in motor neuron function and thus is a potential drug target for the treatment of Parkinson’s disease. To examine the GRK1/7 family, we have determined the structure of GRK1 in complex with ADP and ATP, as well as in its apo form. GRK1, also known as rhodopsin kinase, regulates the amplitude of the light response in rod cells. One important result from these studies has been to provide what so far has been elusive with GRK2: models of a GRK in different ligand states and the resolution of structural elements believed to be involved in binding GPCRs.

The most well established physiological targets of GRKs are activated GPCRs. GRKs are unique among protein kinases for their ability to recognize only the active form of the receptor.
Thus, we believe that GRKs can be used to trap the activated state of a GPCR. Understanding the structure of a GPCR in its activated state is one of the holy grails of modern pharmacology. Over the course of our studies, we have developed a toolbox of different GRKs that we can produce in abundance and use to probe the molecular determinants of GRK-receptor interaction. Specifically, we are studying how GRK2 interacts with the squid photoreceptor, a Gαq-coupled receptor, and GRK1 with its physiological target rhodopsin. While a crystal structure of these complexes is the most important goal, we are also defining the GPCR binding sites on GRKs with site-directed mutagenesis and biochemical assays, cross-linking studies, and co-crystal structures of the intracellular loops of the receptor with GRKs. These studies will further help us define the molecular architecture of signaling complexes that assemble around activated GPCRs. Because of the therapeutic potential of inhibiting GRK function, we are also investigating the structure of GRKs in complex with various inhibitors. For example, we are currently solving the structure of an RNA aptamer that binds to GRK2 with 10-100 nM affinity. High-resolution models of this complex and other inhibited GRKs would facilitate the design of new molecular tools and therapeutic leads for the treatment of cardiovascular disease.

Heterotrimeric G Protein Regulated Rho Guanine Nucleotide Exchange Factors (RhoGEFs)

GPCRs are also known to be involved in cell transformation, cancer progression, and metastasis. One pathway by which this occurs is through the activation of RhoA, a key regulator of cytoskeletal structure and gene transcription. Recently, two families of enzymes responsible for linking these GPCRs to RhoA have been identified. The first family is activated by the G protein Gα12/13 and is critical for platelet activation during wound repair. Of this group, our lab has been studying leukemia-associated RhoGEF (LARG), one of the few RhoGEFs known to be directly responsible for a human cancer. We have determined structures of the catalytic DH/PH domains of LARG alone and in complex with its substrate RhoA, and are in the process of analyzing LARG function using site-directed mutants and either fluorescence-polarization or FRET-based nucleotide exchange assays. We have also determined atomic structures of activated Gα12 and deactivated Gα13 subunits. Future goals are to determine atomic structures of larger fragments of LARG and of their complexes with either Gα12 or Gα13, and thereby elucidate the mechanism by which LARG mediates signal transduction from Gα13 to RhoA. RhoA is also activated by Gαq-coupled receptors via a second family of enzymes represented by p63RhoGEF. We recently published the structure of the Gαq-p63RhoGEF-RhoA complex, capturing a snapshot of three nodes of a signal transduction cascade connecting heterotrimeric to small G proteins. Together with the Wieland lab (U. of Heidelberg) and the Miller Lab (Oklahoma Medical Research Foundation), we showed that this pathway is conserved from nematodes to humans and that there exists in humans a family of RhoGEFs related to p63RhoGEF that respond to hormones impinging on Gαq-coupled receptors. This family is expected to be at least partly responsible for maladaptive events that occur during heart disease, such as cardiac hypertrophy. Current research efforts in the lab are to understand the mechanism by which Gαq activates p63RhoGEF using site-directed mutagenesis and cell-based assays.
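The fluorescence-polarization and FRET-based nucleotide exchange assays mentioned above are commonly analyzed by fitting a single-exponential association curve to the fluorescence time course to extract an observed exchange rate. The sketch below illustrates that generic analysis in Python; the function name, the simulated time course, and the rate values are placeholders for illustration only and are not data or code from the lab.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exponential(t, amplitude, k_obs, offset):
    """Generic single-exponential association: signal rises as labeled nucleotide loads the GTPase."""
    return amplitude * (1.0 - np.exp(-k_obs * t)) + offset

# Hypothetical time course (seconds) and fluorescence readings for one GEF concentration.
time_s = np.linspace(0, 600, 61)
rng = np.random.default_rng(0)
signal = single_exponential(time_s, amplitude=1.0, k_obs=0.01, offset=0.2)
signal += rng.normal(scale=0.02, size=time_s.size)  # simulated measurement noise

# Fit to recover k_obs; comparing k_obs across mutants or GEF concentrations reports on exchange activity.
popt, pcov = curve_fit(single_exponential, time_s, signal, p0=(1.0, 0.005, 0.0))
amplitude, k_obs, offset = popt
print(f"observed exchange rate k_obs ~ {k_obs:.4f} per second")
```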
The involvement of Propionibacterium acnes in the pathogenesis of acne is controversial, mainly owing to its dominance as an inhabitant of healthy skin. This study tested the hypothesis that specific evolutionary lineages of the species are associated with acne while others are compatible with health. Phylogenetic reconstruction based on nine housekeeping genes was performed on 210 isolates of P. acnes from well-characterized patients with acne, from patients with various opportunistic infections, and from healthy carriers. Although evidence of recombination was observed, the results showed a basically clonal population structure that correlated with allelic variation in the virulence genes tly and camp5, with pulsed-field gel electrophoresis (PFGE) type and biotype, and with expressed putative virulence factors. An unexpectedly widespread geographic and temporal dissemination of some clones was demonstrated. The population comprised three major divisions, one of which, including an epidemic clone, was strongly associated with moderate to severe acne, while the others were associated with health and opportunistic infections. This dichotomy correlated with previously observed differences in in vitro inflammation-inducing properties. Comparison of five genomes representing acne- and health-associated clones revealed multiple cluster- and strain-specific genes that suggest major differences in ecological preferences and redefine the spectrum of disease-associated virulence factors. The results of the study indicate that particular clones of P. acnes play an etiologic role in acne while others are associated with health.

Acne is familiar to almost everybody: about 80% of us are or have been affected by this disease, with more or less severe consequences for our well-being. We and other research teams have compared the skin microbiota (i.e., the microorganisms colonizing human skin) of healthy and acne-affected skin and found that certain types of the predominant bacterium Propionibacterium acnes are acne-associated whereas other types are associated with healthy skin. This important finding will be exploited in our proposed project to treat and cure acne.

Be your own researcher! Since acne is such a common disease, we invite our funders to take an active part in our research efforts. Many people have suffered from acne in their adolescence, and some might have developed ideas for anti-acne treatment. This "crowdscience" project invites individuals to step forward and share their ideas. We will award the best ideas: 3 project ideas will be tested in our laboratories with the appropriate technology and our know-how, including a skin cell culture model and tests on modulating effects on the skin microbiota.
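As a rough illustration of the multilocus approach described above (phylogenetic reconstruction from housekeeping-gene sequences across many isolates), the sketch below builds a simple distance-based tree from a concatenated alignment with Biopython. The file name, alignment format, and method choices (identity distances, neighbor joining) are placeholders for illustration; they are not the pipeline actually used in the study.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical input: one alignment in which the nine housekeeping-gene sequences
# of each isolate have been concatenated (FASTA, one record per isolate).
alignment = AlignIO.read("concatenated_housekeeping_genes.fasta", "fasta")

# Pairwise distances from sequence identity; real MLST-style studies often use
# model-based methods (e.g., maximum likelihood) rather than simple identity.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)

# Neighbor-joining tree as a quick look at the clonal structure of the isolates.
constructor = DistanceTreeConstructor()
tree = constructor.nj(distance_matrix)
Phylo.draw_ascii(tree)
```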
Digital tomosynthesis creates a three-dimensional picture of the breast using x-rays. Several low-dose images from different angles around the breast are used to create the final 3-D picture. A mammogram creates a two-dimensional image of the breast from two x-ray images of each breast. Digital tomosynthesis is approved by the U.S. Food and Drug Administration, but is not yet considered the standard of care for breast cancer screening. Because it is relatively new, it is available at a limited number of hospitals. A study has found that when radiologists looked at digital tomosynthesis images along with digital mammogram images, they were more accurate and had lower false positive recall rates compared to radiologists who looked only at digital mammograms. A false positive is an abnormal area that looks like cancer on a mammogram, but turns out to be normal. Besides worrying about being diagnosed with breast cancer, a false positive means more tests and follow-up visits, which can be stressful. The research was published online on Nov. 20, 2012 by Radiology. Read the abstract of "Assessing Radiologist Performance Using Combined Digital Mammography and Breast Tomosynthesis Compared with Digital Mammography Alone: Results of a Multicenter, Multireader Trial." The research was made up of two studies. In the first study, 12 radiologists looked at breast images from 312 women. In the second study, 15 radiologists looked at breast images from 310 women. All the radiologists had more accurate diagnoses when they looked at both digital mammograms and digital tomosynthesis compared to looking only at digital mammograms:
- radiologists were about 11% more accurate in correctly identifying any cancer in the breast in study one
- radiologists were about 16% more accurate in correctly identifying any cancer in the breast in study two
Adding digital tomosynthesis to digital mammograms also reduced the number of false positives found by all the radiologists:
- false positive recall rates dropped by nearly 39% in study one and by about 17% in study two
While the results of this small study are very promising, more research needs to be done before digital tomosynthesis becomes part of routine breast cancer screening. Because it is another imaging test, digital tomosynthesis exposes women to additional radiation. Researchers are looking at ways to replace a standard mammogram image with one created from digital tomosynthesis images to reduce radiation exposure. Visit the Breastcancer.org Digital Tomosynthesis page to learn more about how the test is done and how it's different from a mammogram.
Am Fam Physician. 1999 Apr 15;59(8):2331-2332. Head trauma in children results in 600,000 emergency department visits and 95,000 hospital admissions per year. It is likely that many more such children are evaluated in physicians' offices. Predicting which children require diagnostic imaging can be difficult, and no established guidelines are in place to direct physicians who care for pediatric patients with head trauma. Published guidelines are based on limited clinical data and are not followed uniformly in practice; in addition, they generally do not specify which imaging technique is preferred. Gruskin and Schutzman performed a retrospective study to determine the incidence of skull fracture and intracranial injury in children who presented to a pediatric emergency department. They also attempted to determine which historic features and physical findings predict complications of head injury and whether clinical criteria could aid in the selection of diagnostic imaging. Medical records were reviewed for children younger than two years of age who were discharged from a Boston children's hospital with a diagnosis of head injury, skull fracture, intracranial injury, cerebral contusion or cerebral edema. Excluded from the study were children with a history of seizures, blood dyscrasias, neurologic disorders, ventricular shunts or suspected abuse. Historic information included the estimated height of the fall, level of consciousness and presence of scalp abnormalities, and whether the child was referred by another physician or came directly to the emergency department. When skull radiographs or cranial computed tomographic (CT) scans were obtained, these results were also noted. Children were diagnosed with a “minor head injury” if they had a normal neurologic examination and were alert at discharge, and if radiologic studies were normal. A total of 291 patients were evaluated; medical records were available for 278 patients (96 percent). Most of these children had gone directly to the emergency department. Approximately 60 percent of the children were younger than 12 months of age; 40 percent were between 13 and 24 months of age. Eighty-two percent of all children were ultimately given a diagnosis of minor head injury, and 18 percent were diagnosed with a skull fracture or an intracranial injury. However, the incidence of skull fracture/intracranial injury was 29 percent in children younger than 12 months of age and 4 percent in those older than 12 months of age. An increase in the height of the fall was associated with a higher incidence of serious injury, although low height of fall did not rule out a diagnosis of skull fracture or intracranial injury. The incidence of seizure, emesis, behavior changes and loss of consciousness did not differ significantly between children found to have a minor head injury and children diagnosed with skull fracture/intracranial injury. Of those determined to have a minor head injury, 29 percent exhibited a behavior change, and 11 percent had emesis. There was a very high incidence (94 percent) of skull fracture/intracranial injury associated with scalp abnormality. Depressed level of consciousness and presence of skull fracture/intracranial injury were significantly correlated, but 92 percent of children with an isolated skull fracture and 75 percent with intracranial injury had a normal level of consciousness and a nonfocal neurologic examination. 
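A quick back-of-the-envelope comparison of the age-stratified figures reported above (29 percent versus 4 percent) shows how much higher the risk of skull fracture or intracranial injury was in the younger infants. The snippet below simply computes a risk ratio from the published proportions; it uses no patient-level data, and because the inputs are rounded percentages the result is approximate and illustrative only.

```python
# Proportions of skull fracture/intracranial injury reported by age group in the study summary.
risk_under_12_months = 0.29   # children younger than 12 months of age
risk_13_to_24_months = 0.04   # children 13 to 24 months of age

# Risk ratio: roughly how many times more likely a serious injury was in the younger group.
risk_ratio = risk_under_12_months / risk_13_to_24_months
print(f"Approximate risk ratio (<12 months vs. 13-24 months): {risk_ratio:.1f}")  # about 7
```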
The authors conclude from their data that the incidence of serious head injury from a fall is greatest in children younger than 12 months of age. Many children who apparently have minor falls may have sustained significant head injury, even in the absence of clinical signs and symptoms. Physicians should have a low threshold for ordering imaging studies in children who have fallen. A CT scan could appropriately be ordered, but a skull radiograph may be acceptable in some situations because it is easier to perform and does not require sedation. The authors report that children who fall 3 ft or less and have normal results on scalp examination and no history of neurologic symptoms do not need radiologic evaluation.

Gruskin KD, Schutzman SA. Head trauma in children younger than 2 years. Are there predictors for complications? Arch Pediatr Adolesc Med. January 1999;153:15–20.

editor's note: This study seems to dispel the notion held by many physicians that if a child does not lose consciousness, the chance of a serious head injury is very small. In addition, many children with minor head injuries may exhibit behavior changes and vomiting. It appears that exercising clinical judgment and maintaining close follow-up is still prudent. In addition, there should be a very low threshold for ordering a cranial CT scan in a child under one year of age who has fallen. Obviously, more studies are needed to better define historic and clinical criteria for diagnostic imaging.—j.t.k.
What is the Apgar score and why is it done? What if my baby's skin is slightly yellow? Find out what to expect in those early hours and days after your baby is born. From the moment your newborn's head comes out of the birth canal, your medical team will evaluate and care for your child. You may not notice much of the care your baby receives. But it is vital to ensure your baby's safe move to the outside world. After the baby is delivered, bulb suction is used to clear mucus from your baby's airway. As soon as it is clear, you will hear your baby's first cry. Shortly after, the umbilical cord is clamped and cut. If your baby is healthy, your partner can cut the cord, if desired. The baby is then dried and placed on your tummy for a greeting. A blanket may be used to keep the baby warm. Maintaining body temperature is important for both you and the baby. A clean-up and evaluation will be done. First is a visual check for any deformities. Next is the Apgar score. This is a measure of the baby's condition based on color, heart rate, respiration, reflex responses, and muscle tone. A score of 0, 1, or 2 is given for each of the five criteria. A score is given at one minute after birth and again five minutes after birth. A sick baby may be evaluated again at 10 minutes after birth. A total score of 7-10 is normal; 4-6 is intermediate; and 0-3 is low. The evaluation continues with an estimation of gestational age. Babies younger than 37 weeks, older than 42 weeks, or with a weight inappropriate for their age may need special care. Ten minutes after birth, some babies will have a tube passed through their nose and into their stomach. Only certain babies, depending on their condition at birth, need this exam. To help protect young eyes, the baby receives eye drops or an antibiotic ointment. Your baby will also be given an injection of vitamin K. A deficiency of vitamin K can cause hemorrhagic disease of the newborn. This is a serious disease of excess bleeding. The umbilical cord is treated with a solution to prevent infection. The baby is carefully swaddled. A knit hat is placed on his or her head to maintain body temperature. If the baby's temperature drops below 96 degrees Fahrenheit (35.5 degrees Celsius), he or she will be placed in an infant warmer. The baby will be returned to you for cuddling as soon as possible. If you plan to breastfeed, you are encouraged to start now. While you feed, take care to keep both you and your baby warm. Your partner is encouraged to join in the baby cuddling.

Monitoring and Evaluation

After delivery, you can send your baby to the nursery so you can sleep. Or, you can keep the baby in a bassinet in your room. About every eight hours, your postpartum nurse will check your baby's vital signs. These are temperature, heart rate, and breathing rate. When your baby has fed at least once and has normal vital signs, she will be given a bath. A mild soap is used that will not remove all of the baby's natural antibacterial protection. The baby gets this protection from the whitish greasy material (vernix) that covers most of his or her body. Within twelve hours, your baby will have a full exam from the hospital doctor or your pediatrician. This includes measurement of weight, length, and head circumference. The major organs, such as heart, lungs, skin, and others, are closely examined. Screening tests are done on healthy babies to identify health issues before any symptoms start.
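As a small illustration of the Apgar scoring rule described above (five criteria, each scored 0, 1, or 2, with the total read as normal, intermediate, or low), here is a minimal sketch in Python. The function names and the example values are purely illustrative and are not part of any clinical software.

```python
def apgar_total(color, heart_rate, respiration, reflex_response, muscle_tone):
    """Sum the five Apgar criteria; each argument must already be scored 0, 1, or 2."""
    scores = (color, heart_rate, respiration, reflex_response, muscle_tone)
    if any(s not in (0, 1, 2) for s in scores):
        raise ValueError("each criterion is scored 0, 1, or 2")
    return sum(scores)

def interpret_apgar(total):
    """Map the total (0-10) to the bands described in the text."""
    if total >= 7:
        return "normal"
    if total >= 4:
        return "intermediate"
    return "low"

# Example: a vigorous baby at one minute after birth.
one_minute = apgar_total(color=1, heart_rate=2, respiration=2, reflex_response=2, muscle_tone=2)
print(one_minute, interpret_apgar(one_minute))  # 9 normal
```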
Newborn Screening Tests Newborn screening tests check for diseases that can appear early in life. These diseases are not common, but they can cause serious damage if they are not treated. For these tests, blood is drawn from the baby’s heel within the first 24 hours of life. Your state’s health department decides which diseases are screened in your state. All states screen for hypothyroidism and phenylketonuria (PKU). Both of these conditions can cause intellectual disability if they are not treated. Many states also test for the following: If your baby tests positive, you will be notified. A second test will be done to be sure it is not a false positive. If your baby tests negative, you will not be notified. Your doctor will receive a copy of the results either way. Some hospitals will check your baby’s hearing. This test is painless and can be done while your baby is sleeping. It takes only a few minutes. You will have immediate results. If your baby passes the test, there is no hearing problem at this time. If your baby does not pass, further testing is advised. Oxygen Saturation Screening Oxygen saturation refers to the amount of oxygen in your baby’s blood. The oxygen saturation level is a measure of how well your baby’s heart and lungs are working. A tiny red light is attached to the outside of your baby’s hand, foot, or wrist. A cord attaches this light to a machine that records the amount of oxygen being carried by blood cells. The measure will be done at least three times. Ideally, the level will be greater than 94%. If the level is 94% or lower, the doctor will order further tests, such as blood pressure, electrocardiogram, chest x-ray, or echocardiogram. You will also be referred to a pediatric cardiologist. Some babies have a slight yellow tinge to their skin and eyeballs. This is a sign of jaundice. Jaundice is an excess of bilirubin in the blood. Bilirubin is a pigment that is normally cleared from the blood by the liver. A newborn’s liver is still learning how to remove bilirubin. Many babies may appear jaundiced around the second to fifth day of life. Babies who are breastfed may develop jaundice if they do not get enough milk. This condition usually clears within two weeks without treatment. Moms are encouraged to feed often so that the baby will have more bowel movements. Bilirubin leaves the body in the stool. If treatment is necessary, the baby is placed under artificial light. The light breaks down bilirubin in the baby’s skin. In rare cases, prolonged jaundice may be a sign of something more serious. If your baby is a boy, you may like to have him circumcised. This can be done after he has urinated at least one time and is feeding well. The baby is given local anesthesia and the procedure is quick. Take advantage of your time in the hospital. The nurses can help you with feeding, diaper changing, bathing, and other caretaking duties. They can answer any questions and provide support. Most moms are discharged from the hospital two to three days after giving birth. After you are home, the medical support does not end. Call your pediatrician’s office or the maternity ward if you have any questions. You’ll bring your baby to her pediatrician for her one-week appointment. This is usually called a “well-baby checkup.” You will have these checkups regularly during the first year. It is normal for your baby to lose weight. Most newborns lose 5%-7% of their birth weight within the first few days of life. Breastfed babies gain this back by two weeks of life. 
Formula-fed babies often regain their weight sooner. You will need to tend to your baby's umbilical cord. Each time you change a diaper, examine the cord for signs of redness or drainage. These could signal an infection. Once a day, apply 70% alcohol to the cord. The alcohol helps dry up the cord and reduces the risk of infection.

Caring for Your Baby

Some parents are a bit overwhelmed in those first days or weeks home from the hospital. Try to stay calm, trust your instincts, and ask for help when you need it. There are many guidelines for how to care for your baby, but it is not an exact science. As long as you provide your baby with warmth, love, food, and cleanliness, you're doing your job. With time and patience, you and your baby will figure each other out. Remember to enjoy this time. Despite those nights that seem unending, these early weeks will go by too quickly.

Reviewer: Andrea Chisholm, MD. Review Date: 03/2014. Update Date: 04/30/2014.
"Fluorescence tools that provide high-resolution fluorescence pictures are likely to provide more reliable scores than fluorescence devices that assess via a single spot," wrote the study authors, from the University of California, San Francisco School of Dentistry. "The better visibility of the high-resolution fluorescence imaging could prevent unnecessary operative interventions." They compared several light-based diagnostic modalities -- including fiber-optic transillumination, optical coherence tomography, and fluorescence diagnostic tools -- with the "gold standard" for caries detection: the International Caries Detection and Assessment System (ICDAS II). "In order to easily apply the CAMBRA principles ... it is useful to introduce state-of-the-art sensitive caries diagnostic tools into the dental office armamentarium," the study authors wrote. "If caries lesions are detected early enough in a precavitated stage, intervention methods such as fluoride application, sealants, preventive resin restorations, laser treatment, and antibacterial therapy can be applied to reverse the caries process." For this study, the researchers recruited 100 patients (58 females, 42 males; average age 23.4 ± 10.6 years) presenting with 433 occlusal, unfilled surfaces of posterior teeth (90 bicuspids, 343 molars). Two examiners independently evaluated the patients' ICDAS II scores. They scored 110 fissure areas as sound (ICDAS II code 0), 450 as ICDAS II code 1 (mineral loss in the base of a fissure), and 314 as ICDAS II code 2 (mineral loss extended from the base). Another 107 cases were scored as ICDAS II code 3 (early cavitation with first visual enamel breakdown), while 26 cases were assigned ICDAS II code 4 and/or code 5 (more progressed carious lesions). Fluorescence methods compared Using the Diagnodent (KaVo Dental), the Spectra optical caries detector (Air Techniques), the SoproLife (Acteon), and a quantitative light fluorescence (QLF) research tool, the examiners then evaluated up to five fissure areas on each tooth, for a total of 1,066 areas of interest for each system. The Diagnodent uses red laser light (655 nm) to illuminate regions of a tooth; the emitted light is channeled through the handpiece to a detector, and the device then displays a digital number (1-99) and emits a beeping sound. A higher number indicates more fluorescence and thus a more extensive lesion beneath the surface; a value of 5-10 indicates initial caries in enamel, 10-20 indicates initial caries in dentin, and greater than 20 indicates caries in dentin. In this study, the researchers recorded Diagnodent values between 0 and 10 for 424 of the evaluated pit-and-fissure areas, followed by 291 spots with values between 11 and 20. The remaining 326 measurements showed values between 21 and 99, including 31 areas with a Diagnodent value of 99. The Spectra device utilizes fluorescence from light-emitting diodes in the 405-nm wavelength. When the light is projected onto the tooth surface, cariogenic bacteria fluoresce red, while healthy enamel fluoresces green. An on-screen picture of the tooth includes false coloring and a number scale intended to predict the caries depth: 1.0-1.5 means early enamel caries, 1.5-2.0 is deep enamel caries, 2.0-2.5 is dentin caries, and 2.5 or above signifies deep dentin caries. 
For this study, a Spectra value of 0 was observed 114 times, while values between 1.0 and 1.9 were displayed 739 times, a value of 2.0 to 2.9 occurred 172 times, and a value of 3.0 to 3.9 was seen in 14 instances (3.9 was the highest value measured). The SoproLife system combines the advantages of a visual inspection method with a high-magnification oral camera and laser fluorescence device. In "daylight mode," white LEDS illuminate the tooth, while in "fluorescence mode" the excitation results from four blue LEDs at 450 nm. In order to classify caries lesions in early stages using the Soprolife, the study authors developed a new scale, where daylight and fluorescence pictures for occlusal fissure areas were each categorized into six different groups, code 0 to code 5. In daylight mode, code 0 is given for sound enamel with no changes in the fissure area. Code 1 is applied if the center of the fissure shows whitish or slightly yellowish change in the enamel. In code 2, the whitish change is wider and extends to the base of the pit-and-fissure system and comes up the slopes of the fissure system in the direction of the cusps. In code 3, fissure areas are rough and slightly open, indicating the beginning of enamel breakdown. In code 4, the caries process is no longer confined to the fissure width, and in code 5 there is obvious enamel breakdown with visible dentin. The scoring is slightly different in SoproLife's blue fluorescence mode. Fluorescence mode code 0 is given when the fissure appears shiny green and the enamel appears sound with no visible changes. Code 1 is selected if a tiny, thin red shimmer in the pit-and-fissure system is observed, which can slightly come up the slopes of the fissure system. In code 2, darker red spots confined to the fissure are visible. For code 3, dark red spots extend as lines into the fissure areas, but are still confined to the fissures. If the dark red (or red-orange) extends wider than the confines of the fissures, a code 4 is assigned. Code 5 is selected if obvious openings of enamel are seen with visible dentin. For this study, and using this new scoring system, in daylight mode the examiners scored 142 pit-and-fissure areas as code 0, 436 as code 1, 165 as code 2, 138 as code 3, 96 as code 4, and 89 as code 5. In fluorescence mode, they scored 242 areas as code 0, 263 as code 1, 224 as code 2, 133 as code 3, 121 as code 4, and 81 as code 5. Finally, QLF uses a 370-nm light source to generate green autofluorescence of the tooth. With QLF, the demineralized area appears opaque and darker than sound enamel. In the current study, mineral loss values were evaluated on 988 sites, with 353 sites showing a mineral loss of less than 10%, 463 sites between 10% and 20%, 131 sites between 20% and 30%, and 42 sites between 30% and 67%. Regression curve analysis Examining the relationship between the ICDAS II scores and the scores obtained using the different diagnostic tools revealed that for each ICDAS II code, each device provided a distinct average score. "To evaluate the ability to discriminate between two different scores for each assessment method, linear regression curves were calculated for each tool," the study authors wrote. "The steeper the regression curve, the higher the ability of a tool to discriminate between two values and the more useful the tool is in clinics." Normalized data linear regression showed that the SoproLife assessment tools yielded the best caries score discrimination, followed by the Diagnodent and the Spectra, they noted. 
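The regression-curve comparison described above can be reproduced in outline with a few lines of code: for each device, normalized scores are regressed against the ICDAS II codes, and the slopes are compared, a steeper slope indicating better discrimination between adjacent scores. The numbers in the sketch below are made up purely to show the mechanics; they do not reproduce the study's data or its exact normalization.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical normalized device scores (0-1) averaged over teeth with ICDAS II codes 0-5.
icdas_codes = np.array([0, 1, 2, 3, 4, 5])
mean_scores = {
    "SoproLife (fluorescence)": np.array([0.05, 0.20, 0.38, 0.55, 0.75, 0.92]),
    "Diagnodent":               np.array([0.08, 0.18, 0.30, 0.45, 0.60, 0.78]),
    "Spectra":                  np.array([0.20, 0.28, 0.35, 0.44, 0.52, 0.60]),
}

# Steeper slope = larger change in score per ICDAS II step = easier discrimination in the clinic.
for device, scores in mean_scores.items():
    fit = linregress(icdas_codes, scores)
    print(f"{device}: slope = {fit.slope:.3f} per ICDAS II code (r^2 = {fit.rvalue**2:.2f})")
```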
In other words, "when using SoproLife, a judgment call for classification of a lesion into sound, precavitated or cavitated ... is easier to make than with the other tools," they wrote. The new SoproLife daylight and blue fluorescence codes can serve as "a distinct classification" for caries lesions, enabling practitioners to predict the histological depth of caries lesions, they added. "When comparing spot-measuring fluorescence tools with those providing high-resolution fluorescence pictures, the better visibility provided by the high-resolution tools might help prevent unnecessary operative interventions that are based solely on high fluorescence scores," the researchers concluded. "The observation capacity of such a system can guide clinicians toward a more preventive and minimally invasive treatment strategy in the course of monitoring lesion progression or remineralization over time." The authors declared that there was no conflict of interest regarding this study, which did receive grant support from Acteon.
Medications necessary for disease management can simultaneously contribute to weight gain, especially in children. Patients with preexisting obesity are more susceptible to medication-related weight gain. How well equipped are primary care practitioners to identify and potentially reduce medication-related weight gain? To inform this public health question, we sought to identify potential gaps in clinician knowledge of adverse metabolic drug effects, specifically weight gain. The study analyzed practitioner responses to the pre-activity questions of six continuing medical education (CME) activities from May 2009 through August 2010. The 20,705 consecutive, self-selected respondents indicated varied levels of familiarity with adverse metabolic effects and psychiatric indications of atypical antipsychotics. Correct responses were lower than predicted for drug indications pertaining to autism (17% below predicted); drug effects on insulin resistance (62% below predicted); chronic disease risk in mental illness (34% below predicted); and drug safety research (40% below predicted). Pediatricians' knowledge scores were similar to those of other primary care practitioners. Clinicians' knowledge of medication-related weight gain may lead them to overestimate the benefits of a drug in relation to its metabolic risks. The knowledge base of pediatricians appears comparable to that of their counterparts in adult medicine, even though metabolic drug effects in children have only recently become prevalent.

Keywords: Medication effects on appetite; Insulin resistance; Drug-related weight gain; Mental illness as a risk factor for obesity; Adverse metabolic drug effects; Drug safety research; Nutrition knowledge of primary care practitioners

No study to date assesses the knowledge base around medication-related weight gain in pediatric or adult primary care medicine. We therefore sought to characterize what practitioners know about metabolic drug effects in the context of clinical decision-making. Informed clinicians can often modify their patients' risk of adverse metabolic drug effects, even when medications are essential for disease management. Practitioners can choose the lowest effective dosing and therapies with fewer metabolic effects; treat underlying medical conditions which can contribute to weight gain, such as sleep apnea and hypothyroidism; correct nutritional deficiencies such as vitamins B12 and D to facilitate lifestyle adherence; and counsel patients on drug-related increases in appetite, emphasizing adherence to medication and healthful lifestyle choices. Among the patient groups most vulnerable to metabolic drug effects are children. Children are more susceptible to central nervous system effects of medications. Some metabolic drug effects are unique to children at certain growth stages and demonstrate a prolonged effect [5,6]. Metabolic drug effects also tend to be delayed relative to the therapeutic benefit, especially in children. Concurrently, drug exposure is increasing in children, the age group with the fastest growing number of prescriptions, in part due to obesity-related chronic diseases. Preexisting overweight and obesity heighten vulnerability to metabolic drug effects. Managing adverse metabolic drug effects is relatively new to the practice of pediatrics. Historically, pediatricians focused on medication-related weight loss and stunting, recorded as step-offs on patient growth charts.
Today’s pediatric practice may require as diligent a diagnosis and management of medication-related weight gain, especially since preexisting overweight and obesity, defined as a body mass index at or above the 85th percentile, has reached approximately 32% of the U.S. population ages 2-19 [8,9]. Disseminating drug safety updates to pediatricians holds other challenges as well. Safety information specific to children represents a recent advance. Practitioners may not realize they need to watch for such updates . Metabolic drug effects specific to children and adolescents may be first identified years after a drug is on the market because the metabolic effects in children tend to manifest beyond the timeframe of clinical trials. Disseminating drug safety information may be additionally complicated by practice patterns. For example, psychiatrists may diagnose and prescribe highly specialized treatment and look to primary care practitioners to monitor patients for adverse drug effects. Clinicians draw on their knowledge base of adverse metabolic drug effects for clinical decision-making. Elevated and unique risks of metabolic drug effects and major shifts in disease prevalence and practice patterns in pediatrics together prompted our interest in confirming that primary care clinicians who care for children have a knowledge base comparable to their adult medicine counterparts. Continuing medical education (CME) activities were developed in partnership with CME providers. Inclusion criteria for partners were: experience implementing pre-activity questions, having primary care practitioners as a target audience, willingness to co-develop programs relevant to medication-associated weight gain, providing free public access to associated media and print materials, and collaborating within time and budget constraints. Partners were selected across different media - audio, lectures, or web-based activities - Audio-Digest Foundation, Medscape CME, The Maryland Academy of Family Physicians, and The FDA Scientific Rounds Program. The instrument in this study, pre-CME activity questions, measures practitioners’ baseline knowledge relevant to the content of the CME activities. Pre-activity questions were 4-choice multiple choice questions or true-false questions. They were directed at clinical decision-making and were organized into four categories: 1) drug indications, 2) metabolic drug effects, 3) drug safety updates, and 4) patients most at risk. Each CME program partner selected among the pre-activity questions and adapted the wording to their standard format. The 6 CME activities pertained to either atypical antipsychotic use in children or obesogenic medications in general. They were provided through the CME partners at varied intervals between June 2009 and August 2010. Each activity was an audio program, web-based program, or conference lecture; and awarded a maximum of 0.5 to 2 Category 1 CME credits. The program content, participant characteristics, and pre-activity questions are presented in Table 1. Table 1. Summary of continuing medical education (CME) programs, June 2009 – August 2010 In order to compare the knowledge of practitioners specializing in pediatrics with adult medicine practitioners, we developed Activity 5 which is applicable to the care of children and adults. Activity 6 was the only program where the target audience was not primary care practitioners. 
The biweekly activity is attended by a diverse group of health care practitioners and scientists, all of whom work in regulation. The activity was included to better characterize practitioner knowledge of the autism indication for atypical antipsychotics. The information used for the response analysis was obtained from the CME providers as anonymized source data with no way to match responses with individuals. No personal identifiers were used. The respondents were participating in CME activities, where responses to the related learning questions are routinely aggregated to inform future CME development and related research. We analyzed the data with and without comparing it to predicted scores. Predicted scores facilitate comparison between multiple choice questions with four choices and binomial true-false questions, which differ in the likelihood of selecting a correct answer by chance alone. For this analysis, predicted scores were 70% for multiple choice and 85% for true-false questions. The basis for these numbers comes from Audio-Digest’s overall average pretest scores, which are 70%, [personal communication August 2010] and the pedagogic intent of a CME to build on participants’ existing practice-relevant knowledge. Each response holds an inherent error, since a participant with a constant knowledge base could score better or worse on the pre-activity depending on the circumstances at that moment. We estimated the two-tailed two standard deviations of this variability to be ten percent. We also analyzed the participants’ responses to the choice identified as close to correct, also called the second best answer. STATA® statistical software was used to run discrete-response regression analyses on pre-activity question responses. Probit regressions were used for binomial dependent variables, analyzing whether the respondent answered the CME question correctly. The probit models give a standard normal z-score rather than a t-statistic, with the total variability explained as a pseudo-R2 rather than a normal R2. McFadden’s pseudo-R2 is reported. The probit analysis reports the overall significance of the model using an LR chi-square. The effect of a control variable in predicting correct responses (a certain percentage above/below the average) is calculated as the difference in probability of getting a question correct versus a baseline probability. For this analysis, baseline probability is where all control variables are set to their population means. The control variables used in the probit models were educational degree, medical specialty, and CME participation date. Geographic region was only provided by some respondents and was therefore not included in the analysis. The type of medical practice such as hospital-based or solo practice was not among the data obtained by the CME providers. Partial incomplete responses were included in the analysis. Having all pre-activity questions left blank was considered equivalent to nonparticipation, and these entries were excluded. Since the sample size of the distinct CME activities varied, both the unweighted and weighted averages of correct responses are reported. To assess the extent to which the pre-activity responses could be generalized among primary care practitioners, the responses were compared across the diverse CME programs detailed in Table 1. 
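The modeling approach described above (probit regressions on correct/incorrect responses, McFadden's pseudo-R2, and a baseline probability computed with the control variables at their population means) was run by the authors in STATA. The sketch below shows how an analogous model could be fit in Python with statsmodels; the variable names and the simulated data are placeholders, not the study's dataset, so the fitted effects here will be near zero and only the mechanics are meaningful.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500

# Hypothetical respondent-level data; "correct" = 1 if the pre-activity question was answered correctly.
df = pd.DataFrame({
    "correct":       rng.binomial(1, 0.60, size=n),
    "pediatrics":    rng.binomial(1, 0.10, size=n),   # specialty indicator
    "mental_health": rng.binomial(1, 0.12, size=n),   # specialty indicator
    "physician":     rng.binomial(1, 0.40, size=n),   # degree indicator
})

predictors = ["pediatrics", "mental_health", "physician"]
X = sm.add_constant(df[predictors])
model = sm.Probit(df["correct"], X).fit(disp=False)

# McFadden's pseudo-R2 = 1 - llf(full model) / llf(intercept-only model).
pseudo_r2 = 1.0 - model.llf / model.llnull
print(f"McFadden pseudo-R2: {pseudo_r2:.3f}")
print(f"LR chi-square: {model.llr:.1f}, p = {model.llr_pvalue:.3f}")

# Baseline probability: prediction with every control variable at its population mean.
means = df[predictors].mean()
baseline_row = pd.DataFrame([[1.0] + list(means)], columns=["const"] + predictors)
baseline = float(np.asarray(model.predict(baseline_row))[0])

# "Effect" of a specialty: predicted probability with that indicator set to 1, minus the baseline.
pediatric_row = baseline_row.copy()
pediatric_row["pediatrics"] = 1.0
effect = float(np.asarray(model.predict(pediatric_row))[0]) - baseline
print(f"baseline P(correct) = {baseline:.2f}; pediatrics effect vs. baseline = {effect:+.2f}")
```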
The scores on pre-activity responses were compared to self-reported learning in Activities 2–3, where participants were asked, “Please list one concept or strategy gained from this activity.” Participant evaluations of the CME programs were recorded, to confirm satisfactory evaluations. The rating of the CME activity on a 1–5 Likert scale is a composite score which reflects practice relevance and appropriate teaching level of target population. Not all participants completed all pre-activity questions. The data was analyzed both including and excluding question-specific non-responders, to detect a potential bias introduced by partial completion. Activities 2 and 3 were the longest-running programs, each offered for 13 months. They were analyzed for a temporal trend, since a news story or regulatory change during the interval could potentially change practitioner baseline knowledge or practice patterns. There were 20,705 participants in the combined six CME activities which spanned 15 months. Each participant answered one or more of the following questions. See Table 2. For the first question, both the average correct response rate of 76% and the weighted correct response of 79% are within the predicted range. For the second question, the average correct response rate is 53% (17% below predicted) and the weighted average correct response rate is 52% (18% below predicted). Table 2. Responses to multiple choice pre-activity questions on use of antipsychotic medications See Table 3. The average correct response rate is 65% (20% below predicted) and the weighted average is 67% (18% below predicted). Table 3. Responses to the true-false pre-activity question on use of antipsychotic medications The participants in Activity 6 were asked: Recommended treatment of autism includes all EXCEPT: A Correct nutritional deficiencies 6 (12%) B Treatment of concurrent attention-deficit hyperactivity disorder 5 (10%) C Prescribe atypical antipsychotics 29 (57%) D Use behavioral therapies following early diagnosis 11 (21%) The rate of correct response, response C, is 57% (13% below predicted). Adverse metabolic effects Participants in Activity 5 were asked to respond to: After diagnosing Ed with metabolic syndrome, Ed’s doctor advised him to reduce his weight by 10%, a total of 18 pounds, by diet and exercise. Which of the following medications potentially makes it more difficult for Ed to achieve his goal? A Angiotensin-converting enzyme inhibitors 6667 (41%) B Diuretic 1243 (8%) C Vitamin D 1106 (7%) D Biguanide 7021 (43%) The rate of correct response, choice B, is 8% (62% below predicted). Specialty did not predict response to this question. Participants in Activity 5 were also asked to respond to: Within months of being diagnosed with bipolar disorder at age 14, Sara gained 20 pounds. Which of the following is likely to contribute to her recent weight gain and body mass index of 28? A Vitamin D deficiency 952 (6%) B Atypical antipsychotic agent 10349 (63%) C An eating disorder 1072 (7%) D Psychostimulant agent 3948 (24%) The average correct response rate of 63% falls within the predicted range. Predicted probability of answering the question correctly given the regression control variables is 65%. Analysis by specialty indicates that mental health specialists scored 28% better than average (z=27; p<0.01), family practitioners scored 14% higher (z=12; p<0.01), internal medicine specialists scored 9% higher (z=7; p<0.01), endocrinologists scored 8% higher (z=3; p<0.01). 
The regression explains 9% (pseudo-R2=0.09) of the total variability in responses and was very significant in predicting scores (LR chi-square=1833; p<0.01). Table 4 indicates the responses to a pre-activity question on adverse drug effects. See Table 5. For the first question, the average correct response rate was 61% with a weighted correct response average of 67%. For the second question, the average correct response rate was 75% with a weighted average also of 75%. These are within the predicted range. Table 4. Responses to the true-false pre-activity question on adverse drug effects Table 5. Responses to multiple choice pre-activity questions on adverse drug effects Patients at increased risk Responses to a question about vulnerable populations are presented in Table 6. The average correct response rate is 36% (34% below predicted) with a weighted average of 33% (37% below predicted). Table 6. Responses to the pre-activity question on vulnerable populations Figure 1 illustrates the correct responses compared to the predicted responses for the pre-activity question on mental illness and chronic disease risk. The responses are presented across CME activities 1–5. Standard error bars are shown. Since one of the three incorrect responses (Choice C) in the question was close to the correct answer, it may reflect a stronger knowledge base than the other two incorrect choices. We therefore included this response in the figure. Figure 1. Responses to pre-activity question on chronic disease risk in mental illness. Activity 5’s large sample size allowed for further analysis. Participants had a predicted probability of 31% in answering correctly. Participants specializing in mental health scored 14% higher than average (z=11; p<0.01) and family practitioners scored 4% higher (z=3; p<0.05). The probit regression explained 1% (pseudo-R2=0.01) of the total variability in the responses and was significant (LR chi-square=202, p<0.01). Drug safety updates Participants in Activity 5 were asked to respond to: Which of the following statements is correct? A. Comparative effectiveness trials are part of the drug approval process [5622 (34%)] B. Phase 3 clinical trials are powered to identify appetite-stimulating effects of medication [3346 (20%)] C. Incidence of weight gain can be calculated from a passive adverse events reporting system [4424 (27%)] D. Current legislation requires clinical trials in pediatric populations [2452 (15%)] On average, 15% of participants answered correctly (55% below predicted) selecting choice D. Predicted probability of answering correctly given the regression variables is 14%. Analysis by specialty indicates that pediatricians scored 7% higher than average (z=6; p<0.01) and mental health specialists scored 2% higher (z=2; p<0.02). General practitioners scored 3% below average (z=−2; p<0.03) and emergency medicine specialists scored 4% lower (z=−2; p<0.04). The regression explained 2% of the total variability in answers (pseudo-R2=0.02) and was very significant (LR chi-square=242; p<0.01). Note that this question had the highest non-response rate, with 517 (3%) of participants leaving the question blank. Regression analysis excluding non-responders had the same significant outcome variables as the analysis which included non-responders. See Table 7. The average correct response was 47% (23% below predicted) and the weighted average was 51% (19% below predicted). Table 7. 
Responses to pre-activity questions on drug safety information For Activity 5 (n=16,361), the top three professions of participants were nurse practitioners (52%, n=8407), physicians (38%, n=6212), and physician assistants (3%, n=476). The top specialties were psychiatry/mental health (12%, n=2022), family medicine (11%, n=1875), internal medicine (10%, n=1639), general practice (6%, n=946), and pediatrics (6%, n=906). We controlled the regression analysis for predicting correct pre-activity responses for specialty, professional degree, and date of CME participation by quartile. The time of participation was included in the regression analysis because it explained a significant portion of the variability but yielded no clear pattern for interpretation. Results of the regression analysis follow each applicable question. The results of the analysis by specialty concur with the practice demands of each specialty. For example, family physicians, practitioners who follow patients across the lifespan, were more likely to correctly identify the profound extent to which mental illness shortens life expectancy due to chronic diseases. The strength of the instrument is its ease of use in the context of CME programming, and its associated ability to identify trends in practitioner knowledge and some broad comparisons among practitioners. However since the instrument is comprised of multiple choice questions, responses to any one question are more appropriately viewed in the context of the full instrument. In order to assess the variability of the instrument, responses were compared across CME programs which varied in content, timeframe, recruitment, and question administration. Figure 1 depicts the responses. Responses among CME programs varied within the pre-established +/−10% test error, except for one program with a small sample size. The unweighted, correct response averages across CME programs are reported. To assess the extent to which recruitment methods may influence the pre-activity responses, the overall scores of the two Audio-Digest programs were compared. The two programs differ only in how the participants were recruited. They were recruited as either, subscribers or one-time participants. The 10% difference in responses falls within the pre-established test error. Practice-relevance and the perception of the CME program’s usefulness were considered in the instrument analysis. The participants in each of the 6 activities were asked to evaluate the program on a 1–5 Likert scale, 5 being the highest score. The ratings for each program ranged from 4.0 to 5.0, with an unweighted mean score of 4.5, suggesting that all were well-received and applicable to participants’ clinical practice. The pre-activity question responses were correlated with what participants said they learned from the activity. Participants in Activities 2 and 3 were asked to “Please list one concept or strategy gained from this activity.” The written responses fell into categories consistent with pre-activity responses: Pediatric indications (20), patient adherence (2), adverse effects (71), MedWatch reporting (9), drug interactions (5), and patient risk factor (8). A temporal trend in correct responses was not observed between the first three months and the total 13 months of the responses to Activities 2 and 3. Neither was any single news event or regulatory change identified which might be anticipated to influence practitioner knowledge on this topic during the study period. 
The childhood obesity epidemic is recent; however, practitioners who care for children appear as familiar with adverse metabolic drug effects as practitioners who care for adults. Those specializing in pediatrics performed better on a question about drug research, perhaps reflecting recent educational activities directed towards pediatricians. Across medical specialties, practitioner knowledge of medication-related weight gain was low in four areas of our study. Each of the knowledge gaps, if practice relevant, would lead clinicians to overestimate the medication's benefits or underestimate its adverse metabolic effects. The net effect of each knowledge gap would therefore affect clinical decision-making in the same direction, potentially contributing to excess metabolic dysfunction. The four areas of low practitioner knowledge are as follows: Responses to questions about drug indications and the use of antipsychotics in autism suggest that some practitioners may mistake the management of aggressive symptoms for treatment of the underlying disease process. Additionally, new oral preparations are available for children who have difficulty swallowing pills. These preparations should be used before prescribing intramuscular preparations, which have greater metabolic effects and do not have pediatric use indications at the time this manuscript is written. Among the questions pertaining to adverse metabolic effects, only 8% of practitioners selected the intended response that some diuretics have been associated with promoting insulin resistance. The 41% who incorrectly selected "angiotensin-converting enzyme inhibitors" were likely unaware that some diuretics promote insulin resistance whereas angiotensin-converting enzyme inhibitors, in contrast, may be insulin sensitizing; the distinction between these two antihypertensive therapies in a patient with metabolic syndrome would be practice relevant. Furthermore, these respondents may have erroneously equated the reduction in peripheral edema with meaningful, long-term weight loss among their patients. The 43% of practitioners who incorrectly selected "biguanide" may not have realized that metformin is in this medication class, so the response would have been more informative if the answer had read "biguanide (metformin)." Responses reflected low baseline knowledge of drug safety research and MedWatch, a passive surveillance program. It is possible that practitioners lack a framework for managing the escalating volume of drug-related information. Our findings parallel those of a recent study of physician knowledge and adverse event reporting of dietary supplements. Mental illness is associated with increased vulnerability to adverse metabolic effects. The profound chronic disease mortality among patients with mental illness was under-recognized across specialties as measured by our instrument. Awareness of the high mortality from chronic diseases among patients with mental illness might be unlikely to cause practitioners to alter the patient's psychiatric medications; however, it would guide overall care such as screening, referring, concurrent medication prescribing, and managing co-morbidities. The knowledge gaps parallel the few peer-reviewed publications on medication-related weight gain, other than the atypical antipsychotics. Reviewing the proceedings of a large international conference on obesity revealed a similar paucity of research and translational initiatives surrounding medication-related weight gain.
Additionally, current drug product information and labeling lacks a consistent format or location for communicating the potential effects of a drug on the patient’s appetite and underlying metabolism. Practitioners were familiar with the general indications for use of atypical antipsychotics in children and the adverse metabolic effects including prolactinemia, dyslipidemia, elevated liver enzymes, insulin resistance, and weight gain, findings which correlate with the lay and medical literature’s recent attention to the topic . Similarly, education initiatives about pediatric drug labeling have been directed to pediatricians and pediatricians were more knowledgeable than other practitioners about ongoing pharmacovigilance. The instrument demonstrated internal consistency across diverse CME programs (Table 1), suggesting the findings may appropriately be generalized across U.S. primary care practitioners. The sampling frame captures participants across the United States, with diverse patient populations in diverse practice settings. The degrees of the participants, nurse practitioners, physician assistants, and medical doctors, correctly represent the educational diversity of primary care practitioners. Pediatricians scored as well as their adult medicine counterparts, suggesting that future initiatives could appropriately be directed to all primary care practitioners. Additional merits of the instrument are that it can be implemented in a timely and cost-sensitive way. It can be applied to assess evidence-based practice knowledge . Study findings can provide baseline data, by which to gauge the effectiveness of future interventions. The instrument also provides a continuing education curriculum developed free of industry interests. An internet curriculum on safe medication use measurably improved clinician practice choices [19,20]. Knowledge is one of many clinical practice barriers to modifying medication-related weight gain, and merits incorporation into future initiatives. The findings, taken with the population prevalence of obesity, the emerging treatment options, and the central role of the primary care practitioner, suggest a significant prevention opportunity. Pediatricians’ knowledge base of adverse metabolic drug effects appears comparable to their counterparts in adult medicine. Regardless of medical specialty, practitioners participating in the CME programs reflected low knowledge on specific questions pertaining to drug indications, adverse metabolic effects, patient risk profiles and safety updates. Each of the four knowledge gaps would potentially influence clinical decision-making in the same manner, leading clinicians to overestimate the benefits of a drug in relation to its metabolic risks. Therefore future efforts to detail cross-specialty practitioner knowledge of metabolic drug effects and initiate education strategies to bolster knowledge could meaningfully contribute to obesity prevention. Availability of supporting data The full instrument (CME questions from all activities) is available at the journal’s request. Neither author has financial or non-financial competing interests. IK developed the CME modules in collaboration with colleagues acknowledged elsewhere in the manuscript and published CME materials. She designed the study in collaboration with the Office of Pediatric Therapeutics and drafted the manuscript. GW participated in the design of the study and performed the statistical analysis. Both authors read and approved the final manuscript. 
IK is a physician nutrition specialist board-certified in preventive medicine and public health. She is the editor of Advancing Medicine with Food and Nutrients, Second Edition (CRC Press, December 2012) and serves on the faculty of Johns Hopkins Bloomberg School of Public Health. As an inaugural FDA Commissioner's Fellow, she worked within the Office of Pediatric Therapeutics on nutrition-related issues, which gave rise to this research collaboration. We thank Anne Myers for her background work on pediatrician focus groups; Lon Osmond, Executive Director, Audio-Digest Foundation, for his collaboration; Michelle Surrichio of the American College of Preventive Medicine for her technical assistance; and Rachel Barr of the Maryland Academy of Family Physicians for her conference preparations. This manuscript is a professional contribution developed by its authors. No endorsement by the FDA is intended or should be inferred. Centers for Disease Control and Prevention. 2010. Available at: http://www.cdc.gov/nchs/data/hestat/obesity_child_07_08/obesity_child_07_08.htm [accessed Oct 17, 2010]
(Medical Xpress)—Despite early optimistic studies, the promise of curing neurological conditions using transplants remains unfulfilled. While researchers have exhaustively cataloged different types of cells in the brain, and also the largely biochemical issues underlying common diseases, neural repair shops are still a ways off. Fortunately, significant progress is being made towards identifying the broader operant principles that might bear on any one disease work-around. A review just published in Science focuses on recent work on transplanting interneurons—a diverse family of cells united by their mutual love of inhibition and their local loyalty. The UCLA-based authors reach the conclusion that the fate of transplanted neurons ultimately depends less on the influences of the new host environment, and more on the early upbringing of the cells within the donor embryo. Interneurons are born in the lateral ganglionic eminence (LGE) and the medial ganglionic eminence (MGE). Those that eventually colonize the cortex need to migrate a fairly long distance tangentially to get there, but once they arrive, they prefer to extend only local connections. By comparison, the excitatory pyramidal cells, which end up sending long-range projections, are born within the cortex itself. Researchers have found that only those interneurons from the MGE have what it takes to make long migratory journeys. LGE neurons, when transplanted into postnatal host brains, remain in tight clumps, whereas those from the MGE disperse throughout the cortex. More importantly, it is now appreciated that transplanted interneurons closely follow cell-intrinsic programs rather than relying on host-specific cues to govern their survival and differentiation. The once popular conception of a life-and-death competition for neurotrophic factors, if at play here at all, seems to be a minor influence. Herds of transplanted neurons are still thinned in the host, for example, but those that die off do so asynchronously from the endogenous interneurons, and in line with their own internal programming. Careful cell accounting has shown that after transplantation, the total number of interneurons within the host tissue greatly exceeds the nominal amount normally found. An excess balance of inhibitory cells has been seen as desirable from the point of view of treating mismatches in excitability of the kind found in diseases like epilepsy. It is important to realize, however, that binary electrical tallies only represent one aspect of neuronal function. Furthermore, in epilepsy, we might more generally view excitability as just the readily observable tip of an underlying metabolic imbalance. Nonetheless, suppression of spontaneous seizures in a mouse channelopathy model (mutant for a potassium channel known as Kv1.1) has been achieved with interneuron transplants. In yet another case of nomenclature gone wild, this particular mutant has been associated with human interneuronopathies leading to severe tonic-clonic seizures. Synapse constitution—number, type or strength of synapses—can be tough to quantify objectively and exactly. There have been indications that transplanted interneurons make 3X the number of synapses as native interneurons, but that they are only one-third as strong as would normally be expected. The keyword here is "strong." There can be any combination of synaptic capabilities involved in this idea; things like electrical amplitude, reliability, or persistence at a high rate of firing all come into play in the idea of strength.
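One way to see why the word "strong" carries so much weight is a naive back-of-the-envelope calculation (ours, not the review authors'): if transplanted cells really did form three times as many synapses at one-third the strength, the aggregate inhibitory drive would come out roughly unchanged under a simple linear model, which is exactly why the individual components of synaptic strength matter. A minimal, purely illustrative sketch:

```python
# Illustrative arithmetic only: the 3x and one-third figures are the ratios
# quoted in the article, not measured values from any specific experiment.
native_synapse_count = 1.0         # normalized synapse count per native interneuron
native_synapse_strength = 1.0      # normalized strength per synapse

transplant_count_ratio = 3.0       # "3X the number of synapses"
transplant_strength_ratio = 1 / 3  # "one-third as strong"

native_drive = native_synapse_count * native_synapse_strength
transplant_drive = (native_synapse_count * transplant_count_ratio) * \
                   (native_synapse_strength * transplant_strength_ratio)

print(f"Relative inhibitory drive, native interneuron:       {native_drive:.2f}")
print(f"Relative inhibitory drive, transplanted interneuron: {transplant_drive:.2f}")
# Both come out to 1.00 under this naive model; in reality "strength" bundles
# amplitude, reliability, and persistence during high-frequency firing.
```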
The well known and idiosyncratic interneuron known as the Chandelier cell, for example, commands access to the highly coveted axon initial segment, where it effectively exercises complete veto over its associated pyramidal cell. To increase the efficiency and fidelity of harvesting exact precursor cell subtypes, techniques like fluorescence-activated cell sorting (FACS) have been used in sample processing. Fluorescent proteins under the control of forebrain- or MGE-specific promoters can be used to select individual cell types for later transplantation. To bias cells into somatostatin- or parvalbumin-expressing populations, for example, wild-type MGE cells can then be exposed to sonic hedgehog or other fate-ruling factors. Transplanting different kinds of cells together will probably be necessary to properly treat many diseases. Even non-neural cells like astrocytes and microglia may be critical to have in the mix. Exciting results obtained in mice last year indicate that these cell types can thrive not just when transplanted across individuals but across species. The goal for the present time is to define good protocols for integrating one cell type first. Nimble cells that migrate well within the host, yet confine their influence to the local environment, might be the most sensible place to start. More information: Interneurons from Embryonic Development to Cell-Based Therapy, Science 11 April 2014: Vol. 344 no. 6180. DOI: 10.1126/science.1240622
Over the past decade, researchers have shifted their focus from documenting health care disparities to identifying solutions to close the gap in care. Finding Answers: Disparities Research for Change, a national program of the Robert Wood Johnson Foundation, is charged with identifying promising interventions to reduce disparities. Based on our work conducting systematic reviews of the literature, evaluating promising practices, and providing technical assistance to health care organizations, we present a roadmap for reducing racial and ethnic disparities in care. The roadmap outlines a dynamic process in which individual interventions are just one part. It highlights that organizations and providers need to take responsibility for reducing disparities, establish a general infrastructure and culture to improve quality, and integrate targeted disparities interventions into quality improvement efforts. Additionally, we summarize the major lessons learned through the Finding Answers program. We share best practices for implementing disparities interventions and synthesize cross-cutting themes from 12 systematic reviews of the literature. Our research shows that promising interventions frequently are culturally tailored to meet patients' needs, employ multidisciplinary teams of care providers, and target multiple leverage points along a patient's pathway of care. Health education that uses interactive techniques to deliver skills training appears to be more effective than traditional didactic approaches. Furthermore, patient navigation and engaging family and community members in the health care process may improve outcomes for minority patients. We anticipate that the roadmap and best practices will be useful for organizations, policymakers, and researchers striving to provide high-quality equitable care. The online version of this article (doi:10.1007/s11606-012-2082-9) contains supplementary material, which is available to authorized users. In 2005, the Robert Wood Johnson Foundation (RWJF) created Finding Answers: Disparities Research for Change (www.solvingdisparities.org) as part of its portfolio of initiatives to reduce racial and ethnic disparities in health care.1 RWJF charged Finding Answers with three major functions: administer grants to evaluate interventions to reduce racial and ethnic disparities in care, perform systematic reviews of the literature to determine what works for reducing disparities, and disseminate these findings nationally. Over the past seven years, Finding Answers has funded 33 research projects and performed 12 systematic literature reviews, including the five papers in this symposium.2–6 We are now beginning to leverage this research base to provide technical assistance to organizations that are implementing disparities reduction interventions, such as those participating in RWJF's Aligning Forces for Quality program.7 This paper summarizes the major lessons learned from the systematic reviews and provides a disparities reduction framework. Building on our prior work,8–10 we present a roadmap for organizations seeking to reduce racial and ethnic disparities in health care. This roadmap may be tailored for use across diverse health care settings, such as private practices, managed care organizations, academic medical centers, public health departments, and federally qualified health centers.
Specifically, we outline the steps summarized in Table 1. The five systematic reviews in the present symposium examined interventions to improve minority health and potentially reduce disparities in asthma, HIV, colorectal cancer, prostate cancer, and cervical cancer.2–6 While many valuable ideas to address racial and ethnic health disparities are being pursued outside of the healthcare system, Finding Answers focuses specifically on what can be accomplished once regular access to healthcare services is achieved. Thus, the reviews focused on interventions that occur in or have a sustained linkage to a healthcare delivery setting; programs that were strictly community-based were outside the scope of the project. Additionally, the reviews examined racial and ethnic disparities in care and improvements in minority health, rather than geographic, socioeconomic, or other disparities. For a description of search strategies employed in these reviews, see the technical web appendix, which can be accessed online (Electronic Supplementary Material). Each review identified promising practices to improve minority health within the healthcare setting. The asthma paper found that educational interventions were most common, with culturally tailored, skills-based education showing promise.5 Outpatient support, as well as education for inpatient and emergency department patients, was effective. Similarly, the HIV review noted that interactive, skills-based instruction was more likely to be effective than didactic educational approaches for changing sexual health behavior.3 The paper identified a dearth of interventions that target minority men who have sex with men. The colorectal cancer review found that patient education and navigation were the most common interventions and that those with intense patient contact (e.g., in person or by telephone) were the most likely to increase screening rates.4 The colorectal cancer review identified no articles that described interventions to reduce disparities in post-screening follow-up, treatment, survivorship, or end-of-life care. Based on low to moderate evidence, the cervical cancer review reported that navigation combined with either education delivered by lay health educators or telephone support can increase the rate of screening for cervical cancer among minority populations.2 Telephone counseling might also increase the diagnosis and treatment of premalignant lesions of the cervix for minority women. The prostate cancer review focused on the importance of informed decision making for addressing prostate cancer among racial and ethnic minority men.6 Educational programs were the most effective intervention for improving knowledge among screening-eligible minority men. Cognitive behavioral strategies improved quality of life for minority men treated for localized prostate cancer. However, more research is needed about interventions to improve informed decision making and quality of life among minority men with prostate cancer. We looked across these reviews and Finding Answers' previous research,11–17 and identified several cross-cutting themes. Our findings showed that promising interventions frequently were multi-factorial, targeting multiple leverage points along a patient's pathway of care. Culturally-tailored interventions and those that employed a multi-disciplinary team of care providers also tended to be effective.
Additionally, we found that education using interactive methods to deliver skills training was more effective than traditional, didactic approaches in which the patient was a passive learner. Patient navigation and interventions that actively involved family and community members in patient care showed promise for improving minority health outcomes. Finally, the majority of interventions targeted changing the knowledge and behavior of patients, generally with some form of education. Interventions directed at providers, microsystems, organizations, communities, and policies were far less common, thus representing an opportunity for future research. Table 1 summarizes the major steps health care organizations need to undertake to reduce disparities. Past efforts have focused on Step 1 (e.g. collecting performance data stratified by race, ethnicity, and language) or Step 4 (designing a specific intervention). Our roadmap highlights that these are crucial steps, but will have limited impact unless the other steps are addressed. Effective implementation and long-term sustainability require attention to all six steps. When health care organizations and providers realize there are disparities in their own practices,18 they become motivated to reduce them.19 Therefore, the Patient Protection and Affordable Care Act of 2010 makes the collection of performance data stratified by race, ethnicity, and language (REL) a priority.20 Similarly, RWJF's Aligning Forces for Quality Program initially focused its disparities efforts on the collection of REL data in different communities. The Institute of Medicine (IOM) recently recommended methods to collect REL data,21 and groups such as the Health Research and Educational Trust (HRET) have developed toolkits to guide organizations in this effort.22 Besides race-stratified performance data, training in health disparity issues (e.g., through cultural competency training) may help providers identify and act on disparities in their own practices. However, while cultural competency training and stratified performance data may increase the readiness of providers and organizations to change their behavior,19 these interventions will need to be accompanied by more intensive approaches to ameliorate disparities. Sequist et al. found that cultural competency training and performance reports of the quality of diabetes care stratified by race and ethnicity increased providers' awareness of disparities, but did not improve clinical outcomes.23 Therefore, our roadmap for reducing disparities highlights the importance of combining REL data collection with interventions targeted towards specific populations and settings. Interventions to reduce disparities will not get very far unless there is a basic quality improvement structure and process upon which to build interventions.24,25 Basic elements include a culture where quality is valued, creation of a quality improvement team comprised of all levels of staff, a process for quality improvement, goal setting and metrics, a local team champion, and support from top administrative and clinical leaders. If robust quality improvement structures and processes do not exist, then they must be created and nurtured while disparities interventions are developed. For too long, disparities reduction and quality improvement have been two different worlds.
People generally thought about reducing disparities separately from efforts to improve quality, and oftentimes different people in an organization were responsible for implementing disparity and quality initiatives. A major development over the past decade is the increasing recognition that equity is a fundamental component of quality of care. Efforts to reduce disparities need to be mainstreamed into routine quality improvement efforts rather than being marginalized.26 That is, we need to think about the needs of the vulnerable patients we serve as we design interventions to improve care in our organizations, and address those needs as part of every quality improvement initiative. The Institute of Medicine’s Crossing the Quality Chasm report stated that equity was one of six components of quality,27 and the IOM’s 2010 report Future Directions for the National Healthcare Quality and Disparities Reports highlighted equity further by elevating it to a cross-cutting dimension that intersects with all components of quality care.28 Major health care organizations have instituted initiatives that promote the integration of equity into quality efforts including the American Board of Internal Medicine (Disparities module as part of the recertification process), American College of Cardiology (Coalition to Reduce Racial and Ethnic Disparities in Cardiovascular Disease Outcomes [CREDO] initiative),29 American Medical Association (Commission to End Health Care Disparities), American Hospital Association (Race, ethnicity, and language data collection),22 Joint Commission (Advancing Effective Communication, Cultural Competence, and Patient- and Family-Centered Care: a Roadmap for Hospitals),30 and National Quality Forum (Healthcare Disparities and Cultural Competency Consensus Standards Development). For many health care organizations and providers, this integration of equity and quality represents a fundamental change from generic quality improvement efforts that improve only the general system of care, to interventions that improve the system of care and are targeted to specific priority populations and settings. While several themes have emerged regarding successful interventions to reduce health care disparities based on our systematic reviews and grantees, solutions must be individualized to specific contexts, patient populations, and organizational settings.31 For example, solutions for reducing diabetes disparities for African-Americans in Chicago may differ from the answers for African-Americans in the Mississippi Delta. We recommend determining the root causes of disparities in the health care organization or provider’s patient population and designing interventions based on a conceptual model that targets six levels of influence: patient, provider, microsystem, organization, community, and policy (Table 2).8,9 Each level represents a different leverage point that can be addressed to reduce disparities. The relative importance of these levels may vary across diverse organizations and patient populations. Specific intervention strategies can then be developed to target different levels of influence. Table 3 offers an overview of strategies identified through the review of approximately 400 disparities intervention studies, including the 33 Finding Answers projects and 12 systematic literature reviews. Common intervention strategies include delivering education and training, restructuring the care team, and increasing patient access to testing and screening. 
About half of the interventions targeted only one of the levels of influence described above; most efforts were directed at patients in the form of education or training. Research evaluating pay-for-performance, on the other hand, was scant and requires further attention, especially given current interest in incentive-based programs. Going forward, Finding Answers aims to categorize each of the approximately 400 studies by level of influence and strategy, and to identify which combinations are promising for disparities reduction. Organizations can find practical resources and promising intervention strategies on the Finding Answers website (www.solvingdisparities.org) or the Agency for Healthcare Research and Quality (AHRQ) Health Care Innovations Exchange (www.innovations.ahrq.gov). Systematic reviews such as those by Finding Answers and forthcoming ones from the AHRQ Evidence-Based Practice Center Program and the Veterans Administration can inform what types of interventions are most appropriate in different situations. In addition, organizations can learn about successful projects from peers through learning collaboratives,24 site visits, case studies, and webinars. While there is no silver bullet to reduce disparities, successful interventions reveal important themes. As previously noted, we looked across 12 systematic reviews of the literature and identified promising practices that can inform the design of future disparities interventions.2–6,11–17 These include culturally tailoring programs to meet patients’ needs, patient navigation, and engaging multidisciplinary teams of care providers in intervention delivery. Effective interventions frequently target multiple leverage points along a patient’s pathway of care and actively involve families and community members in the care process. Additionally, successful health education programs often incorporate interactive, skills-based training for minority patients. The National Institutes of Health recently held its fifth annual conference on the science of dissemination and implementation to promote further research in this field, create opportunities for peer-to-peer learning, and showcase available models and tools. One such model is the Consolidated Framework for Implementation Research (CFIR), for which Damschroder et al. reviewed conceptual models of relevant factors in implementing a quality improvement intervention and synthesized existing frameworks into a single overarching model.32 The CFIR covers five domains: intervention characteristics (e.g. relative advantage, adaptability, complexity, cost), outer setting (e.g. patient needs and resources, external policy and incentives), inner setting (e.g. culture, implementation climate, readiness for implementation), characteristics of the individuals involved (e.g. knowledge and beliefs about the intervention, self-efficacy, stage of change), and the process of implementation (e.g. planning, engaging, executing, evaluating). Too often organizations focus on the content of an intervention without planning its implementation in sufficient detail. A model such as CFIR supplies a checklist of factors to consider in implementing an intervention to reduce disparities. Through work with our 33 grantees, we have developed a series of best practices for implementing interventions to reduce disparities. 
These lessons were pulled from detailed qualitative data gathered through the Finding Answers program, and represent perspectives from organization leadership, providers, administrators, and front-line staff. We found common implementation challenges and solutions across health care settings. Table 4 summarizes best practices for disparities reduction efforts, provides the rationale and expected outcomes, and offers recommended strategies for delivering a high-quality equity initiative. Implementation is an iterative process and organizations are unlikely to get the perfect solution on their first attempt. Thus, evaluation of the intervention and adjustments to the program based on performance data stratified by race, ethnicity, and language are integral parts of the implementation process. Setting realistic goals is essential to accurately assess program effectiveness. Processes of care (e.g. measurement of hemoglobin A1c in patients with diabetes) generally improve more rapidly than patient outcomes (e.g. actual hemoglobin A1c value), and may therefore be better markers of short-term disparities reduction success, while outcomes could be longer-term targets. Health care organizations, administrative leaders, and providers need to proactively plan for the sustainability of the intervention. Sustainability is dependent upon institutionalizing the intervention and creating feasible financial models. Too often interventions are dependent upon the initial champion and first burst of enthusiasm. If that champion leaves the organization or if staff tire after the early stages of implementation, then the disparities initiative is at risk for discontinuation. Institutionalization requires promoting an organizational culture that values equity, creating incentives to continue the effort, whether financial and/or non-financial, and weaving the intervention into the fabric of everyday operations so that it is part of routine care as opposed to a new add-on (e.g. Step 3 in Table 1). In the long-term, however, interventions must be financially viable. The business case for reducing disparities is evolving and must be viewed from both societal and individual organization/provider perspectives.33–35 From a societal perspective, the business case for reducing disparities centers on direct medical costs, indirect costs, and the creation of a healthy national workforce in an increasingly competitive global economy. Laveist et al. estimate that disparities for minorities cost the United States $229 billion in direct medical expenditures and $1 trillion in indirect costs between 2003 and 2006.36 America’s demographics are becoming progressively more diverse. The United States Census Bureau estimates that by 2050, the Hispanic population will reach 30 %, the black population 13 %, and the Asian population 8 %.37 Thus, from global and national economic perspectives, disparities reduction will become increasingly important if we are to have a healthy workforce that can successfully compete in the international marketplace and support the rapidly growing non-working aging population on the Social Security and Medicare entitlement programs. From the perspective of the individual health care organization or provider, the immediate incentives are more complex. Integrated care delivery systems have an incentive to reduce disparities to decrease costly emergency department visits and hospitalizations. 
Large insurers are incentivized to provide high quality care for everyone to be more competitive in marketing their products to employers with increasingly diverse workforces. However, outpatient clinics and providers in the current, predominantly fee-for-service world, especially those serving the uninsured and underinsured, frequently do not have clear incentives to reduce disparities since the money saved from the prevented emergency department visit or hospitalization does not accrue to them.34 Currently, it is difficult to accurately predict the results of health care reform and efforts to contain the Medicare and Medicaid budgets, but several trends indicate that organizations would be wise to integrate disparities reduction into their ongoing quality improvement initiatives. Major national groups such as the Department of Health and Human Services (HHS), Agency for Healthcare Research and Quality, Centers for Disease Control (CDC), Centers for Medicare and Medicaid Services, and Institute of Medicine have consistently stressed the importance of reducing health care disparities and using quality improvement as a major tool to accomplish this goal.28,38–42 The Affordable Care Act emphasizes collection of race, ethnicity, and language data.20 Private demonstration projects, such as the Robert Wood Johnson Foundation Aligning Forces for Quality Program,7 aim for multistakeholder coalitions of providers, payers, health care organizations, and consumers to improve quality and reduce disparities on regional levels. Intense policy attention has been devoted to accountable care organizations,43 the patient-centered medical home,44 and bundled payments.45 These organizational structures and financing mechanisms emphasize coordinated, population-based care that may reduce disparities. Reducing racial and ethnic disparities in care is the right thing to do for patients, and, from a business perspective, health care organizations put themselves at risk if they do not prepare for policy and reimbursement changes that encourage reduction of disparities. We believe that health care organizations and providers would be imprudent if they did not plan for such payment and coverage possibilities. As outlined in our roadmap, it is critical to create an organizational culture and infrastructure for improving quality and equity. Organizations must design, implement, and sustain interventions based on the specific causes of disparities and their unique institutional environments and patient needs. To be most effective, all of these elements eventually need to be addressed;24 however, we do not want to encourage paralysis for those who might perceive a daunting set of obstacles to overcome. Instead, our experience has been that it is useful for an organization to start working on disparities by targeting whatever step or action feels right to them and is thus a priority.46 Eventually the other steps will need to be addressed, but reducing disparities is often a dynamic process that evolves over time. While more disparities intervention research is needed, we have learned much over the past 10 years about which approaches are likely to succeed. The time for action is now. We would like to thank Melissa R. Partin, PhD, who served as the JGIM Deputy Editor for the six manuscripts in this Special Symposium: Interventions to Reduce Racial and Ethnic Disparities in Health Care. Dr. Partin provided valuable advice and feedback throughout this project. Marshall H. Chin, MD, MPH, and Amanda R.
Clarke, MPH, served as the Robert Wood Johnson Foundation Finding Answers: Disparities Research for Change Systematic Review Leadership Team that oversaw the teams writing the articles in this symposium. Support for this publication was provided by the Robert Wood Johnson Foundation Finding Answers: Disparities Research for Change Program. The Robert Wood Johnson Foundation had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, approval, or decision to submit the manuscript for publication. Presented in part at the Society of General Internal Medicine Midwest Regional Meeting, September 23, 2010, Chicago, Illinois; the Society of General Internal Medicine Annual Meeting, May 5, 2011, Phoenix, Arizona; the American Public Health Association Annual Meeting, November 1, 2011, Washington, D.C.; and the Institute for Healthcare Improvement Annual National Forum, December 4, 2011, Orlando, Florida. The authors report no conflicts of interest with this work. Dr. Chin was also supported by a National Institute of Diabetes and Digestive and Kidney Diseases Midcareer Investigator Award in Patient-Oriented Research (K24 DK071933), Diabetes Research and Training Center (P60 DK20595), and Chicago Center for Diabetes Translation Research (P30 DK092949).
Reviewed April 2006
What is the official name of the MITF gene?
The official name of this gene is "microphthalmia-associated transcription factor." MITF is the gene's official symbol. The MITF gene is also known by other names, listed below. Read more about gene names and symbols on the About page.
What is the normal function of the MITF gene?
The MITF gene provides instructions for making a protein called microphthalmia-associated transcription factor. This protein plays a role in the development, survival, and function of certain types of cells. To carry out this role, the protein attaches to specific areas of DNA and helps control the activity of particular genes. On the basis of this action, the protein is called a transcription factor. Microphthalmia-associated transcription factor helps control the development and function of pigment-producing cells called melanocytes. Within these cells, this protein also controls production of the pigment melanin, which contributes to hair, eye, and skin color. Melanocytes are also found in the inner ear and play an important role in hearing. Additionally, microphthalmia-associated transcription factor regulates the development of specialized cells in the eye called retinal pigment epithelial cells. These cells nourish the retina, the part of the eye that detects light and color. Some research indicates that microphthalmia-associated transcription factor also regulates the development of cells that break down and remove bone (osteoclasts) and cells that play a role in allergic reactions (mast cells). Microphthalmia-associated transcription factor has a particular structure with three critically important regions. One region, known as the basic motif, binds to specific areas of DNA. Other regions, called the helix-loop-helix motif and the leucine-zipper motif, are critical for protein interactions. These motifs allow molecules of microphthalmia-associated transcription factor to interact with each other or with other proteins that have a similar structure. These interactions produce a two-protein unit (dimer) that functions as a transcription factor.
Does the MITF gene share characteristics with other genes?
The MITF gene belongs to a family of genes called bHLH (basic helix-loop-helix). A gene family is a group of genes that share important characteristics. Classifying individual genes into families helps researchers describe how genes are related to each other. For more information, see What are gene families? in the Handbook.
How are changes in the MITF gene related to health conditions?
Where is the MITF gene located?
Cytogenetic Location: 3p14.2-p14.1
Molecular Location on chromosome 3: base pairs 69,739,435 to 69,968,337
The MITF gene is located on the short (p) arm of chromosome 3 between positions 14.2 and 14.1. More precisely, the MITF gene is located from base pair 69,739,435 to base pair 69,968,337 on chromosome 3. See How do geneticists indicate the location of a gene? in the Handbook.
Where can I find additional information about MITF?
You and your healthcare professional may find the following resources about MITF helpful. You may also be interested in these resources, which are designed for genetics professionals and researchers.
What other names do people use for the MITF gene or gene products?
See How are genetic conditions and genes named? in the Handbook.
Where can I find general information about genes?
The Handbook provides basic information about genetics in clear language. These links provide additional genetics resources that may be useful.
What glossary definitions help with understanding MITF?
acids; amino acid; dimer; DNA; epithelial; gene; hypopigmentation; leucine; mast cells; melanin; melanocytes; motif; pigment; pigmentation; protein; retina; syndrome; transcription; transcription factor
You may find definitions for these and many other terms in the Genetics Home Reference Glossary. See also Understanding Medical Terminology.
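As a small worked example of what the base-pair coordinates above imply (a sketch only, using the coordinates quoted in this entry; the genome build is not specified on the page), the genomic span of the locus can be computed directly:

```python
# Coordinates exactly as listed in this entry for MITF on chromosome 3;
# the genome build/assembly is not stated on the page.
start_bp = 69_739_435
end_bp = 69_968_337

span_bp = end_bp - start_bp + 1   # inclusive of both endpoints
print(f"MITF genomic span: {span_bp:,} base pairs (about {span_bp / 1000:.0f} kb)")
# Note: this span covers the whole genomic locus, introns included, and is not
# the length of the mature mRNA or of the protein-coding sequence.
```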
Psychiatry has begun the laborious effort of preparing the DSM-V, the new iteration of its diagnostic manual. In so doing, it once again wrestles with the task set by Carl Linnaeus, to "cleave nature at its joints." However, these "joints," the boundaries between psychiatric disorders, such as that between bipolar disorder and schizophrenia, are far from clear. Prior versions of DSM followed the path outlined by Emil Kraepelin in separating these disorders into distinct categories. Yet, we now know that symptoms of bipolar disorder may be seen in patients with schizophrenia and the reverse is true, as well. Further, our certainty about the boundary of these disorders is undermined by growing evidence that both schizophrenia and bipolar disorder emerge, in part, from the cumulative impact of a large number of risk genes, each of which conveys a relatively small component of the vulnerability to these disorders. And since many versions of these genes appear to contribute vulnerability to both disorders, the study of common gene variations has raised the possibility that there may be diagnostic, prognostic, and therapeutic meaning embedded in the high degree of variability in the clinical presentations of patients with each disorder. In addition, many symptoms of schizophrenia and bipolar disorder are traits that are present in the healthy population but are more exaggerated in patient populations. To borrow from Einstein, who struggled to reconcile the wave and particle features of light, our psychiatric diagnoses behave like waves (i.e., spectra of clinical presentations) and particles (traditional categorical diagnoses). Although new genetic approaches may revise our current thinking, such as studies of microdeletions, microinsertions, and microtranslocations of the genome, the wave/particle approach to psychiatric diagnosis places a premium on understanding the "real" clustering of patients into subtypes as opposed to groups created to correspond to the current DSM-IV. Latent class analysis is one statistical approach for estimating the clustering of subjects into groups. In their study of 270 Irish families, published in the July 15th issue of Biological Psychiatry, Fanous and colleagues conducted this type of analysis, with subjects clustered into the following groups: bipolar, schizoaffective, mania, schizomania, deficit syndrome, and core schizophrenia. When they divided the affected individuals in the study using this approach, they found four chromosomal regions that were linked to the risk for these syndromes that were not implicated when subjects were categorized according to DSM-IV diagnoses. Dr. Fanous notes that this finding "suggests that schizophrenia as we currently define it may in fact represent more than one genetic subtype, or disease process." According to John H. Krystal, M.D., Editor of Biological Psychiatry and affiliated with both Yale University School of Medicine and the VA Connecticut Healthcare System: "Their findings advance the hypothesis that the variability in the clinical presentation of patients diagnosed using DSM-IV categories is meaningful, providing information that may be useful as DSM-V is prepared. However, we do not yet know whether the categories generated by this latent class analysis will generalize to other populations." This paper highlights an important aspect of the complexity of establishing valid psychiatric diagnoses using a framework adopted from traditional categorical models. - Fanous et al.
Novel Linkage to Chromosome 20p Using Latent Classes of Psychotic Illness in 270 Irish High-Density Families. Biological Psychiatry, 2008; 64 (2): 121 DOI: 10.1016/j.biopsych.2007.11.023
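To give a concrete sense of what "latent class analysis" does, here is a minimal, self-contained sketch that clusters subjects from binary symptom profiles with a simple EM algorithm. The data and variable names are invented for illustration, and this is a generic toy implementation, not the model, software, or symptom set used by Fanous and colleagues.

```python
import numpy as np

def latent_class_em(X, n_classes=2, n_iter=200, seed=0):
    """Tiny EM fit of a latent class model for binary symptom indicators.

    X is an (n_subjects, n_symptoms) array of 0/1 values. Returns the class
    weights, the per-class symptom probabilities, and the posterior class
    memberships for each subject."""
    rng = np.random.default_rng(seed)
    n_subjects, n_symptoms = X.shape
    weights = np.full(n_classes, 1.0 / n_classes)             # P(class)
    probs = rng.uniform(0.25, 0.75, (n_classes, n_symptoms))  # P(symptom | class)

    for _ in range(n_iter):
        # E-step: posterior probability of each latent class for each subject
        log_lik = (X[:, None, :] * np.log(probs[None]) +
                   (1 - X[:, None, :]) * np.log(1.0 - probs[None])).sum(axis=2)
        log_post = np.log(weights)[None, :] + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)

        # M-step: re-estimate class weights and symptom probabilities
        weights = post.mean(axis=0)
        probs = (post.T @ X) / post.sum(axis=0)[:, None]
        probs = probs.clip(1e-6, 1.0 - 1e-6)

    return weights, probs, post

# Invented toy data: 6 subjects scored on 6 binary symptom indicators.
X = np.array([[1, 1, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 1],
              [0, 0, 1, 1, 1, 1],
              [0, 0, 0, 1, 1, 1],
              [1, 1, 1, 0, 0, 0],
              [0, 1, 0, 1, 1, 1]])
weights, probs, post = latent_class_em(X, n_classes=2)
print("Estimated class weights:", np.round(weights, 2))
print("Most likely class per subject:", post.argmax(axis=1))
```

In the published study the classes were estimated from many more clinical variables across 270 families and were then used to define phenotypes for linkage analysis; the sketch above only conveys the clustering step.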
Interview conducted by April Cashin-Garbutt, BA Hons (Cantab)
What is LDL cholesterol? What blood level of LDL cholesterol is considered optimal and why are high levels of LDL cholesterol a key marker of death risk from heart disease?
Cholesterol is a lipid that is both produced in the liver and obtained through food intake. Some amount of cholesterol, which is transported through the bloodstream in lipoproteins, is essential for normal body function. There are different types of lipids or fats, including low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, and triglycerides. While HDL ("good") cholesterol is carried from parts of the body to the liver, which removes the cholesterol from the body, high levels of LDL ("bad") cholesterol remain in the bloodstream and can cause arterial clogging, increasing the risk of stroke and heart disease. Blood lipid levels are the primary biomarkers for cardiovascular disease, which accounts for one in every three deaths in America. Every 10 mg/dL decline in LDL cholesterol is associated with an approximately 5-13% decline in major vascular disease events, such as strokes and mortality. LDL cholesterol levels below 100 mg/dL are considered optimal by the American Heart Association, while LDL cholesterol levels of 100-129 mg/dL are considered near or above optimal, 130 to 159 mg/dL is borderline high, 160 to 189 mg/dL is high, and 190 mg/dL or higher is considered very high. To support heart health, it is very important to maintain optimal LDL cholesterol levels. Treatments typically include lifestyle modification and may include therapy with lipid-lowering medications such as statins.
How have LDL cholesterol blood levels changed over the past several decades?
Average blood cholesterol values, the primary cardiovascular disease biomarker, have declined in the United States since at least 1960. Results of three National Health and Nutrition Examination Surveys (NHANES) of nearly 40,000 patients for the years 1988 to 2010 demonstrated that LDL cholesterol levels have declined in the United States while the use of lipid-lowering medications has increased. These trends are also reflected in the mortality rates attributable to cardiovascular disease, which declined by approximately 60% from 1970 through 2000, and by 30% from 2000 through 2009. These improvements are due largely to increased use of evidence-based medical therapies, such as statins, which lower lipid levels, as well as lifestyle changes, such as diet and exercise. Based on these factors, the American Heart Association (AHA) 2020 Strategic Impact Goals target a 20% relative improvement in overall cardiovascular health for all Americans. But as the latest Quest Diagnostics Health Trends study suggests, improvements in cholesterol levels may have stalled.
When did it come to your attention that the declines in LDL cholesterol blood levels had come to an end?
We were not aware of this pattern until we produced our latest Quest Diagnostics Health Trends study. These are studies based on analysis of the company's diagnostic data. Our study is the first nationally-representative analysis to show that improvements in the United States in LDL cholesterol blood levels, a key marker of death risk from heart disease, abruptly ended in 2008, and may have stalled since. Specifically, we found a 13% decline in the annual mean LDL cholesterol level of the study population over the 11-year period, similar to the NHANES data.
However, we also found the decline ended in 2008, and stalled between 2009 and 2011, the last year we studied. The peer-reviewed, open access journal PLOS ONE published the study in May 2013.
What sparked researchers at Quest Diagnostics to investigate this sudden end to LDL cholesterol blood level declines?
A team of researchers at Quest Diagnostics was inspired to perform the study after NHANES published its data showing declining blood cholesterol values from 1999 through 2010. As we began our analysis, we had no pre-existing theories regarding trends in LDL cholesterol levels; in fact, we assumed we might find a continuation of the same trends that had occurred over the last fifty years. The finding that LDL cholesterol levels have plateaued since 2008 is novel.
What did the study involve?
Our study examined de-identified low-density lipoprotein blood-serum cholesterol test results of nearly 105 million individual adult patients of Quest Diagnostics of both genders in all 50 states and the District of Columbia from 2001-2011. The study is the largest study of LDL cholesterol levels in an American population, and the first large-scale analysis to include data from the recent years 2009-2011. Other studies that have examined population trends in LDL cholesterol have been constrained by smaller populations, shorter study periods, and smaller geographical coverage. Our study reported data annually whereas most recently published studies, such as the NHANES research, report results in time periods that cover multiple years, which may mask the plateau observed in our study.
In addition to finding that LDL levels stalled, did your study provide any other notable insights?
Yes, we found differences between men and women. Specifically, we found a slightly greater decline in LDL cholesterol levels among men compared to women. These differences may reflect meaningful differences in the prescription rate and effectiveness of lipid-lowering interventions, including statins and lifestyles, between genders. The differences may also be due in part to under-appreciation of heart disease risk in women. Medical understanding of differences in heart disease risks by gender is relatively new. For instance, female-specific American Heart Association guidelines for women were introduced only in 1999.
What hypotheses were put forward as the reasons behind this trend?
It's reasonable to hypothesize that the economic recession, which began at about the same time that LDL cholesterol values flattened in our study, possibly played a role in the plateau of LDL cholesterol levels. Patients dealing with financial constraints may have been less inclined to visit their physician or use their medications at full dose, limiting access to and effectiveness of treatment. Individuals may also have experienced changes in stress levels, diet, sleep and other behaviors, due to the poor economy, which in turn may have adversely impacted lipids. It's also possible that statin users in the study may have reached the maximum therapeutic-threshold level or that increases in obesity prevalence or other co-morbid factors during the 11 years of the study period contributed to the LDL cholesterol plateau. Analysis of these theories falls outside the purview of our study, but we believe they warrant additional investigation.
What can be done to reverse this trend?
We hope this new study will encourage additional population research to inform public health efforts. But we also believe the study should prompt individual patients to be vigilant about practicing healthy behaviors and lipid-lowering treatment plans. The most important lesson to be gleaned from our study is that patients need to remain engaged in their health care and to communicate with their physicians. Given the high mortality rate from cardiovascular disease, this is especially important with heart health. If economic or other factors could potentially affect the ability of patients to maintain a consistent treatment regimen, they should talk freely, honestly and without embarrassment to their clinician regarding all possible options. Our hope is physicians and patients will have more productive conversations about the importance of LDL control to cardiovascular health as a result of this study.
What are Quest Diagnostics' plans for the future?
Quest Diagnostics is focused on developing and offering diagnostic innovations along a continuum of care. We are particularly interested in diagnostics that can help prevent or arrest disease – that is, diagnostic services that can help identify risk factors for disease, thereby potentially helping physicians to prevent its onset, or to detect disease in early, treatable stages. Certainly, this Quest Diagnostics Health Trends study speaks to the need for the medical community and patients to be vigilant in taking steps to identify heart health risks before disease occurs. The prevention of disease is always the optimal outcome.
Where can readers find more information?
Please visit our website at www.QuestDiagnostics.com or access the study at www.QuestDiagnostics.com/HealthTrends
About Dr. Harvey Kaufman
Harvey W. Kaufman, M.D., is Senior Medical Director for Quest Diagnostics and the company's Medical Director for its General Health and Wellness business. He is also the principal medical investigator for Quest Diagnostics Health Trends studies, and has served in a variety of roles for the company for more than 20 years. Dr. Kaufman graduated from Massachusetts Institute of Technology (S.B. and S.M.), Boston University School of Medicine (M.D.), and New York University's Leonard N. Stern School of Business (M.B.A. with Distinction). Dr. Kaufman is board certified in Anatomic and Clinical Pathology and Chemical Pathology. He serves on various national and local organizations, including the Quest Diagnostics Foundation.
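As a recap of the American Heart Association cut points quoted at the start of the interview, the category boundaries can be written down as a small helper function. This is purely illustrative (the thresholds are those stated in the interview) and is not a clinical decision tool.

```python
def classify_ldl(ldl_mg_dl: float) -> str:
    """Map an LDL cholesterol value (mg/dL) to the category ranges quoted in
    the interview. Illustrative only, not medical advice."""
    if ldl_mg_dl < 100:
        return "optimal"
    elif ldl_mg_dl < 130:
        return "near or above optimal"
    elif ldl_mg_dl < 160:
        return "borderline high"
    elif ldl_mg_dl < 190:
        return "high"
    else:
        return "very high"

for value in (85, 115, 145, 170, 200):
    print(value, "mg/dL ->", classify_ldl(value))
```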
The thalassemias are among the most common genetic disorders worldwide, occurring more frequently in the Mediterranean region. The aim of this study was to determine the frequency of sensorineural hearing loss in transfusion-dependent patients with major ß-thalassemia in southern Iran. A cross-sectional study was performed on 308 cases of major beta-thalassemia referred to the Thalassemia Center of Shiraz University of Medical Sciences between 2006 and 2007. The diagnosis of ß-thalassemia major was based on clinical history, complete blood count, and hemoglobin electrophoresis. Clinical data such as serum ferritin level, deferoxamine (DFO) dose, mean daily doses of DFO (mg/kg), and audiometric variables were recorded. Out of 308 cases, 283 (96.5%) had normal hearing and 10 (3.5%) sensorineural hearing loss. There was no statistically significant difference between the two groups regarding mean age, weight, age at the first blood transfusion, or age at the first DFO infusion. We found the lowest incidence of sensorineural hearing loss in a large population of patients suffering from major thalassemia who received DFO. We show that DFO is not ototoxic at a low dose. When considering all related literature as a whole, there has been much critical misrepresentation about DFO ototoxicity. The thalassemias are among the most common genetic disorders worldwide, occurring more frequently in the Mediterranean region, the Indian subcontinent, Southeast Asia, and West Africa. Some authors have found that about 20%–29% of cases suffer from sensorineural hearing loss (SNHL). They proposed that deferoxamine (DFO) gives rise to SNHL[1, 2]. However, others have challenged this idea and believe that the incidence of SNHL in β-thalassemia is not higher than in the general population[3, 4]. Injection of 600 mg DFO/kg per day for 30 days in guinea pigs increased auditory thresholds and caused loss of inner ear hair cells. In contrast, no effect on auditory function had been found in studies of chinchilla and mice[6, 7]. In a study by Ryals et al. in the experimental quail, DFO was injected daily for 30 days at either 750 mg/kg or 300 mg/kg body weight. These dosages were above the limits considered potentially ototoxic in humans. Then the morphology of the supporting cells and hair cells was studied. At the higher dose of deferoxamine, morphological changes were intensified and began to extend to hair cells. They concluded that DFO clearly has the potential to cause damage to the avian inner ear. Ryals et al.'s study suggests that high doses and prolonged administration of the drug are required for this toxicity to be observable. Because of these controversies and the large variability in the incidence of SNHL in these patients in the English literature, and because Fars province has a high prevalence of the thalassemias in Iran, it is worth studying the prevalence of SNHL in a relatively large and adequate population of these patients, in order to provide hearing monitoring protocols for this population in the era of managed care. We undertook a cross-sectional study on 308 cases of major beta-thalassemia referred to the Thalassemia Center of Shiraz University of Medical Sciences between 2006 and 2007. The study was approved by the Shiraz University of Medical Sciences ethics committee, and written consent was taken before starting the study.
Exclusion criteria were: cases with a past history of ear operation (such as tympanomastoidectomy, myringotomy, and ventilation tube); individuals exposed to ototoxic medication other than DFO; and cases with preexisting hearing loss or abnormal physical exams (such as chronic otitis media, otitis media with effusion, or myringosclerosis). The diagnosis of ß-thalassemia major was based on clinical history, complete blood count, and hemoglobin electrophoresis. All enrolled patients underwent an otolaryngological visit and microscopic otoscopy. Clinical data such as serum ferritin level, DFO dose, mean daily doses of DFO (mg/kg), mean serum ferritin level over the last 3 years, volume of packed cell transfusion, mean hemoglobin, and hearing status were recorded in a specially formatted questionnaire. Variables used for evaluation of hearing status were pure tone air and bone conduction thresholds at 250–8000 Hz, speech discrimination score (SDS), and speech reception threshold (SRT). Normal hearing was defined as being between 0 and 20 decibels (dB), and ototoxicity as a hearing loss of 20 dB or more at two or more adjacent frequencies. Statistical analysis was performed using SPSS software, version 11.5. Descriptive statistics such as mean, median, and standard deviation were used. The chi-square test was performed to compare the group of patients with hearing loss and the group without hearing loss. A P value less than 0.05 was considered significant. From a total of 308 cases of major beta-thalassemia, 15 cases were excluded from the study due to abnormal otologic history or physical examination. Regarding otologic history, there were 4 cases with a history of ear operation (two cases with tympanomastoidectomy and one case with myringotomy and ventilation tube), and one case had a history of hearing loss since the age of 6 months. We detected 10 cases with abnormal physical exams: 7 cases had otitis media with effusion, and 3 cases had myringosclerosis. Finally, 283 (96.5%) had normal hearing and 10 (3.5%) abnormal hearing. Of these patients, 5 had bilateral symmetric hearing loss and 5 unilateral. There was no statistically significant difference between the two groups regarding mean age, weight, age at the first blood transfusion, or age at the first DFO infusion (Table 1). The prevalence of right-ear sensorineural hearing loss was zero at 250, 500, 1000, and 2000 Hz, 0.3% at 3000 Hz, 1% at 4000 Hz, and 2.6% at 8000 Hz. In the left ear, the prevalence was zero at 250, 500, and 1000 Hz, 0.3% at 2000, 3000, and 4000 Hz, and 2.3% at 8000 Hz. There were 7 patients with sensorineural hearing loss at only one frequency and 3 patients (1% of patients) with loss at 2 or more consecutive frequencies. Not much was known about the impact of major β-thalassemia disease and the toxicity of DFO therapy on the hearing organ in southern Iran. We found an incidence of only 3.5% SNHL in a large population of patients. In fact, conflicting reports and great discrepancy among the reported incidences of hearing impairment have a long and rich history. It has appeared in the literature during the past 30 years.
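To make the hearing-loss criterion used in the Methods concrete (thresholds outside the 0-20 dB normal range at two or more adjacent frequencies), a minimal sketch of how a single audiogram could be screened is shown below. The threshold values are hypothetical, the interpretation of the 20 dB boundary is ours, and this helper is not part of the study's actual analysis; it simply encodes the stated definition.

```python
# Audiometric frequencies evaluated in the study (Hz)
FREQUENCIES = [250, 500, 1000, 2000, 3000, 4000, 8000]

def meets_ototoxicity_criterion(thresholds_db):
    """Return True when thresholds exceed the 0-20 dB normal range at two or
    more adjacent frequencies, one reading per entry in FREQUENCIES."""
    elevated = [t > 20 for t in thresholds_db]
    return any(a and b for a, b in zip(elevated, elevated[1:]))

# Hypothetical audiograms (dB HL), ordered as FREQUENCIES
normal_ear = [10, 10, 15, 10, 15, 15, 20]
affected_ear = [10, 10, 15, 15, 25, 35, 45]   # loss at adjacent high frequencies

print(meets_ototoxicity_criterion(normal_ear))    # False
print(meets_ototoxicity_criterion(affected_ear))  # True
```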
The first experience was that of the de Virgiliis group in 1975, when they reported high-tone sensorineural hearing loss in 14 of 20 patients with beta-thalassaemia major, and later in 1979, when the same group reported moderate unilateral or bilateral high-tone sensorineural deafness in 43 of 75 patients. All of these patients were receiving chelation therapy with DFO, but de Virgiliis et al did not consider this to be causative. Several authors have studied the ototoxicity of DFO; some studies have reported frequencies of SNHL between 7.4% and 33%[2, 12]. Despite the small sample sizes in most of the studies, no statistically significant differences were found between the affected and unaffected groups with respect to age, ferritin levels, length of time receiving DFO, dose of DFO, peak DFO dose, and iron overload[2, 4, 12–14]. The therapeutic index suggested by Porter et al also was not helpful in predicting risk for ototoxicity. The prevalence of hearing loss in thalassaemia patients in other studies was 25% in Olivieri et al (n=89), 33% in Barratt et al. (n=27), 24% in Porter et al. (n=37), 15.5% in Argiolu et al (n=308), 27% in Kontzolglulou et al (n=88), 29% in Styles et al (n=28), and 3.4% in our study. In a recent study by Shamsian et al., the incidence of SNHL was 7.4% in 67 patients suffering from major β-thalassemia who were treated with deferoxamine. They defined hearing loss as a hearing threshold of more than 15 dB. Although they did not report specifically which frequencies were involved, the researchers found no association between serum ferritin level or DFO dosage and hearing loss. Although there is discrepancy in the rates between our study and the foregoing reports, the difference may be a result of our definitions of hearing loss and ototoxicity and our exclusion criteria. We reviewed the audiograms of the Barratt et al study; three of those patients had a history of recurrent acute ear infections. In three patients whose hearing loss was only above 6 kHz, bone conduction could not be assessed by the authors. We also reviewed the findings of the Porter et al survey: of all 9 cases, in 5 patients the hearing loss was confined to a single frequency above 6 kHz. Similarly, in Styles and Vichinsky's study, of the nine patients with abnormal audiograms, in 5 of them only one frequency was abnormal. In a study by Karimi et al, 128 patients receiving subcutaneous DFO in doses from 21 to 39 mg/kg/day were studied in 2002. Patients had received their total weekly dose of DFO according to two different methods. The first group had received it on an every-other-day basis, and the second group had received it on 6 days a week. Of the patients in the first group, 44.7% had hearing loss in the right ear and 41.8% in the left ear, only at the 8000 Hz frequency, compared with 27.8% and 23%, respectively, in the second group. A significant correlation was found between the dose of drug given at each episode of DFO therapy and hearing loss at the frequency of 8,000 Hz. They concluded that DFO ototoxicity is determined not only by the total amount of the drug given, but also by its maximal plasma concentration. Although they reported a higher frequency of SNHL than other authors, they considered hearing loss significant only at one frequency (8000 Hz). A retrospective controlled study by Masala et al showed a 12% rate of SNHL in patients with thalassemia treated with DFO. The control group of normal patients showed a 10% rate of SNHL.
They found no significant difference between thalassemic patients and controls and concluded that the data were inadequate to establish DFO ototoxicity. Similar findings were reported by Cohen et al, who found that 49 of 52 patients treated with DFO had no auditory or visual abnormalities. The lack of ototoxic side effects at lower doses is in good agreement with clinical reports of a low incidence of toxic side effects of DFO [5, 20–23]. In a recent national health survey on the prevalence of hearing loss among US adults, Agrawal et al found that 8.5% of the youngest age group (20–29 years) showed high-frequency hearing loss, and the prevalence appears to be growing in this age group. Other authors agree that the DFO dose generally used (<50 mg/kg/day) is not ototoxic; they report a frequency of hearing loss similar to that in the normal population. Ambrosetti et al, in a review of 38 adult patients with thalassemia major, support this view, since in their patients SNHL was related neither to the therapeutic index nor to serum ferritin levels. Furthermore, the percentage of their patients with SNHL was similar to that in the normal population of the same age (15–35%). The data of Ambrosetti et al suggest that no difference exists between thalassemic patients and the non-thalassemic population, and it is reasonable to conclude that DFO is not ototoxic.

We herein report the lowest incidence of hearing impairment in a large population of patients with thalassemia major who received deferoxamine. We found no difference between patients with and without SNHL in mean age, weight, age at the first blood transfusion, or age at the first DFO infusion. We found that desferrioxamine is not ototoxic at a low dose; overall, there has been much misrepresentation and conflicting data about desferrioxamine ototoxicity in the literature. This study presents statistically valid new information to help physicians make appropriate decisions regarding otologic problems in such patients, and the authors emphasize that physicians must attempt to exclude other causes of hearing loss in patients with thalassemia. Hearing monitoring protocols should therefore be structured according to the particular characteristics of each patient, such as age, ability to respond to audiologic tests, and clinical status.

This work was supported by a grant from the Vice Chancellor for Research of Shiraz University of Medical Sciences. The authors thank Dr Rooshanzamir for data collation.
<urn:uuid:a7ccd0e6-20b5-445e-82f4-70dbd56a7917>
seed
Rather than testing for individual marker genes or proteins, researchers at the University of California, San Diego (UC San Diego) and the Moores UCSD Cancer Center have evidence that groups, or networks, of interactive genes may be more reliable in determining the likelihood that a form of leukemia is fast-moving or slow-growing. One of the problems in deciding on the right therapy for chronic lymphocytic leukemia (CLL) is that it is difficult to know which type a patient has. One form progresses slowly, with few symptoms for years. The other form is more aggressive and dangerous. While tests exist and are commonly used to help predict which form a patient may have, their usefulness is limited. Han-Yu Chuang, a Ph.D. candidate in the bioinformatics and systems biology program in the Department of Bioengineering at the UC San Diego Jacobs School of Engineering, senior author Thomas Kipps, M.D., Ph.D., professor of medicine and deputy director for research at the Moores UCSD Cancer Center, and their colleagues analyzed the activity and patterns of gene expression in cancer cells from 126 patients with aggressive or slow-growing CLL. The researchers, using complex algorithms, matched these gene activity profiles with a huge database of 50,000 known protein complexes and signaling pathways among nearly 10,000 genes/proteins, searching for "subnetworks" of aggregate gene expression patterns that separated groups of patients. They found 30 such gene subnetworks that, they say, were better at predicting whether the disease is aggressive or slow-growing than current techniques based on gene expression alone. They presented their results Monday, December 8, 2008 at the annual meeting of the American Society of Hematology in San Francisco. "We wanted to integrate the gene expression from the disease and a large network of human protein interactions to reconstruct the pathways involved in disease progression," Chuang explained. "By introducing the relevant pathway information, we can do a better job in prognosis." Chuang, co-author Trey Ideker, Ph.D., professor of bioengineering at UCSD, and their co-workers have previously shown the potential of this method in predicting breast cancer metastasis risk. "When you are analyzing just the gene expression, you are analyzing it in isolation," Chuang explained. "Genes act in concert and are functionally linked together. We have suggested that it makes more sense to analyze the genes' expression in a more mechanistic view, based on information about genes acting together in a particular pathway. We are looking for new markers - no longer individual genes - but a set of co-functional, interconnected genes," she said. "We would like to be able to model treatment-free survival." The current work is "proof of principle," Chuang said. Clinical trials will be needed to validate whether specific subnetworks of genes can actually predict CLL progression in patients. She thinks that the subnetworks can be used to provide "small scale biological models of disease progression," enabling researchers to better understand the process. Eventually, she said, a diagnostic chip might be designed to test blood samples for such genetic subnetworks that indicate the likely course of disease. The involved biological pathways could be drug targets as well. The American Cancer Society estimates that, in 2008, there will be about 15,110 new cases of CLL in the United States, with about 4,390 deaths from the disease. Laura Rassenti, Ph.D., UCSD, was also a co-author on the study.
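To make the idea of scoring "subnetworks" of aggregate gene expression more concrete, the following sketch shows one simplified way such an analysis could be organized. It is not the UCSD group's actual pipeline: the data layout, the z-score averaging used to aggregate member genes, and the t-statistic used to rank how well a subnetwork separates aggressive from slow-growing patients are assumptions standing in for the published, mutual-information-based search.

```python
import numpy as np
from scipy import stats

def subnetwork_activity(expr, gene_ids, subnetwork_genes):
    """Collapse a candidate subnetwork into a single activity score per
    patient by averaging the z-scored expression of its member genes
    (a simplification of subnetwork-marker methods).

    expr             : (n_genes, n_patients) expression matrix
    gene_ids         : gene identifier for each row of `expr`
    subnetwork_genes : identifiers of genes in the candidate subnetwork
    """
    rows = [i for i, g in enumerate(gene_ids) if g in subnetwork_genes]
    z = stats.zscore(expr[rows, :], axis=1)  # normalize each gene across patients
    return z.mean(axis=0)

def separation_score(activity, labels):
    """Rank a subnetwork by how well its activity separates the two
    patient groups (1 = aggressive, 0 = slow-growing), using a
    two-sample t-statistic as a simple stand-in for a
    mutual-information score."""
    labels = np.asarray(labels)
    aggressive = activity[labels == 1]
    indolent = activity[labels == 0]
    return abs(stats.ttest_ind(aggressive, indolent, equal_var=False).statistic)
```

In frameworks of this kind, candidate subnetworks are typically grown over the protein-interaction network and filtered against permutation-based null distributions; the sketch above covers only the scoring step.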
The Moores UCSD Cancer Center is one of the nation's 41 National Cancer Institute-designated Comprehensive Cancer Centers, combining research, clinical care and community outreach to advance the prevention, treatment and cure of cancer. For more information, visit www.cancer.ucsd.edu.
<urn:uuid:1616e0f8-9657-4124-92ae-d314570dc1e0>
seed
VANCOUVER – Researchers at St. Paul’s Hospital and Vancouver General Hospital are developing a revolutionary new test to diagnose and facilitate treatment of organ rejection in transplant patients. The $9.1-million Vancouver-based study, called the Better Biomarkers of Acute and Chronic Allograft Rejection Project, led by Drs. Bruce McManus, Paul Keown and Rob McMaster, is jointly funded by Genome Canada, Genome BC, Novartis Pharmaceuticals and IBM. The project is believed to be the largest study of its kind ever performed in Canada and will focus on patients who have undergone liver, heart and kidney transplants. The project leaders will make a plenary presentation about their work at the eighth annual British Columbia Transplant Research Day, to be held Thursday, December 9, 2004 at the Chan Auditorium, Children’s and Women’s Health Centre of BC. Patients with end-stage vital organ failure depend on transplantation, but the process still presents challenges. Immune cells that normally protect patients can cause rejection and destruction of the very organ intended to save their life. To test for rejection, patients must undergo uncomfortable and invasive biopsies. Patients must also take drugs that inhibit rejection by suppressing the immune response, and which can have serious side effects. Project researchers seek to define which biomarkers (for example, substances found in the blood or other body fluids) can be used as a diagnostic and prognostic test for organ rejection and immunosuppressive therapy response. Being able to monitor and predict rejection using a simple blood test will significantly reduce intrusive and expensive diagnostic procedures. “One of the major problems facing clinical caregivers in the management of organ rejection is determining whether a transplanted organ is undergoing rejection,” says Dr. Bruce McManus of the James Hogg iCAPTURE Centre, based at St. Paul’s Hospital, and co-leader of the project. “Most of the current methods for detecting rejection require tissue biopsies. These procedures may cause emotional and physical discomfort to patients and may result in findings that are inconclusive.” Project co-lead Dr. Paul Keown of the Vancouver Coastal Health Research Institute, VGH site, says, “In order to prevent organ rejection, powerful drugs are used to suppress a patient's immune system. Such therapies reduce the probability that the patient's own body will attack the transplanted organ, but impairing the immune system may leave the patient susceptible to infections and malignancies, and may damage the precious transplanted organs.” Individual patients vary in their response to immunosuppression therapy. It is this variation that project researchers, using the most advanced genomic (study of genes), proteomic (study of proteins) and bioinformatic (information science) tools available, will seek to better understand. “These new tools are critical in order to produce an affordable, accurate, and widely useful test to determine whether rejection is occurring and how a patient’s transplanted organ is faring,” says Dr. Rob McMaster, project co-lead, Director of the Immunity and Infection Research Centre at the Vancouver Coastal Health Research Institute, and Director of Transplant Immunology Research for the BC Transplant Society. Understanding the different responses patients have to immunosuppressive therapy will also help physicians balance the necessity of the therapy with its possible side effects.
Personalized therapy could help reduce the enormous economic burden of over-prescribing immunosuppressive drugs. All three co-leaders of the Better Biomarkers of Acute and Chronic Allograft Rejection Project are faculty members at the University of British Columbia. This project is funded for three years by Genome Canada through Genome BC and private sector partners Novartis Pharmaceuticals and IBM. Other partners include Providence Health Care, the Vancouver General Hospital Foundation, St. Paul’s Hospital Foundation, UBC, Genome BC, the James Hogg iCAPTURE Centre, BC Transplant Research Institute and Affymetrix. The research team includes international leaders in transplantation immunology, pathology, biochemistry, genomics, proteomics, statistics, information science and clinical care.
<urn:uuid:61b68b93-f1eb-4ad2-a0c2-aa78b16b21f1>
seed
In computational ethology, the measurement of optokinetic responses (OKR) is an established method to determine thresholds of the visual system in various animal species. Wide-field movements of the visual environment elicit the typical body, head and eye movements of optokinetic responses. Experimentally, regular patterns, e.g. black-and-white stripes, are usually moved continuously. Varying stimulus parameters such as contrast, spatial frequency and movement velocity allows visual thresholds to be determined. The measurement of eye movements is the most sensitive method to quantify optokinetic responses, but typically requires fixation of the head by invasive surgery. Hence, head movements are often measured instead, allowing the behavior of many individuals to be assessed rapidly. While an animal performs these experiments, a human observer decides for each stimulus presentation whether a tracking reaction was observed. Since the animals' responses typically are not recorded, off-line analysis and the evaluation of other response characteristics are not possible. We developed a method to automatically quantify OKR behavior based on head movements in small vertebrates. For this purpose, we built a system consisting of a 360° panoramic visual stimulation realized by four LCD monitors and a camera positioned above the animal to record the head movements. A tracking algorithm retrieves the angle of the animal’s head. Here, we present a method for automated detection of tracking behavior based on the difference between the angular velocities of head and stimulus movement. Tracking performance is measured as the amount of time the animal performs head movements corresponding to the stimulus movement for more than 1 s (a simplified sketch of this detection step follows the figure legend below). For the optokinetic responses of mice, we show that the tracking time decreases with increasing spatial frequency of a sinusoidal stimulus pattern (Fig 1). While a human observer was not able to detect tracking movements for spatial frequencies > 0.44 cyc/deg, the automated method also revealed a certain amount of tracking behavior at higher spatial frequencies. Thus, we were able to increase the sensitivity of the non-invasive measurement of optokinetic head movements into a sensitivity range that formerly required the measurement of eye movements. Figure 1. A: Head movements in response to sinusoidally moving stimuli of two different spatial frequencies. Red: sequences automatically identified as tracking behavior. B: Automatically identified tracking behavior at different spatial frequencies (blue: median, N=12) in comparison to random head movements in the absence of a stimulus (red line: median, dashed: standard deviation) and to the threshold detected by a human observer (green). Supported by the German Research Foundation, research unit DFG-FOR701.
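A minimal sketch of how such a velocity-based tracking criterion could be implemented is shown below. This is not the authors' implementation: the velocity tolerance, sampling assumptions, and function name are illustrative; only the comparison of head and stimulus angular velocities and the >1 s episode criterion come from the abstract.

```python
import numpy as np

def tracking_time(t, head_deg, stim_deg, tol_deg_s=5.0, min_dur=1.0):
    """Estimate total optokinetic tracking time from a head-angle trace.

    A sample counts as 'tracking' when the angular velocity of the head
    stays within `tol_deg_s` deg/s of the stimulus velocity; only
    uninterrupted episodes longer than `min_dur` seconds contribute,
    mirroring the >1 s criterion described above.

    t, head_deg, stim_deg : 1-D arrays of time (s) and angles (deg).
    """
    dt = np.diff(t)
    head_vel = np.diff(head_deg) / dt          # head angular velocity (deg/s)
    stim_vel = np.diff(stim_deg) / dt          # stimulus angular velocity (deg/s)
    tracking = np.abs(head_vel - stim_vel) < tol_deg_s

    total, run_start = 0.0, None
    for i, flag in enumerate(tracking):
        if flag and run_start is None:
            run_start = t[i]                   # episode begins
        elif not flag and run_start is not None:
            if t[i] - run_start >= min_dur:    # keep only episodes > min_dur
                total += t[i] - run_start
            run_start = None
    if run_start is not None and t[-1] - run_start >= min_dur:
        total += t[-1] - run_start             # episode running at trace end
    return total
```

In practice the head-angle trace delivered by the camera-based tracker would likely be smoothed before differentiation, and the tolerance could be calibrated against recordings made without a stimulus (the "random head movements" baseline in Figure 1B).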
<urn:uuid:273ecec9-6e61-49db-b610-9d18266be34f>
seed
Transverse section of a portion of the spleen. (Spleen pulp labeled at lower right.)
The red pulp of the spleen is composed of connective tissue known as the cords of Billroth and many splenic sinuses that are engorged with blood, giving it a red color. Its primary function is to filter the blood of antigens, microorganisms, and defective or worn-out red blood cells.
The spleen is made of red pulp and white pulp, separated by the marginal zone; 76-79% of a normal spleen is red pulp. Unlike white pulp, which mainly contains lymphocytes such as T cells, red pulp is made up of several different types of blood cells, including platelets, granulocytes, red blood cells, and plasma.
The splenic sinuses of the spleen, also known as sinusoids, are wide vessels that drain into trabecular veins. Gaps in the endothelium lining the sinusoids mechanically filter blood cells as they enter the spleen. Worn-out or abnormal red cells attempting to squeeze through the narrow intercellular spaces become badly damaged and are subsequently devoured by macrophages in the red pulp. In addition to aged red blood cells, the sinusoids also filter out particles that could clutter up the bloodstream, such as nuclear remnants, platelets, or denatured hemoglobin.
Cells found in red pulp
Red pulp consists of a dense network of fine reticular fibers, continuous with those of the splenic trabeculae, to which flat, branching cells are applied. The meshes of the reticulum are filled with blood:
- White corpuscles are found in a larger proportion than in ordinary blood.
- Large rounded cells, termed splenic cells, are also seen; these are capable of ameboid movement and often contain pigment and red blood corpuscles in their interior.
- The cells of the reticulum each possess a round or oval nucleus, and like the splenic cells, they may contain pigment granules in their cytoplasm; they do not stain deeply with carmine, and in this respect differ from the cells of the Malpighian corpuscles.
- In the young spleen, macrophages may also be found, each containing numerous nuclei or one compound nucleus.
- Nucleated red blood corpuscles have also been found in the spleen of young animals.
In lymphoid leukemia, the white pulp of the spleen hypertrophies and the red pulp shrinks. In some cases the white pulp can swell to 50% of the total volume of the spleen. In myeloid leukemia, the white pulp atrophies and the red pulp expands.