Inside this Issue…
- What is Staph?
- Why Antibiotic Resistance?
- Symptoms of Staph & MRSA
- How Do You Get Staph/MRSA?
- What Can I Do? Preventing infection
- When You Suspect a Skin Infection
Factsheets & Handouts: Special Edition – November 2007
Editors: Janet M. Pollard, MPH; and Carol A. Rice, Ph.D., R.N.
Simple scrapes, cuts, and abrasions are common when kids play, whether during athletic activities, at recess, or simply during playtime. However, recent news of infection and even fatality related to antibiotic-resistant staph infection, known as MRSA, has highlighted the need for good hygiene and appropriate care for minor injuries to reduce the risk of infection. Doctors have been seeing an increasing number of patients with skin infections caused by Staphylococcus aureus (“staph”) bacteria that are resistant to many antibiotics (drugs that kill bacteria), also called methicillin-resistant Staphylococcus aureus – MRSA.1 “Until recently, people most often got MRSA infections in hospitals and nursing homes.”2 “These infections are, however, becoming more common among people who do not have medical problems, including children… We are now seeing more people with MRSA infections in the community, who have not had any contact with hospitals or medical facilities.”2 There have been an increasing number of reports of MRSA from local and regional health departments, physicians, the public, schools, and daycare facilities.3 “Outbreaks of skin infections caused by antibiotic-resistant bacteria [once only found in medical facilities, such as hospitals and nursing home facilities] have been reported in sports teams including wrestling, volleyball, and most frequently, football teams.”4 A recent study published in the Journal of the American Medical Association estimates that MRSA infections occurred in nearly 95,000 Americans in 2005, with an estimated 18,650 people dying of their MRSA infections.5 Although most of these cases occurred in hospitals, about 15
percent were picked up in public settings.6 The “healthy person in the community—like the high school student—generally is going to be able to be treated adequately without adverse outcome; but the infected individual must seek treatment, cover open cuts or lesions, and avoid direct skin contact with others.”5 If the infection is left untreated, MRSA can “pose a significant health threat.”4 This issue of HealthHints looks at MRSA in our schools, daycare facilities, and homes, addressing how it may be prevented and treated.
What is Staph?
Staph is a type of bacteria. MRSA is a particular type of staph that has become resistant to a specific group of antibiotics called beta-lactams, including penicillin, amoxicillin, oxacillin, and others. So, where do staph bacteria come from? People actually carry staph bacteria on their skin and in their nose without being infected. In fact, 25–30 percent of people are carriers of some staph bacteria, while about 1 percent are carriers of MRSA. Staph bacteria can enter the skin through a cut or scrape or even through a break in dry skin. “Staph and MRSA infections in the community are usually manifested as skin infections, such as pimples or boils, and occur in otherwise healthy people.”7 If left uncared for, however, staph and MRSA can lead to more serious problems, infecting blood and bones.8
Why Antibiotic Resistance?
Antibiotic resistance occurs when some bacteria have figured out how to outsmart antibiotics.9 “When an antibiotic is taken unnecessarily or improperly, some bacteria can survive. The surviving bacteria develop ways to become stronger and drug resistant. Resistant bacteria can transfer this strength to other more dangerous bacteria.”10 Bacteria inside the body exchange, share, or copy genes that allow them to resist antibiotic treatment.10 The major concerns surrounding antibiotic use are overuse and misuse. Antibiotics only kill bacteria—NOT viruses. Only a doctor can tell if you have a bacterial or viral infection.
Many people have come to believe that an antibiotic will make them feel better when they are sick, regardless of the illness. In fact, it is sometimes better to let the body fight off an infection on its own, which builds your body’s immunity to that virus. Strep throat is the only kind of sore throat that antibiotics can help, and sinus infections, though they share similar symptoms with colds, are the only cold-like infection that antibiotics can help. Antibiotics will not usually help a common cold, sore throat, or bronchitis.9 Misuse occurs when patients do not follow directions for use of their antibiotics. Often, a person will stop taking his/her medication as soon as he/she starts to feel better. Unfortunately, this practice allows the hardiest bacteria to survive and reproduce. Another misuse occurs when a person shares his/her antibiotics or uses left-over pills at the first sign of a later illness. If the antibiotics are not the right type for the infection or are not needed, resistance can develop.10 Again, not taking the full course can allow the hardiest bacteria to survive and resist future antibiotic use. Proper use (PDF) of antibiotics is crucial in reducing your risk for antibiotic-resistant infections. Because we are seeing more outbreaks of MRSA in community settings, it is increasingly important that we recognize the signs and symptoms of staph infection and know how to prevent, handle, and treat such infections. Of particular concern for many parents, teachers, and care providers are the increasing reports of MRSA in schools and daycare facilities. For this reason, we must learn, and teach our children, good hygiene and how to deal with cuts, scrapes, and exposed wounds of any kind.
Symptoms of Staph & MRSA
Staph bacteria, including MRSA, most often cause skin infections that may look like one of the following:
- sores that look and feel like spider bites (MRSA, however, is not caused by spider bites);
- large, red, painful bumps on the skin (called boils);
- a cut that is swollen, hot, and filled with pus; or
- blisters filled with fluid (called impetigo).11
Most staph infections are minor and may be easily treated.12 MRSA, however, can be more difficult to treat and may recur.13 A staph infection of any kind that starts as a skin infection can worsen and cause more serious problems such as pneumonia and infections of the bloodstream.12
How Do You Get Staph/MRSA?
Anyone can get a staph/MRSA infection.12 You are more likely to get a staph/MRSA infection if you have:
- skin-to-skin contact with someone who has a staph/MRSA infection;
- contact with items and surfaces that have staph/MRSA on them;
- openings in your skin such as cuts or scrapes from injuries, dry skin, etc.;
- crowded conditions, particularly crowded living conditions; or
- poor hygiene.12
Skin-to-Skin Contact. Although staph bacteria are carried on the skin and in the nose, staph infection is not usually transmitted through droplets in the air.3 People often catch a cold when someone sneezes and the virus is transmitted to them through tiny droplets in the air. With staph/MRSA, however, the primary mode of transmission is direct skin-to-skin contact with someone who has a staph/MRSA infection.7,12 Staph/MRSA bacteria can rub off the skin of an infected person onto the skin of another person during skin-to-skin contact.
Contact with Items & Surfaces. Staph/MRSA can also be transmitted through contact with an inanimate object (e.g., clothing, linens, furniture) that is soiled with drainage from a staph/MRSA-infected wound. In fact, staph/MRSA can live on surfaces and objects for months.11
Openings in the Skin.
Staph/MRSA infections often begin with an injury to the skin.1 Direct physical contact of staph bacteria with a break in the skin (e.g., cut, scrape, or other abrasion) is one reason athletes and children involved in physical activity—where there is a lot of direct skin-to-skin contact and minor open injuries—are of particular concern. Even cracks from dry skin can become a site for bacteria to enter.
Crowded Conditions. Antibiotic-resistant skin infections are often found where there are crowds of people, such as schools and gyms. Crowded living conditions, including jails, are also places where staph/MRSA is of particular concern.
Poor Hygiene. Not keeping your body and your environment clean increases your risk of contracting infection. In fact, keeping your body and environment clean is one of the best ways to prevent infection.
What Can I Do? Preventing infection
There is a lot you can do to prevent and manage staph/MRSA infections. Most of the guidelines for prevention are not hard to follow, but they are very important. A key to prevention is getting this importance across to kids so they will make every effort to follow these guidelines.
Guidelines for Prevention: Personal care & hygiene
- Wash your hands with warm water and soap. Handwashing is the single most important behavior in preventing staph/MRSA. (See How to Wash Hands [PDF] for important information on this topic.)
- Carry or provide alcohol-based hand sanitizer for when soap and water are not available.
- Shower or bathe daily with soap and water…and as soon as possible after all direct contact sports or activity. Dry using a clean towel.
- Keep fingernails trimmed short (no longer than the tip of the finger).
- Use moisturizer to prevent dry, cracked skin.
- Do NOT share personal care items including towels (not even on the sidelines of a game), soap, razors, ointment, etc.
- Do NOT wear artificial nails.
- Do NOT share antibiotics.
- Do NOT take antibiotics as a preventive measure for avoiding infection.3,4,11
Cleaning & laundry
- Use isopropyl alcohol (available at pharmacies and grocery stores) to disinfect reusable materials, such as scissors or tweezers.
- Prewash or rinse items that have been grossly contaminated with body fluids; then wash clothes for a full cycle in hot water and ordinary detergent, and dry on the hottest cycle. Inform parents of these laundry precautions if laundry is sent home. Note: Laundry should be sent home in an impervious, waterproof container or plastic bag if not done at the facility (e.g., school or daycare).
- Wash towels, uniforms, scrimmage shirts, and any other laundry used in athletics.
- Do clean daycare facilities and items used by children at least daily using a commercial disinfectant (look for the word disinfectant on the product label) or a fresh (mix daily) solution of 1 part bleach to 100 parts water (1 tablespoon bleach in 1 quart of water). (See EPA-registered products effective against MRSA [PDF].)
- Do clean athletic areas and sports equipment at least weekly using a commercial disinfectant (look for EPA-approved, hospital-grade germicide on the product label) or a fresh (mix daily) solution of 1 part bleach to 100 parts water (1 tablespoon bleach in 1 quart of water). (See EPA-registered products effective against MRSA [PDF].)3,4,11
Policy & decision-making
- Unless directed by a physician, students with MRSA infections should not be excluded from attending school. Exclusion from school and sports activities should be reserved for those with wound drainage (“pus”) that cannot be covered and contained with a clean, dry bandage and for those who cannot maintain good personal hygiene.14
- Schools should introduce a policy requiring students to inform the school (particularly athletic trainers) of any skin infection and barring them from contact activities until approved by the trainer.
Have students and parents sign a release to that effect.
- Daycare facilities should introduce a policy requiring parents to inform daycare providers of any skin infection their child may have.
- Do NOT allow employees with draining wounds or infections to have physical contact with children.3,4,11
To answer any further questions you might have about MRSA and schools, see Questions and Answers about MRSA in Schools [PDF] from the Centers for Disease Control.
When You Suspect a Skin Infection
Even when precautions are taken to prevent infection, sometimes infection is not avoided. If you suspect a skin infection, the first step is to see your doctor immediately. “Early treatment can help prevent the infection from getting worse.”1 If you or a family member has an MRSA infection, you will need to take special steps at home. (See Caring for MRSA at Home [PDF].) Always follow your doctor’s instructions. Your provider may also choose one or more of the following treatments:
- Drain the infection.
- Give antibiotics.
- Reduce the amount of bacteria on your skin or in your nose.11
Draining the Infection. In an effort to get rid of the infection, your provider may recommend draining the pus/fluid from the infected area. You should not drain an infection on your own. Poking or squeezing the infection can push the bacteria deeper into the skin and make the infection worse. Draining an infection should only be done by a trained health care provider. Your provider will open the sore and drain it of fluid; then you must keep it covered until it heals. You will likely be asked to come back for a check-up and to have the dressing changed, or you may be given instructions for changing the dressing at home. (See Caring for MRSA at Home [PDF].) A follow-up visit is usually needed to make sure the area is healing well. Some skin infections will heal after your health care provider has drained the pus out.
You may not need antibiotics.11
Giving Antibiotics. If you have an MRSA infection, the treatment may be more complicated than for a simple staph infection. Methicillin is an antibiotic that represents a group, or class, of antibiotics. MRSA cannot be treated with antibiotics in this class, such as amoxicillin, penicillin, oxacillin, Augmentin, dicloxacillin, and others, including cephalosporins. Depending on antibiotic resistance patterns, however, alternative antibiotics, such as trimethoprim/sulfamethoxazole (e.g., Bactrim, Septra), minocycline, clindamycin, or newer antibiotics, may be considered.3 These alternative antibiotics may be able to make the MRSA infection go away;11 however, they too may be rendered ineffective through the development of antibiotic resistance3 if antibiotics are misused. Thus, MRSA treatment may be longer, more expensive, and more complicated, and infections can reappear frequently.15 That is why it is so important to take your antibiotics as directed by your provider, completing the full course to kill the hardiest bacteria and reduce the risk of resistance to these newer antibiotics. “If your provider gives you antibiotics, take them exactly as prescribed. Do not stop early, even if you feel better. The last few pills kill the toughest germs. Never take antibiotics without a prescription from your health care provider [emphasis added].”11
Reducing the Bacteria. To decrease the amount of bacteria on the skin and in the nose, antibacterial soaps and antibiotic ointments may be recommended to prevent further spread of the infection to others. Though antibacterial soaps are not recommended on a regular basis, in some instances your provider may recommend their use for a short time to reduce the amount of bacteria on your skin. Additionally, your provider may recommend using a small amount of antibiotic ointment in the nostrils for a few days.11 Neither of these options should be used without the consent and specific instructions of your provider.
Seek further medical care if:
- you have new symptoms during treatment,
- your infection does not get better,
- your infection gets worse, or
- your infection comes back.11
Whether at home or in public places, staph and MRSA infections should be treated carefully to reduce the risk of spreading the infection and to increase the chances of healing with fewer complications. There are many excellent resources available if you need more information. Take a look at these resources [PDF] (English and Spanish) for more help regarding MRSA and antibiotic-resistant staph.
English and Spanish Resources on Antibiotic-Resistant Staph & MRSA
- What You Need to Know about Staph/MRSA Skin Infections (PDF) [Spanish PDF]
- What to Do about Your Skin Infection (PDF) [Spanish PDF]
- Living with MRSA (PDF) [Spanish PDF]
- Taking Care of Wounds that Are Draining or Have Not Healed (PDF) [Spanish PDF]
- DSHS Issues Staph Infection Prevention Guidelines
- Questions and Answers about Methicillin-Resistant Staphylococcus aureus (MRSA) in Schools
- Prevention and Containment of Staphylococcal Infections in Communities (PDF)
- Information on Staphylococcal Infections – School Athletic Departments: Instructions for the Athlete (PDF)
- Athletes – What to Do about Your Skin Infection (PDF)
- Information on Staphylococcal Infections – Day Care Facilities: Instructions for the Parents (PDF)
- Unlock Your Skin’s Health (PDF)
- Germs, Your Skin & Your Health: Do Not Spread Germs in the Laundry (PDF)
- Germs, Your Skin & Your Health: Taking Care of Wounds that Are Draining or Have Not Healed (PDF)
- Resistencia Bacteriana a los Antibióticos (Bacterial Resistance to Antibiotics)
This document is meant for educational purposes only and is not intended to replace the advice of your doctor or other health care provider.
- Texas Department of State Health Services (2006). What you need to know about staph/MRSA skin infections [on-line]. Retrieved October 5, 2006.
From http://www.dshs.state.tx.us/idcu/health/antibiotic_resistance/
- Tacoma-Pierce County Health Department (2006). Methicillin-resistant Staphylococcus aureus (MRSA) [on-line]. Retrieved October 6, 2006. From http://www.tpchd.org/page.php?id=12.
- Texas Department of State Health Services (2006). Information on staphylococcal infections for day care administrators and care givers [on-line]. Retrieved October 5, 2006. From http://www.dshs.state.tx.us/idcu/health/antibiotic_resistance/
- Texas Department of State Health Services (2006). Information on staphylococcal infections for school athletic departments [on-line]. Retrieved October 5, 2006. From http://www.dshs.state.tx.us/idcu/health/antibiotic_resistance/
- M. and Langmaid, T. (2007). Bacteria that killed Virginia teen found in other schools [on-line]. Retrieved October 24, 2007. From http://www.cnn.com/2007/HEALTH/10/18/mrsa.cases/
- C. (2007). CNN student news transcript [on-line]. Retrieved October 24, 2007. From http://www.cnn.com/2007/LIVING/studentnews/10/17/
- Centers for Disease Control and Prevention (2005). Community-associated MRSA information for the public [on-line]. Retrieved October 5, 2005. From http://www.cdc.gov/ncidod/dhqp/ar_mrsa_ca_public.html.
- Texas Department of State Health Services (2006). School staph infection notification letter to parent or guardian [on-line]. Retrieved October 5, 2006. From http://www.dshs.state.tx.us/idcu/health/antibiotic_resistance/educational/.
- Texas Department of State Health Services (2006). Get smart about antibiotics: A guide for parents [on-line]. Retrieved October 5, 2006. From http://www.dshs.state.tx.us/idcu/health/antibiotic_resistance/
- Texas Department of State Health Services (2006). Antibiotic resistance – Questions and answers [on-line]. Retrieved October 5, 2006.
From http://www.dshs.state.tx.us/idcu/health/antibiotic_resistance/educational/AntibioticResistanceQA_%20Edu_Flyer.pdf
- GroupHealth Cooperative, Tacoma-Pierce County Health Department, & Washington State Department of Health (2006). Living with MRSA [on-line]. Retrieved October 6, 2006. From http://www.tpchd.org/files/library/3550750db4a81b14.pdf.
- Centers for Disease Control and Prevention (2005). Have you been diagnosed with Staphylococcus aureus or MRSA infection? [on-line]. Retrieved October 5, 2006. From http://www.cdc.gov/ncidod/dhqp/pdf/ar/MRSAPatientInfoSheet.pdf.
- Texas Department of State Health Services (2006). Information on staphylococcal infections – Day care facilities: Instructions for the parents [on-line]. Retrieved October 5, 2006. From http://www.dshs.state.tx.us/idcu/health/antibiotic_resistance/
- Centers for Disease Control (2007). Questions and answers about methicillin-resistant Staphylococcus aureus (MRSA) in schools [on-line]. Retrieved October 24, 2007. From http://cdc.gov/Features/MRSAinSchools/#q9.
- Texas Department of State Health Services (2006). Information on staphylococcal infections – School athletic departments: Instructions for the athlete [on-line]. Retrieved October 5, 2006. From http://www.dshs.state.tx.us/idcu/health/antibiotic_resistance/
Last updated: 31 October, 2013
Educational programs of the Texas A&M AgriLife Extension Service are open to all people without regard to race, color, sex, religion, national origin, age, disability, genetic information or veteran status.
Bone marrow cells hand natural killer cells their license to attack dangerous invaders
La Jolla, CA – A collaboration between scientists at the Salk Institute for Biological Studies and the Pasteur Institute in Paris has uncovered the molecular signals that trigger maturation of natural killer cells, an important group of immune system cells, into fully armed killing machines. Their findings will be published in a forthcoming issue of Nature Immunology.
Born to kill, natural killer cells are constantly on the prowl for potentially dangerous invaders, ready to unleash their deadly arsenal at a moment's notice. Prior to the study, scientists were familiar with the diverse repertoire of surface molecules that helps natural killer cells distinguish friend from foe, but how they acquired their reconnaissance tool kit had remained unclear.
"We suspected that an environmental signal triggered the differentiation of immature natural killer cells into cells that could recognize and kill invading pathogens," says one of the senior authors, Greg Lemke, Ph.D., a professor in the Salk's Molecular Neurobiology Laboratory, "but we didn't know what it was."
When co-senior author Claude Roth, Ph.D., an immunologist at the Institut Pasteur, discovered that low levels of a protein called Axl, which belongs to a class of molecules collectively known as receptor tyrosine kinases, correlated with diminished killer activity in natural killer cells, he turned to Lemke. Lemke's lab had studied the effects of deleting or "knocking out" the Axl gene and its two cousins Mer and Tyro3, sometimes referred to as the Tyro3 family, for over a decade. Although the Salk scientists had been initially interested in how a missing Tyro3 protein impacted brain development, they found that mice lacking all three Tyro3 genes developed autoimmune diseases closely resembling the perplexing symptoms observed in human autoimmunity.
According to Lemke, the researchers couldn't help noticing that the Tyro3 "knock-out" animals were very sick and prone to infections, which, now that we know their natural killers were compromised, makes perfect sense.
As part of the innate arm of the immune system, natural killer cells are the body's immediate line of defense, keeping invaders in check until the T and B cells of the immune system, which take a few days to mobilize, kick into full gear. Natural killer cells are armed with enzyme-filled sacs that spill their deadly contents when infected or cancerous cells cross the killer's path. In addition, they secrete cytokines, chemical messengers that jumpstart the T and B cell response.
What the Salk and Pasteur teams discovered is that when all three Tyro3 proteins are missing, natural killer cells are still armed with their arsenal of enzymes and cytokines, but they can't dip into their weapons cache because they lack the full spectrum of surface molecules that gives them the "license to kill." "From these data it was clear that Tyro3 receptor kinases transmit the environmental signals, which we knew are crucial for the maturation of precursor cells," says Lemke.
Receptor tyrosine kinases normally receive signals from a cell's environment and, upon activation, add a phosphate group to intracellular proteins, initiating a new repertoire of cellular behaviors. For natural killer cells, those signals – two well-established ligands of Tyro3 proteins called Gas6 and protein S – are secreted by bone marrow stromal cells, which form the local support network for the natural killer cell precursors constantly generated in the bone marrow. As the immature natural killer cells get ready to move out of the bone marrow, stromal cells give them the go-ahead to acquire the full spectrum of surface receptors, allowing them to attack with discrimination rather than raw determination. In addition to Drs.
Roth and Lemke, researchers contributing to this study include co-first author Anouk Caraux, Ph.D., and James P. Di Santo, Ph.D., both at the Institut Pasteur, Salk staff scientist and co-first author Qingxian Lu, Ph.D., Nadine Fernandez, Ph.D., formerly a postdoctoral researcher at the University of California at Berkeley and now at Laboratoire Français du Fractionnement et des Biotechnologies (LFB) in France, and David H. Raulet, Ph.D., a professor at the University of California at Berkeley. The Salk Institute for Biological Studies in La Jolla, California, is an independent nonprofit organization dedicated to fundamental discoveries in the life sciences, the improvement of human health and the training of future generations of researchers. Jonas Salk, M.D., whose polio vaccine all but eradicated the crippling disease poliomyelitis in 1955, opened the Institute in 1965 with a gift of land from the City of San Diego and the financial support of the March of Dimes.
Items in AFP with MeSH term: Hookworm Infections
Common Intestinal Parasites - Article
ABSTRACT: Intestinal parasites cause significant morbidity and mortality. Diseases caused by Enterobius vermicularis, Giardia lamblia, Ancylostoma duodenale, Necator americanus, and Entamoeba histolytica occur in the United States. E. vermicularis, or pinworm, causes irritation and sleep disturbances. Diagnosis can be made using the "cellophane tape test." Treatment includes mebendazole and household sanitation. Giardia causes nausea, vomiting, malabsorption, diarrhea, and weight loss. Stool ova and parasite studies are diagnostic. Treatment includes metronidazole. Sewage treatment, proper handwashing, and consumption of bottled water can be preventive. A. duodenale and N. americanus are hookworms that cause blood loss, anemia, pica, and wasting. Finding eggs in the feces is diagnostic. Treatments include albendazole, mebendazole, pyrantel pamoate, iron supplementation, and blood transfusion. Preventive measures include wearing shoes and treating sewage. E. histolytica can cause intestinal ulcerations, bloody diarrhea, weight loss, fever, gastrointestinal obstruction, and peritonitis. Amebas can cause abscesses in the liver that may rupture into the pleural space, peritoneum, or pericardium. Stool and serologic assays, biopsy, barium studies, and liver imaging have diagnostic merit. Therapy includes luminal and tissue amebicides to attack both life-cycle stages. Metronidazole, chloroquine, and aspiration are treatments for liver abscess. Careful sanitation and use of peeled foods and bottled water are preventive.
Case Studies in International Medicine - Article
ABSTRACT: Family physicians in the United States are increasingly called on to manage the complex clinical problems of newly arrived immigrants and refugees.
Case studies and discussions are provided in this article to update physicians on the diagnosis and management of potentially unfamiliar ailments, including strongyloidiasis, hookworm infection, cysticercosis, clonorchiasis and tropical pancreatitis. Albendazole and ivermectin, two important drugs in the treatment of some worm infections, are now available in the United States.
The rotator cuff is a group of muscles and tendons that surround the shoulder joint, keeping the head of your upper arm bone firmly within the shallow socket of the shoulder. A rotator cuff injury can cause a dull ache in the shoulder, which often worsens when you try to sleep on the involved side. Rotator cuff injuries occur most often in people who repeatedly perform overhead motions in their jobs or sports. Examples include painters, carpenters, and people who play baseball or tennis. The risk of rotator cuff injury also increases with age. Many people recover from a rotator cuff injury with physical therapy exercises that improve flexibility and strength of the muscles surrounding the shoulder joint. Severe rotator cuff injuries, involving complete tears of the muscle or tendon, may require surgical repair.
The pain associated with a rotator cuff injury may:
- Be described as a dull ache deep in the shoulder
- Disturb sleep, particularly if you lie on the affected shoulder
- Make it difficult to comb your hair or reach behind your back
- Be accompanied by arm weakness
When to see a doctor
Seek medical attention if your shoulder pain is severe or if it hasn't improved after more than a few days.
Causes of rotator cuff injuries include:
- Falling. Using your arm to break a fall or falling on your arm can bruise or tear a rotator cuff tendon or muscle.
- Lifting or pulling. Lifting an object that's too heavy or doing so improperly — especially overhead — can strain or tear your tendons or muscles.
- Repetitive stress. Repetitive overhead movement of your arms can stress your rotator cuff muscles and tendons, causing inflammation and eventually tearing.
- Bone spurs. An overgrowth of bone can occur on a part of the shoulder blade that protrudes over the rotator cuff. This extra bone can irritate and damage the tendon.
The following factors may increase your risk of having a rotator cuff injury:
- Age.
As you get older, your risk of a rotator cuff injury increases. Rotator cuff tears are most common in people older than 40.
- Certain sports. Athletes who regularly use repetitive arm motions, such as baseball pitchers, archers and tennis players, have a greater risk of having a rotator cuff injury.
- Construction jobs. Occupations such as carpentry or house painting require repetitive arm motions, often overhead, that can damage the rotator cuff over time.
Although resting your shoulder is necessary for your recovery, keeping your shoulder immobilized can cause the connective tissue enclosing the joint to become thickened and tight (frozen shoulder). You'll probably start by seeing your family doctor. If your injury is severe, you might be referred to an orthopedic surgeon.
What you can do
Before the appointment, you might want to write a list that answers the following questions:
- When did you first begin experiencing shoulder pain?
- What movements and activities worsen your shoulder pain?
- Have you ever injured your shoulder?
- Have you experienced any symptoms in addition to shoulder pain?
- Does the pain travel down your arm below your elbow?
- Is the shoulder pain associated with any neck pain?
- Does your job or hobby aggravate your shoulder pain?
What to expect from your doctor
Your doctor is likely to ask you a number of questions. Being ready to answer them may leave time to go over any points you want to discuss in depth. Your doctor may ask:
- Where exactly is the pain located?
- How severe is your pain?
- What movements and activities aggravate and relieve your shoulder pain?
- Do you have any weakness or numbness in your arm?
During the physical exam, your doctor will press on different parts of your shoulder and move your arm into different positions. He or she will also test the strength of the muscles around the shoulder and in the arms. In some cases, he or she may recommend imaging tests, such as:
- X-rays.
Although a rotator cuff tear won't show up on an X-ray, this test can visualize bone spurs or other potential causes for your pain — such as a broken bone.
- Ultrasound. This type of test uses sound waves to produce images of structures within your body, particularly soft tissues such as muscles and tendons.
- Magnetic resonance imaging (MRI). This technology uses radio waves and a strong magnet, and is excellent at revealing problems in both bones and soft tissues.
Conservative treatments — such as rest, ice and physical therapy — often are all that's needed to recover from a rotator cuff injury. If your injury is severe and involves a complete tear of the muscle or tendon, you might need surgical repair. If conservative treatments haven't reduced your pain, your doctor might recommend a steroid injection into your shoulder joint, especially if the pain is interfering with your sleep, daily activities or exercise. While such shots are often helpful, they should be used judiciously, as they can contribute to weakening of the tendon. Physical therapy exercises can help restore flexibility and strength to your shoulder after a rotator cuff injury. Surgical options may include:
- Bone spur removal. If an overgrowth of bone is irritating your rotator cuff, this excess bone can be removed and the damaged portion of the tendon can be smoothed. This procedure is often performed using arthroscopy, where a fiber-optic camera and special tools are inserted through tiny incisions.
- Tendon repair or replacement. Many times, a torn rotator cuff tendon can be repaired and reattached to the upper arm bone. If the torn tendon is too damaged to be reattached to the arm bone, surgeons may decide to use a nearby tendon as a replacement.
- Shoulder replacement. Massive rotator cuff injuries associated with severe degenerative joint disease (arthritis) of the shoulder may require shoulder replacement surgery.
To improve the artificial joint's stability, an innovative procedure (reverse shoulder arthroplasty) installs the ball part of the artificial joint onto the shoulder blade and the socket part onto the arm bone.

A minor injury often heals on its own, with proper care. If you think you've injured your rotator cuff, try these steps:

- Rest your shoulder. Stop doing what caused the pain and try to avoid painful movements. Limit heavy lifting or overhead activity until your shoulder pain subsides.
- Apply ice and heat. Putting ice on your shoulder helps reduce inflammation and pain. Use a cold pack for 15 to 20 minutes every three or four hours. After a few days, when the pain and inflammation have improved, hot packs or a heating pad may help relax tightened and sore muscles.
- Take pain relievers. Over-the-counter pain relievers such as ibuprofen (Advil, Motrin IB), naproxen (Aleve) or acetaminophen (Tylenol) may be helpful.

If you are at risk of rotator cuff injuries or if you've had a rotator cuff injury in the past, daily shoulder stretches and exercises can help prevent future injury. Most people exercise the front muscles of the chest, shoulder and upper arm, but it is equally important to strengthen the muscles in the back of the shoulder and around the shoulder blade to optimize shoulder muscle balance. Your doctor or a physical therapist can help you plan an exercise routine.

Feb. 19, 2014
A service of the U.S. National Library of Medicine® (http://ghr.nlm.nih.gov/)

Beta thalassemia is a blood disorder that reduces the production of hemoglobin. Hemoglobin is the iron-containing protein in red blood cells that carries oxygen to cells throughout the body. In people with beta thalassemia, low levels of hemoglobin lead to a lack of oxygen in many parts of the body. Affected individuals also have a shortage of red blood cells (anemia), which can cause pale skin, weakness, fatigue, and more serious complications. People with beta thalassemia are at an increased risk of developing abnormal blood clots.

Beta thalassemia is classified into two types depending on the severity of symptoms: thalassemia major (also known as Cooley's anemia) and thalassemia intermedia. Of the two types, thalassemia major is more severe.

The signs and symptoms of thalassemia major appear within the first 2 years of life. Children develop life-threatening anemia. They do not gain weight and grow at the expected rate (failure to thrive) and may develop yellowing of the skin and whites of the eyes (jaundice). Affected individuals may have an enlarged spleen, liver, and heart, and their bones may be misshapen. Some adolescents with thalassemia major experience delayed puberty. Many people with thalassemia major have such severe symptoms that they need frequent blood transfusions to replenish their red blood cell supply. Over time, an influx of iron-containing hemoglobin from chronic blood transfusions can lead to a buildup of iron in the body, resulting in liver, heart, and hormone problems.

Thalassemia intermedia is milder than thalassemia major. The signs and symptoms of thalassemia intermedia appear in early childhood or later in life. Affected individuals have mild to moderate anemia and may also have slow growth and bone abnormalities.

Beta thalassemia is a fairly common blood disorder worldwide. Thousands of infants with beta thalassemia are born each year.
Beta thalassemia occurs most frequently in people from Mediterranean countries, North Africa, the Middle East, India, Central Asia, and Southeast Asia.

Mutations in the HBB gene cause beta thalassemia. The HBB gene provides instructions for making a protein called beta-globin. Beta-globin is a component (subunit) of hemoglobin. Hemoglobin consists of four protein subunits, typically two subunits of beta-globin and two subunits of another protein called alpha-globin.

Some mutations in the HBB gene prevent the production of any beta-globin. The absence of beta-globin is referred to as beta-zero (B0) thalassemia. Other HBB gene mutations allow some beta-globin to be produced but in reduced amounts. A reduced amount of beta-globin is called beta-plus (B+) thalassemia. Having either B0 or B+ thalassemia does not necessarily predict disease severity, however; people with both types have been diagnosed with thalassemia major and thalassemia intermedia.

A lack of beta-globin leads to a reduced amount of functional hemoglobin. Without sufficient hemoglobin, red blood cells do not develop normally, causing a shortage of mature red blood cells. The low number of mature red blood cells leads to anemia and other associated health problems in people with beta thalassemia.

Thalassemia major and thalassemia intermedia are inherited in an autosomal recessive pattern, which means both copies of the HBB gene in each cell have mutations. The parents of an individual with an autosomal recessive condition each carry one copy of the mutated gene, but they typically do not show signs and symptoms of the condition. Sometimes, however, people with only one HBB gene mutation in each cell develop mild anemia. These mildly affected people are said to have thalassemia minor. In a small percentage of families, the HBB gene mutation is inherited in an autosomal dominant manner.
In these cases, one copy of the altered gene in each cell is sufficient to cause the signs and symptoms of beta thalassemia.
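The autosomal recessive pattern described above can be illustrated numerically. The sketch below (plain Python, using hypothetical allele labels: `N` for a normal HBB copy, `m` for a mutated copy) enumerates the four equally likely allele combinations for a child of two unaffected carrier parents:

```python
from itertools import product
from collections import Counter

# Each carrier parent passes on either a normal (N) or mutated (m)
# HBB allele with equal probability; enumerate all four combinations.
parent1 = ["N", "m"]
parent2 = ["N", "m"]

counts = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
total = sum(counts.values())

for genotype, n in sorted(counts.items()):
    print(genotype, n / total)
```

For each child of two carriers, this gives a 1-in-4 chance of being affected (mm, two mutated copies), a 1-in-2 chance of being an unaffected carrier (Nm, thalassemia minor in the mildly symptomatic cases described above), and a 1-in-4 chance of inheriting two normal copies (NN).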
Providers should inform patients of radiation exposure with cardiac imaging procedures, and doses of more than 20 mSv warrant more extensive discussion, according to recommendations published online Feb. 12 in the Journal of the American College of Cardiology.

The recommendations originally came out of a 2012 symposium sponsored by the National Institutes of Health's National Heart, Lung, and Blood Institute. Symposium participants sought to identify the major elements of an accountability framework that addresses radiation safety issues for the patient, the laboratory and the overall population of people with cardiovascular disease.

The document's authors, led by Andrew J. Einstein, MD, PhD, of Columbia University Medical Center in New York, explained that one of the major recommendations was to inform all patients of radiation exposure. Both the ordering provider and the imaging provider should disclose potential risks from radiation. Doses of 3 mSv or less do not require extensive discussion or written consent, but doses exceeding 20 mSv should involve detailed discussion or written informed consent.

Communication with patients, however, should take several factors into consideration. First, most Americans do not easily understand statistics, so providers should avoid too many numbers in their discussions. Among the other communication recommendations were to clarify the difference between baseline cancer risk and cancer risk after exposure to radiation and to use visual aids, such as pictographs.

In terms of safety issues in the laboratory, the recommendations focused on the need for providers to know about radiation doses and be aware of ways to lower exposure. Requirements for laboratory accreditation should focus on reducing radiation dose while simultaneously optimizing diagnostic performance, and laboratories should have quality and safety metrics in place.
One recommendation was for diagnostic reference levels for a variety of cardiac imaging procedures, a new and time-consuming undertaking. Another recommendation was for laboratories to keep track of radiation safety metrics using a database as a quality performance measure. Some of these data should ultimately be made public.

Population-based efforts to reduce radiation exposure should be based on multiple strategies, such as minimizing testing for reasons deemed inappropriate or rarely appropriate. One way to promote more widespread use of such criteria is to incorporate decision-support tools when the provider orders a procedure. The recommendations also included using population-based registries to track radiation.

Although the imaging technology currently available allows for noninvasive, routine testing, the authors argued that providers who utilize it must accept responsibility for safe and appropriate use. "The creation of the patient-centered imaging laboratory that prioritizes patient safety and effectiveness will require sizeable changes to the culture of imaging, which now focuses on volume and efficiency," they wrote.
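As a sketch of how the disclosure thresholds summarized above could be operationalized, the function below maps an estimated effective dose to a communication tier. The 3 mSv and 20 mSv cut-offs come from the recommendations as reported here; the tier labels and the handling of the intermediate range are illustrative assumptions, not terminology from the document:

```python
def consent_level(dose_msv: float) -> str:
    """Suggest a patient-communication tier for a cardiac imaging
    procedure, given its estimated effective dose in millisieverts.

    Thresholds (3 mSv, 20 mSv) follow the recommendations summarized
    above; the tier labels themselves are hypothetical.
    """
    if dose_msv <= 3:
        return "brief disclosure; no extensive discussion or written consent"
    if dose_msv <= 20:
        return "standard discussion of radiation risks"
    return "detailed discussion or written informed consent"

# Illustrative calls with hypothetical dose estimates
print(consent_level(1.5))   # low-dose study
print(consent_level(25.0))  # high-dose study
```

In a decision-support tool of the kind the recommendations describe, such a mapping would run at order entry, flagging high-dose procedures before the patient arrives.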
Cardiovascular disease is an all-inclusive term used to classify diseases that affect the heart and blood vessels. The term cardiovascular disease includes a long list of conditions, from aneurysms to varicose veins. Sometimes a cardiovascular disease is congenital (with you from birth), but many forms of cardiovascular disease develop as you age as a consequence of poor health.

Cardiovascular disease is often the result of atherosclerosis. Atherosclerosis is the gradual build-up of fat and cholesterol on the walls of the arteries. Atherosclerosis can eventually affect the blood's ability to flow through the arteries. The build-up on the arteries, called plaque, can rupture, which can lead to a blood clot and block the flow of blood altogether. This will cause a stroke or a heart attack.

Atherosclerosis usually doesn't cause symptoms until it severely narrows or blocks an artery. That's why it is so important to keep a close eye on your blood pressure. Blood pressure is an indicator of atherosclerosis. Severe atherosclerosis can cause a stroke or heart attack, as well as severe damage to other organs throughout your body.

A heart attack occurs when a blood clot suddenly cuts off most or all blood supply to a part of the heart. When this happens, the cells of the heart do not receive enough oxygen-rich blood to survive, and they die. A heart attack most often occurs as a direct result of atherosclerosis. Time is essential when experiencing a heart attack. Time that passes without treatment to restore blood flow is more time for cells of the heart to die, and for permanent damage to occur.

One common misconception is that a woman is less likely to experience a heart attack than a man. The truth is that heart attacks don't discriminate. And women need to be able to identify the symptoms of a heart attack immediately, and seek treatment.
So, start believing that a heart attack is just as likely to affect you as it is your father, brother or husband in order to be prepared to recognize the symptoms and act in time.

According to The American Heart Association, your body will likely send one or more of the following warning signals if you are experiencing a heart attack:

- Uncomfortable pressure, fullness, squeezing or pain in the center of the chest that lasts more than a few minutes.
- Pain spreading to the shoulders, neck or arms. The pain may be mild or intense. It may feel like pressure, tightness, burning or heavy weight. It may be located in the chest, upper abdomen, neck, jaw, or inside the arms or shoulders.
- Chest discomfort with lightheadedness, fainting, sweating, nausea or shortness of breath.
- Anxiety, nervousness or cold, sweaty skin.
- Paleness or pallor.
- Increased or irregular heart rate.
- Feeling of impending doom.

If you experience any of these symptoms, it's important that you get to the hospital immediately and seek medical attention for a heart attack as soon as possible.

Contrary to popular belief, heart failure does not mean that the heart has completely stopped working. Heart failure is the term used to describe the condition when a heart is not working as efficiently as it should. This can be caused by a number of other heart problems, including atherosclerosis, high blood pressure or a past heart attack. Heart failure affects approximately 2.5 million women, and that number is growing.

Symptoms of heart failure include:

- Shortness of breath
- Persistent coughing or wheezing
- Buildup of excess fluid in body tissues, also called edema
- Loss of appetite
- Increased heart rate

If you experience any of these symptoms, you should take the right steps toward better heart health. You may also ask your physician about certain medical treatments and the effectiveness of certain drugs in treating heart failure.
Heart Valve Disease

There are many different types of heart valve disease, which affect the valves that control the flow of blood between the chambers of the heart. Valve diseases on the right side of the heart, resulting from diseases of the pulmonary and tricuspid valves, are rare and are usually caused by congenital heart problems. But valve diseases on the left side of the heart, resulting from diseases of the aortic and mitral valves, are more common and can lead to an accumulation of fluids in the lungs.

Valve diseases may be a consequence of narrowed valves, due to atherosclerosis or other damage to the heart. Leaking valves are often due to bacterial infections or inflammation, an enlargement of the heart, or mitral valve prolapse.

Heart valve disease may often go on for years without any signs or symptoms. And while some cases of heart valve disease are not life threatening, others may become severe and carry health risks. More severe heart valve disease can cause noticeable symptoms, so it's important to see your doctor regularly and take all of the precautions you can to maintain a healthy heart.

Stroke is a cardiovascular disease because it originates in the circulatory system, even though it ultimately affects the brain. A stroke occurs when an artery ruptures or is blocked, causing a decrease of blood supply to the brain. Various symptoms may result from this type of blockage, including:

- Loss of vision
- Speech problems
- Severe headache
- Asymmetrical drooping in the face

Transient Ischemic Attack (TIA)

A TIA occurs when the symptoms of a stroke appear, but subside. TIA is a stroke warning sign and should also be treated as a medical emergency. If you experience any of the above-mentioned symptoms, you need to treat it as a medical emergency and get to a hospital as soon as possible. There is a short window of time surrounding the onset of these symptoms that is best for treating stroke.
Immediately call 911 if you experience any of the following signs of a TIA or stroke:

- Sudden numbness or weakness of your face, arms or legs
- Sudden numbness in one side of the body
- Sudden severe headache
- Sudden loss of coordination or trouble walking
- Sudden dizziness
- Sudden trouble seeing
- Sudden confusion
- Sudden trouble speaking or understanding

Risks and Prevention

Stroke is highly preventable. Many of the leading risk factors for stroke can be avoided by maintaining a healthy, active lifestyle. Preexisting cardiovascular disease often leads to stroke. If you have been diagnosed with a cardiovascular disease, talk to your provider about the steps you can take to prevent stroke.

Risk factors for stroke include:

- Hypertension (high blood pressure)
- Heavy alcohol consumption
- High cholesterol
- Age (over 55)
- Family history
- Previous stroke
- Diet high in salt

Many studies have shown a higher incidence of stroke in African-American women than in women of other races. More studies are needed to determine a cause for this correlation. The risk for stroke is significantly increased if you smoke while taking birth control, especially after age 35.

There are many, many more variations of heart disease. Many are preventable through maintaining a heart-healthy lifestyle. Some are not preventable. Visit our Diseases and Conditions pages to learn more about individual diseases.
Analysis of the motor proteins

Cytoskeletal and motor proteins have been studied extensively in the past. They are involved in diverse processes such as cell division, cellular transport, neuronal transport processes, and muscle contraction, to name a few [1, 2]. Three kinds of molecular motor proteins have been identified so far: myosins, kinesins, and dyneins (Figure 1). While kinesins and dyneins move on microtubule tracks (blue lines), myosins are the only motors that use the energy of ATP hydrolysis to power movement along actin filaments (red lines).

Motor proteins in particular form large superfamilies. For example, vertebrates contain up to 60 myosins and about the same number of kinesins, spread over more than a dozen distinct classes. Kinesins are in general the smallest motor proteins and mostly perform their task as homodimers. Most myosins bind one or more calmodulin-like light chains and thus exist as heteromultimers. The cytoplasmic dynein/dynactin motor protein complex has a size of about 5 MDa and consists of multiple copies of each of its 17 subunits.

Figure 1: Schematic representation of a eukaryotic cell, showing the actin (red lines) and microtubule (blue lines) cytoskeleton and different types of motor proteins.

1. RD Vale: The molecular motor toolbox for intracellular transport. Cell 2003, 112:467-80.
2. M Schliwa, G Woehlke: Molecular motors. Nature 2003, 422:759-65.
Significance and context

Although the word genome was coined to refer to the genetic material, the meaning of genomics has widened to cover analysis of both RNA and proteins. Earlier functional genomics studies focused on RNA expression, with high-throughput analysis achieved by the immobilization of myriad cDNAs on a solid surface, such as a microscope slide - a 'DNA chip'. But, as the biological components acting in cell function are mostly proteins, we must study the proteome - the proteins expressed from a genome - to fully understand a living organism. Purified proteins can now be immobilized at high density on flat surfaces to give 'protein chips', which will aid high-throughput characterization of the biochemical and functional properties of cellular proteins.

Zhu et al. have expressed 5,800 yeast open reading frames (ORFs), encoding about 80% of the yeast proteome, as fusion proteins (fusions with glutathione-S-transferase, GST, and a histidine tag, His6). The proteins were expressed in yeast, to ensure proper folding and posttranslational modification, and were detergent-extracted. Random protein samples were then analyzed for quality and quantity by immunoblotting. The 5,800 different proteins were printed in duplicate onto glass slides using a commercial microarrayer. The authors decided on nickel-coated slides, which retain the proteins by interacting with the histidine tags. They were able to spot 13,000 protein samples in half the area of a standard glass microscope slide.

The resulting yeast protein chip was tested by probing for protein-protein interactions and protein-lipid interactions. On probing with biotinylated calmodulin (a regulatory calcium-binding protein), for example, the authors detected six known calmodulin partners and an additional 33 potentially interacting proteins. Sequence pattern searches revealed that 14 of these 39 polypeptides had a particular protein motif thought to be responsible for their interaction with calmodulin.
Four more known calmodulin partners went undetected in this experiment, for unknown reasons. The negative control, using streptavidin alone as the probe, was positive for Pyc1p, a pyruvate carboxylase homolog that contains a conserved biotin-attachment site; this showed that Pyc1 protein is indeed biotinylated in vivo.

In looking for protein-lipid interactions, Zhu et al. focused on phosphatidylinositol (PI)-binding proteins. As probes they prepared phosphatidylcholine (PC) liposomes that also included several phosphorylated forms of PIs and a biotinylated lipid. When tested against the protein chip, a total of 150 'above-background' signals were obtained. These included known membrane proteins as well as uncharacterized polypeptides. The authors sorted the responding proteins according to the strength of their interactions and also characterized them by lipid preference and specificity.

The construction of a 'proteome chip' is technically feasible, and such chips can be used to study protein-protein and protein-lipid interactions. The authors recognize that their approach, although very useful, has limitations. For example, there is the possibility of indirect interactions due to contaminating peptides co-purifying with the immobilized protein. Given the stringency of the purification process, however, most of the proteins that are printed are presumed to be single polypeptides. The authors also note that properly folded secreted proteins might be under-represented, as the GST sequence and the histidine tag are fused at the amino terminus of the protein sequence, where a signal peptide would be.

The system developed by Zhu et al. could be used to study interactions between immobilized proteins and other molecules such as therapeutic agents, if the interactions are strong enough to be detectable and if one can find a way to stabilize the complexes.
In future, to give a more realistic picture of interactions, the environment of the immobilized proteins will need to reproduce in vivo conditions better. How, for example, does an integral membrane protein properly fold if surrounded only by detergent, without the lipids that help it attain its normal conformation? There may also be interactions that go undetected under stringent experimental conditions; if protein A interacts only weakly with its partners B and C but more strongly with a BC complex, single interactions will be undetectable unless stabilized by agents able to crosslink the interacting proteins.

Might we soon have thousands of well-characterized antibodies directed against every component of a proteome? To compare protein expression in different cellular states, one would simply label the proteins in one state with one fluorochrome and those in the other with another color. Then antigen-antibody interactions will magically deliver a complete picture.
A paper from the National Institutes of Health in the United States has evaluated the separate and combined effects of the frequency of alcohol consumption and the average quantity of alcohol drunk per occasion, and how these relate to mortality risk from individual cancers as well as all cancers. The analysis is based on repeated administrations of the National Health Interview Survey in the US, assessing more than 300,000 subjects who suffered over 8,000 deaths from cancer. The research reports on total cancer deaths and deaths from lung, colorectal, prostate, and breast cancers.

The overall message of this analysis is that light to moderate alcohol intake does not appear to increase the risk of all-site cancer (and light drinking was shown in this study to be associated with a significant decrease in risk). Similarly, light to moderate consumption was not associated with site-specific cancers of the lung, colorectum, breast, or prostate. As quantity consumed increased from 1 drink on drinking days to 3 or more drinks on drinking days (a US standard drink contains 14 g of alcohol), risk of all-site cancer mortality increased by 22% among all participants.

For total alcohol consumption (frequency x quantity), the data indicate a significant reduction in the risk of all-site cancers (RR=0.87, CI 0.80-0.94). Moderate drinking consistently shows no effect in the analysis, and only heavier drinking was associated with an increase in all-site cancer risk. For site-specific cancers, an increase in risk of lung cancer was seen for heavier drinkers, with a tendency for less cancer among light drinkers. There was no evidence of an effect of total alcohol consumption on colorectal, prostate, or breast cancer.

The authors excluded non-drinkers in a second analysis in which they used categories of usual daily quantity and of frequency of consumption in an attempt to investigate their separate effects.
For all-site cancer and for lung cancer, these results again show an increase in risk only for drinkers reporting greater amounts of alcohol. The data also show an increase in cancer risk from more frequent drinking among women but not among men. For colorectal, prostate, and breast cancer, there is no clear pattern of an increase in risk from quantity of alcohol consumed. For frequency of drinking, again there is a suggestion of an increase in mortality risk with more frequent drinking, although the trends are not statistically significant.

Heavier drinking (three drinks or more per occasion) is known to be associated with a large number of adverse health effects, including certain cancers, as was shown in this study. When considering cancer, alcohol consumption should not be considered in isolation, but in conjunction with other lifestyle behaviours (especially smoking when considering lung cancer). We agree with the authors that both quantity and frequency of consumption need to be considered when evaluating the relation of alcohol to cancer; further, beverage-specific effects need to be evaluated in more detail.

More information: Breslow RA, Chen CM, Graubard BI, Mukamal KJ. Prospective study of alcohol consumption quantity and frequency and cancer-specific mortality in the US population. Am J Epidemiol 2011; DOI: 10.1093/aje/kwr210
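To make the relative-risk figures quoted above concrete, the sketch below computes a relative risk and its 95% confidence interval on the log scale from cohort counts. The counts are hypothetical, chosen only so that the point estimate matches the RR=0.87 reported for total consumption; they are not data from the study:

```python
import math

# Hypothetical counts for illustration only -- not from the study.
# "Exposed" could be drinkers, "unexposed" non-drinkers.
cases_exp, n_exp = 870, 100_000
cases_unexp, n_unexp = 1_000, 100_000

# Relative risk = ratio of cumulative incidences
rr = (cases_exp / n_exp) / (cases_unexp / n_unexp)

# Standard error of log(RR) for cumulative-incidence (risk ratio) data
se = math.sqrt(1 / cases_exp - 1 / n_exp + 1 / cases_unexp - 1 / n_unexp)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The interval is built on the log scale because log(RR) is approximately normally distributed; an interval that excludes 1.0 on both sides, like the study's 0.80-0.94, is what makes a risk reduction "significant" in the sense used above.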
Crossed eyes, also called strabismus, occurs when the eyes appear to be misaligned and point in different directions. Strabismus can occur at any age, but is most common in infants and young children. It can be seen in up to 5 percent of children, affecting boys and girls equally.

Strabismus can occur part of the time (intermittent) or all of the time (constant). Intermittent strabismus may worsen when the eye muscles are tired, late in the day, for example, or during an illness. Parents may notice their infant's eyes wandering from time to time during the first few months of life, especially when the infant is tired. This occurs because the infant is still learning to focus his or her eyes and to move them together. Most babies outgrow this intermittent strabismus by the age of 3 months.

Strabismus can be caused by problems with the eye muscles, with the nerves that control the eye muscles or with the brain, where the signals for vision are processed. Strabismus can accompany some illnesses such as diabetes, high blood pressure, multiple sclerosis, myasthenia gravis or thyroid disorders.

Strabismus is classified according to the direction of misalignment. When one eye is looking straight ahead, the other eye may turn inward toward the nose (esotropia or convergent), outward toward the ear (exotropia or divergent), downward (hypotropia) or upward (hypertropia). Esotropia is the most common type of strabismus and appears in several variations:

Infantile esotropia is present at birth or develops within the first six months of life. The child often has a family history of strabismus. Although most children with infantile esotropia are otherwise normal, there is a high incidence of this disorder in children with cerebral palsy and hydrocephalus. Many infants appear to have strabismus, but do not.
Rather, they have a condition known as pseudostrabismus (or pseudoesotropia), in which a widened nasal bridge or an extra fold of skin makes the white sclera less visible on the nose side of the eye. This gives the appearance that the eyes are crossed. This usually goes away as the infant grows and the facial structures change.

Accommodative esotropia is seen in children who are very farsighted. Their eyes cross because of difficulty focusing on nearby objects. Parents notice the child's eyes turning in sometimes, usually when he or she is concentrating on something up close. Accommodative esotropia typically is diagnosed between ages 2 and 3 years. A family history of this condition is common.

Strabismus has mistakenly been called lazy eye or amblyopia, which refers to diminished vision in one or both eyes beyond what is expected after correcting any eye problem as fully as possible. However, strabismus can lead to amblyopia. When the eyes are not aligned, the brain receives two different images, resulting in double vision. In young children the visual system has not reached full maturity, and the brain is able to suppress the image from one eye to avoid double vision. Amblyopia results if vision from one eye is consistently suppressed and the other eye becomes dominant. Among children with strabismus, one-third to one-half develop amblyopia. Although strabismus may be obvious to the observer, only an eye doctor can confirm the diagnosis of amblyopia.
Stem cells on threads

1. Weaving stem cells into synthetic universal tissue (UK)

Embryonic stem cells can survive being spun into polymer threads – a technique that could be used to weave flexible synthetic tissues able to adapt to any transplant environment, say UK biophysicists. The approach could be a step towards the production of artificial organs.

There are a number of competing techniques for shaping living cells into custom-made tissue, including one based on inkjet printing, and another that uses air pressure to pull a cell solution into long threads. The latter technology can weave networks of thread containing live brain cells without damaging them. Now, Suwan Jayasinghe's team at University College London has shown that a similar technique can be employed to create threads of embryonic stem cells. The group say this is the first time such cells have been printed using any technique.

The team use a technique called electrospraying, in which two stainless steel needles, one inside the other, combine a stream of a viscous biodegradable polymer with a suspension of embryonic stem cells. Applying a voltage to the needles charges the polymer and cells, and they accelerate towards an "earthed" copper ring a short distance beneath, emerging as a single thin thread.

2. Bioscaffolds with blood-flow networks have been made, on which blood, fat, and bone marrow cells grew. It is now possible to make rejection-free, three-dimensional organ structures.

A novel approach overcomes organ-construction obstacles by using autologous explanted microcirculatory beds (EMBs) as bioscaffolds for engineering complex three-dimensional constructs. In this study, EMBs consisting of an afferent artery, capillary beds, efferent vein, and surrounding parenchymal tissue are explanted and maintained for 24 h ex vivo in a bioreactor that preserves EMB viability and function.
Given the rapidly advancing field of stem cell biology, EMBs were subsequently seeded with three distinct stem cell populations: multipotent adult progenitor cells (MAPCs), and bone marrow- and adipose tissue-derived mesenchymal stem cells (MSCs).

Scientists from Stanford and New York University Langone Medical Center describe how they were able to use a "scaffolding" material extracted from the groin area of mice on which stem cells from blood, fat, and bone marrow grew. This advance clears two major hurdles to bioengineered replacement organs, namely a matrix on which stem cells can form a three-dimensional organ and transplant rejection.

Synthetic tissue and better scaffolds for structure are steps towards fully functional artificial organs. Another recent study showed how the artificial capillary networks needed to feed such organs could be grown using cotton candy at very low cost.
One of the more common birth defects is syndactyly, in which two or more fingers are fused together. Surgical correction involves cutting the tissue that connects the fingers, then grafting skin from another part of the body. (The procedure is more complicated if bones are also fused.) Surgery can usually provide a full range of motion and a fairly normal appearance, although the color of the grafted skin may be slightly different from the rest of the hand. Other common congenital defects include short, missing, or deformed fingers, immobile tendons, and abnormal nerves or blood vessels. In most cases, these defects can be treated surgically and significant improvement can be expected.

Syndactyly requires surgical intervention. Full-term infants can be scheduled for elective surgical procedures as early as 5 or 6 months of age; surgery before this age can increase anesthetic risks. Prior to that time, no intervention is generally necessary if there are no problems. If there is an associated paronychia, which can occur with complex syndactyly, the parents are given instructions to wash the child's hands thoroughly with soap and water and to apply a topical antibacterial solution or ointment. Oral antibiotics are given when indicated.

The timing of surgery is variable. However, if more fingers are involved and the syndactyly is more complex, release should be performed earlier. Early release can prevent the malrotation and angulation that develop from differential growth rates of the involved fingers. In persons with complex syndactyly, the author performs the first release of the border digits when the individual is approximately 6 months old. This approach is used because differential growth rates are observed, particularly between the small finger and ring finger or between the thumb and index finger. Prolonged syndactyly between these digits can cause permanent deformities.
If more than one syndactyly is present in the same hand, simultaneous surgical release can be performed, provided only one side of the involved fingers is released. For example, in a 4-finger syndactyly involving the index, long, ring, and small fingers, the index finger can be released from the long finger, and the small finger can be released from the ring finger, leaving a central syndactyly involving the long and ring fingers (see Images 27-28). If both hands are involved, bilateral releases can be performed at one operative setting. Perform bilateral releases whenever feasible to reduce the number of surgeries and the associated risks. Postoperative bilateral immobilization of the upper extremities is well tolerated in the child who is younger than 18 months. The increasingly active child who is older than 18 months has a difficult time with bilateral immobilization. Therefore, in children older than 18 months, any procedures must be staged unilaterally. The remaining syndactyly between the long finger and ring finger can be released approximately 6 months later (see Images 29-30). In an individual with isolated central syndactyly between the long finger and ring finger, the release need not be accomplished until the second year of life because of similar growth rates between the long finger and ring finger. It is preferable to complete all major reconstructions before a child is school age.
Diabetes is a disorder of metabolism, in which the body is unable to regulate its blood glucose levels appropriately. Glucose, a simple sugar, comes from the carbohydrates that you eat. Your body synthesizes and stores glucose, which it then uses as a major source of energy. For glucose to get into cells, insulin, a hormone produced in the pancreas, must be present. In people with diabetes, the pancreas either produces little or no insulin, or the body cells do not respond to the insulin that it produces. As a result, glucose can't get into the cells of the body and glucose levels in the blood become elevated. Over time, the high blood sugar levels damage many organs of the body. Certain factors can increase the risk of developing diabetes. People who have close family members with diabetes and those who are overweight have a greater chance of developing diabetes. Also, the risk of diabetes is increased in some ethnic groups including people who are African-American, Latino American or Native American. Other factors that may increase the risk of diabetes include high blood pressure and hyperlipidemia (elevated cholesterol). Symptoms of high blood sugar include increased thirst and urination, blurred vision, fatigue, and weight loss. In some people, the elevated blood sugar may lead to recurrent infections such as urinary tract infection, vaginal yeast infection or skin infections. However, many people with diabetes may go for many years without symptoms. For that reason, it is recommended that all adults age 45 and older should be tested for diabetes every three years. People with diabetes are at risk for complications that may affect the eyes, kidneys, nerves and circulatory system. Managing diabetes requires that each patient establish therapy goals that include target blood sugar range, weight management, and dietary and lifestyle changes. 
Foot Care for People with Diabetes

Foot ulcers, a common and costly complication of diabetes, can easily be prevented through self-examination and proper foot care. When left untreated, however, foot ulcers can lead to infection, gangrene and lower limb amputation. Most often they result from minor foot trauma and wound-healing failure. Diabetes-related amputation accounts for 51% of all amputations in the United States. Because of poor circulation and nerve damage to the feet, people with diabetes are more likely to develop infections even from a minor foot injury. For this reason, people with diabetes should treat their feet with special care.

Diabetic Eye Disease

Diabetic eye disease refers to a group of eye problems that people with diabetes may face as a complication of the disease. All can cause severe vision loss or even blindness. The most common form of diabetic eye disease is diabetic retinopathy, a leading cause of blindness in adults; nearly half of people with diabetes will develop some degree of this disease during their lifetime. It is caused by changes in the blood vessels of the retina that can lead to blindness. If you have diabetes, you should have your eyes examined at least once a year. Your eyes should be dilated during the exam so that your ophthalmologist can see the insides of the eye more clearly and detect signs of the disease. Diabetic eye disease can be treated.
Osteogenesis imperfecta is a condition causing extremely fragile bones.

Brittle bone disease

Osteogenesis imperfecta (OI) is a congenital disease, meaning it is present at birth. It is frequently caused by a defect in the gene that produces type I collagen, an important building block of bone. There are many different defects that can affect this gene. The severity of OI depends on the specific gene defect.

OI is an autosomal dominant disease. That means if you have one copy of the gene, you will have the disease. Most cases of OI are inherited from a parent, although some cases are the result of new genetic mutations. A person with OI has a 50% chance of passing on the gene and the disease to their children.

All people with OI have weak bones, which makes them susceptible to fractures. Persons with OI are usually below average height (short stature). However, the severity of the disease varies greatly. The classic symptoms include:
- Blue tint to the whites of their eyes (blue sclera)
- Multiple bone fractures
- Early hearing loss (deafness)

Because type I collagen is also found in ligaments, persons with OI often have loose joints (hypermobility) and flat feet. Some types of OI also lead to the development of poor teeth.

Symptoms of more severe forms of OI may include:
- Bowed legs and arms
- Scoliosis (S-curve spine)

Exams and Tests

OI is usually suspected in children whose bones break with very little force. A physical examination may show that the whites of their eyes have a blue tint. A definitive diagnosis may be made using a skin punch biopsy. Family members may be given a DNA blood test.
If there is a family history of OI, chorionic villus sampling may be done during pregnancy to determine if the baby has the condition. However, because so many different mutations can cause OI, some forms cannot be diagnosed with a genetic test. The severe form of type II osteogenesis imperfecta can be seen on ultrasound when the fetus is as young as 16 weeks.

There is not yet a cure for this disease. However, specific therapies can reduce the pain and complications associated with OI. Bisphosphonates are drugs that have been used to treat osteoporosis. They have proven to be very valuable in the treatment of OI symptoms, particularly in children. These drugs can increase the strength and density of bone in persons with OI. They have been shown to greatly reduce bone pain and fracture rate (especially in the bones of the spine).

Low impact exercises such as swimming keep muscles strong and help maintain strong bones. Such exercise can be very beneficial for persons with OI and should be encouraged. In more severe cases, surgery to place metal rods into the long bones of the legs may be considered to strengthen the bone and reduce the risk of fracture. Bracing can also be helpful for some people. Reconstructive surgery may be needed to correct any deformities. Such treatment is important because deformities (such as bowed legs or a spinal problem) can significantly affect a person's ability to move or walk.

Regardless of treatment, fractures will occur. Most fractures heal quickly. Time in a cast should be limited, since bone loss (disuse osteoporosis) may occur when you do not use a part of your body for a period of time. Many children with OI develop body image problems as they enter their teenage years. A social worker or psychologist can help them adapt to life with OI.

How well a person does depends on the type of OI they have.
- Type I, or mild OI, is the most common form. Persons with this type can live a normal lifespan.
- Type II is a severe form that usually leads to death in the first year of life.
- Type III is also called severe OI. Persons with this type have many fractures starting very early in life and can have severe bone deformities. Many become wheelchair bound and usually have a somewhat shortened life expectancy.
- Type IV, or moderately severe OI, is similar to type I, although persons with type IV often need braces or crutches to walk. Life expectancy is normal or near normal.

There are other types of OI, but they occur very infrequently and most are considered subtypes of the moderately severe form (type IV).

Complications are largely based on the type of OI present. They are often directly related to the problems with weak bones and multiple fractures. Complications may include:
- Hearing loss (common in type I and type III)
- Heart failure (type II)
- Respiratory problems and pneumonias due to chest wall deformities
- Spinal cord or brain stem problems
- Permanent deformity

When to Contact a Medical Professional

Severe forms are usually diagnosed early in life, but mild cases may not be noted until later in life. Make an appointment with your health care provider if you or your child have symptoms of this condition. Genetic counseling is recommended for couples considering pregnancy if there is a personal or family history of this condition.

Marini JC. Osteogenesis imperfecta. In: Kliegman RM, Behrman RE, Jenson HB, Stanton BF, eds. Nelson Textbook of Pediatrics. 19th ed. Philadelphia, Pa: Saunders Elsevier; 2011:chap 692.

Reviewed By: Neil K. Kaneshiro, MD, MHA, Clinical Assistant Professor of Pediatrics, University of Washington School of Medicine. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc.
Antiperspirants/Deodorants and Breast Cancer - There is no conclusive research linking the use of underarm antiperspirants or deodorants and the subsequent development of breast cancer. - Research studies of underarm antiperspirants or deodorants and breast cancer have been completed and provide conflicting results. - Can antiperspirants or deodorants cause breast cancer? Articles in the press and on the Internet have warned that underarm antiperspirants (a preparation that reduces underarm sweat) or deodorants (a preparation that destroys or masks unpleasant odors) cause breast cancer (1). The reports have suggested that these products contain harmful substances, which can be absorbed through the skin or enter the body through nicks caused by shaving. Some scientists have also proposed that certain ingredients in underarm antiperspirants or deodorants may be related to breast cancer because they are applied frequently to an area next to the breast (2, 3). However, researchers at the National Cancer Institute (NCI), a part of the National Institutes of Health, are not aware of any conclusive evidence linking the use of underarm antiperspirants or deodorants and the subsequent development of breast cancer. The U.S. Food and Drug Administration (FDA), which regulates food, cosmetics, medicines, and medical devices, also does not have any evidence or research data that ingredients in underarm antiperspirants or deodorants cause cancer. - What do scientists know about the ingredients in antiperspirants and deodorants? Aluminum-based compounds are used as the active ingredient in antiperspirants. These compounds form a temporary plug within the sweat duct that stops the flow of sweat to the skin's surface. Some research suggests that aluminum-based compounds, which are applied frequently and left on the skin near the breast, may be absorbed by the skin and cause estrogen-like (hormonal) effects (3). 
Because estrogen has the ability to promote the growth of breast cancer cells, some scientists have suggested that the aluminum-based compounds in antiperspirants may contribute to the development of breast cancer (3). Some research has focused on parabens, which are preservatives used in some deodorants and antiperspirants that have been shown to mimic the activity of estrogen in the body’s cells (4). Although parabens are used in many cosmetic, food, and pharmaceutical products, according to the FDA, most major brands of deodorants and antiperspirants in the United States do not currently contain parabens. Consumers can look at the ingredient label to determine if a deodorant or antiperspirant contains parabens. Parabens are usually easy to identify by name, such as methylparaben, propylparaben, butylparaben, or benzylparaben. The National Library of Medicine’s Household Products Database also has information about the ingredients used in most major brands of deodorants and antiperspirants. The belief that parabens build up in breast tissue was supported by a 2004 study, which found parabens in 18 of 20 samples of tissue from human breast tumors (5). However, this study did not prove that parabens cause breast tumors (4). The authors of this study did not analyze healthy breast tissue or tissues from other areas of the body and did not demonstrate that parabens are found only in cancerous breast tissue (5). Furthermore, this research did not identify the source of the parabens and cannot establish that the buildup of parabens is due to the use of deodorants or antiperspirants. More research is needed to specifically examine whether the use of deodorants or antiperspirants can cause the buildup of parabens and aluminum-based compounds in breast tissue. Additional research is also necessary to determine whether these chemicals can either alter the DNA in some cells or cause other breast cell changes that may lead to the development of breast cancer. 
- What have scientists learned about the relationship between antiperspirants or deodorants and breast cancer? In 2002, the results of a study looking for a relationship between breast cancer and underarm antiperspirants/deodorants were reported (6). This study did not show any increased risk for breast cancer in women who reported using an underarm antiperspirant or deodorant. The results also showed no increased breast cancer risk for women who reported using a blade (nonelectric) razor and an underarm antiperspirant or deodorant, or for women who reported using an underarm antiperspirant or deodorant within 1 hour of shaving with a blade razor. These conclusions were based on interviews with 813 women with breast cancer and 793 women with no history of breast cancer. Findings from a different study examining the frequency of underarm shaving and antiperspirant/deodorant use among 437 breast cancer survivors were released in 2003 (7). This study found that the age of breast cancer diagnosis was significantly earlier in women who used these products and shaved their underarms more frequently. Furthermore, women who began both of these underarm hygiene habits before 16 years of age were diagnosed with breast cancer at an earlier age than those who began these habits later. While these results suggest that underarm shaving with the use of antiperspirants/deodorants may be related to breast cancer, it does not demonstrate a conclusive link between these underarm hygiene habits and breast cancer. In 2006, researchers examined antiperspirant use and other factors among 54 women with breast cancer and 50 women without breast cancer. The study found no association between antiperspirant use and the risk of breast cancer; however, family history and the use of oral contraceptives were associated with an increased risk of breast cancer (8). 
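The studies above are case-control comparisons: exposure frequencies among people with breast cancer are compared with those among people without it, typically summarized as an odds ratio (OR). As a minimal sketch of how such an OR is computed, the cell counts below are entirely hypothetical (the studies cited report only totals, e.g. 813 cases and 793 controls in 2002, not exposure breakdowns); the counts are chosen only to illustrate an OR near 1, consistent with the "no increased risk" findings.

```python
import math

def odds_ratio(a, b, c, d):
    """OR = (a*d)/(b*c), where a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

def or_confidence_interval(a, b, c, d, z=1.96):
    """Approximate 95% CI for the OR using the standard error of the
    log odds ratio (Woolf's method)."""
    or_ = odds_ratio(a, b, c, d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (or_ * math.exp(-z * se), or_ * math.exp(z * se))

# Hypothetical 2x2 table: antiperspirant use among cases vs. controls
a, b, c, d = 600, 590, 213, 203
print(round(odds_ratio(a, b, c, d), 2))      # OR close to 1: no association
print(or_confidence_interval(a, b, c, d))    # CI spanning 1 is non-significant
```

A confidence interval that includes 1.0, as here, is how such studies conclude that an exposure shows no detectable association with the disease.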
Because studies of antiperspirants and deodorants and breast cancer have provided conflicting results, additional research is needed to investigate this relationship and other factors that may be involved.
- Where can someone get more information on breast cancer risk? People who are concerned about their breast cancer risk are encouraged to talk with their doctor. More information about breast cancer risk can be found in What You Need To Know About™ Breast Cancer, or in the PDQ® Prevention Summary for Patients on Breast Cancer.
- Jones J. Can rumors cause cancer? Journal of the National Cancer Institute 2000; 92(18):1469–1471. [PubMed Abstract]
- Darbre PD. Underarm cosmetics and breast cancer. Journal of Applied Toxicology 2003; 23(2):89–95. [PubMed Abstract]
- Darbre PD. Aluminium, antiperspirants and breast cancer. Journal of Inorganic Biochemistry 2005; 99(9):1912–1919. [PubMed Abstract]
- Harvey PW, Everett DJ. Significance of the detection of esters of p-hydroxybenzoic acid (parabens) in human breast tumours. Journal of Applied Toxicology 2004; 24(1):1–4. [PubMed Abstract]
- Darbre PD, Aljarrah A, Miller WR, et al. Concentrations of parabens in human breast tumours. Journal of Applied Toxicology 2004; 24(1):5–13. [PubMed Abstract]
- Mirick DK, Davis S, Thomas DB. Antiperspirant use and the risk of breast cancer. Journal of the National Cancer Institute 2002; 94(20):1578–1580. [PubMed Abstract]
- McGrath KG. An earlier age of breast cancer diagnosis related to more frequent use of antiperspirants/deodorants and underarm shaving. European Journal of Cancer Prevention 2003; 12(6):479–485. [PubMed Abstract]
- Fakri S, Al-Azzawi A, Al-Tawil N. Antiperspirant use as a risk factor for breast cancer in Iraq. Eastern Mediterranean Health Journal 2006; 12(3–4):478–482. [PubMed Abstract]

How can we help?
We offer comprehensive research-based information for patients and their families, health professionals, cancer researchers, advocates, and the public. - Call NCI’s Cancer Information Service at 1–800–4–CANCER (1–800–422–6237) - Visit us at http://www.cancer.gov or http://www.cancer.gov/espanol - Chat using LiveHelp, NCI’s instant messaging service, at http://www.cancer.gov/livehelp - E-mail us at [email protected] - Order publications at http://www.cancer.gov/publications or by calling 1–800–4–CANCER - Get help with quitting smoking at 1–877–44U–QUIT (1–877–448–7848) This text may be reproduced or reused freely. Please credit the National Cancer Institute as the source. Any graphics may be owned by the artist or publisher who created them, and permission may be needed for their reuse.
Troponins (highly sensitive biomarkers of myocardial damage) increase counts of myocardial infarction (MI) in clinical practice, but their impact on trends in admission rates for MI in national statistics is uncertain.

Cases coded as MI or other cardiac diagnoses in the Hospital Morbidity Data Collection (MI-HMDC) in Western Australia in 1998 and 2003 were classified using revised criteria for MI developed by an international panel convened by the American Heart Association (AHA criteria), using information on symptoms, ECGs and cardiac biomarkers abstracted from samples of medical notes. Age-sex standardized rates of MI-HMDC were compared with rates of MI based on AHA criteria including troponins (MI-AHA) or traditional biomarkers only (MI-AHAck).

Between 1998 and 2003, rates of MI-HMDC decreased by 3.5% whereas rates of MI-AHA increased by 17%, a difference largely due to increased false-negative cases in the HMDC associated with markedly increased use of troponin tests in cardiac admissions generally, and progressively lower test thresholds. In contrast, rates of MI-AHAck declined by 18%.

Increasing misclassification of MI-AHA by the HMDC may be due to reluctance by clinicians to diagnose MI based on relatively small increases in troponin levels. These influences are likely to continue. Monitoring MI using AHA criteria will require calibration of commercially available troponin tests and agreement on lower diagnostic thresholds for epidemiological studies. Declining rates of MI-AHAck are consistent with long-standing trends in MI in Western Australia, suggesting that neither MI-HMDC nor MI-AHA reflects the true underlying population trends in MI.

Keywords: AHA Medical/Scientific Statements; cardiac biomarkers; diagnosis; myocardial infarction; trends

Coronary heart disease (CHD), despite declining mortality, remains a major health problem in developed countries [1-3].
Reliable methods for monitoring acute CHD events, including myocardial infarction (MI), are therefore essential for evaluation of preventive and clinical services, particularly in view of the increasing prevalence of obesity and diabetes that could diminish or reverse the favourable trends in mortality. The World Health Organization (WHO) MONICA Project, which monitored trends and determinants of coronary heart disease including non-fatal MI for 10 years in 25 countries between 1983 and 1995, demonstrated the importance of population-based registers to monitor MI, using standardised methods of case-finding and unchanging diagnostic criteria. Unfortunately, such registers are costly and there have been few recent national or international register-based studies of trends in MI [4-7]. In many jurisdictions, routinely collected mortality and hospital morbidity data are therefore the most commonly used alternative for monitoring trends in MI. While such data have major shortcomings, studies in Finland, Sweden and Western Australia have previously shown reasonable agreement between trends based on registers and administrative data [7-9]. However, the recent widespread introduction into clinical practice of highly sensitive and specific biomarkers of myocardial damage, particularly troponin tests, raises doubts about the reliability of administrative data for monitoring MI. Several studies have demonstrated greatly increased counts of MI using troponin tests compared with traditional biomarkers [4,10]. Studies in Perth, Western Australia, and Denmark have shown that declining trends in hospital MI admissions reversed or levelled out since the introduction of troponin tests in 1996 [5,11]. 
In recognition of the potential problems for epidemiological studies, a panel of international experts, meeting under the auspices of the American Heart Association (AHA), have developed new criteria for MI for use in epidemiological studies, referred to here as the 'AHA criteria', which emphasise the importance of troponin in the diagnosis of MI. So far, there have been few population-based studies exploring the practical issues of implementing the new criteria or assessing the impact of troponin tests on trends in MI based on administrative data.

This study used the Hospital Morbidity Data Collection (HMDC), one of the core administrative datasets in the Western Australian Data Linkage System, to compare counts of non-fatal MI (MI-HMDC) in 1998 and 2003 with counts based on AHA criteria using all biomarkers including troponins (MI-AHA), or traditional biomarkers only (MI-AHAck) including creatine kinase (CK) or CK-MB in the classification algorithm.

The study population consisted of residents of the Perth Statistical Division (population 1.43 million in 2003) of Western Australia aged 35-79 years admitted to hospital in 1998 or 2003 for cardiac conditions or chest pain (International Classification of Diseases (ICD) 9th revision codes 401-429, 786.5; or ICD10 Australian Modification codes I10-I52, R07). Electronic records for the study population were extracted from the HMDC, identified by hierarchical discharge diagnosis as shown in Table 1, then linked to provide 28-day episodes of care.

Table 1. Hierarchical classification of cardiovascular discharge diagnoses in non-fatal cases which identified the validation population and sample

Non-fatal cases were defined as patients who were alive 28 days after admission. The episode records were then linked electronically to results of cardiac biomarkers provided by biochemistry departments.
Cases were then selected for validation of discharge diagnosis codes against information abstracted from medical notes, using the sampling scheme outlined in Additional File 1, Table S1. In brief, cases for validation consisted of random samples of all non-fatal cases coded as MI or unstable angina pectoris (UAP) in any diagnosis field, or cases with a principal diagnosis of other heart disease or chest pain who had positive biomarker test results (suspected false-negative MI). We excluded booked admissions for coronary artery bypass surgery or heart valve operations; angiograms with length of stay ≤ 1 day; and heart transplants.

Additional file 1. Methods: Additional Detail. Provides additional information on the selection of the validation sample, and the classification of biomarkers, ECGs and symptoms using AHA definitions. Includes two additional tables associated with methods and results sections of the main text, including Table 1 from the AHA 2003 classification showing counts from our validation samples.

Data were abstracted from medical notes directly into a Microsoft Access database by trained staff using a standardised data collection format. Data included: symptoms present on admission, results of biomarker tests (daily results for up to 5 days), reasons for possible false elevations of cardiac biomarkers (such as angioplasty, cardiac surgery, other major surgery, trauma, severe renal failure), and photocopies of up to five electrocardiographs (ECG). Other data, including demographic details and dates, were added directly from the HMDC extract.
Classification of myocardial infarction by 'AHA criteria'

A computer algorithm was developed to classify cases as Definite, Probable, Possible or Not MI according to AHA criteria, based on the combination of symptoms, biomarker results and ECG abnormalities as defined in the international panel report and illustrated in Additional File 1, Table S2.

All analyses were carried out in SAS version 9.1.3 SP4 for Windows. Population estimates of the total number of cases in each sample group were calculated by inflating the sampled cases by their sampling fraction (see Additional File 1, Table S1). This was achieved using the inverse of the sampling fraction as the variable in the weight statement of Proc Freq in SAS, and in Proc SurveyFreq when calculating sensitivity and PPV. Results are reported as counts and proportions, together with population estimates of sensitivity, positive predictive value (PPV) and their 95% confidence intervals for the HMDC coding of MI relative to 'AHA criteria' for Definite/Probable MI (Positive MI) and Definite/Probable/Possible MI (Any MI). The percentage by which the HMDC misclassified MI as defined by 'AHA criteria' was also calculated.

Counts of MI-HMDC were compared separately with counts of MI based on AHA criteria using all available biomarkers, including troponins (MI-AHA), or traditional biomarkers only (MI-AHAck). To allow for the effect of population increase between 1998 and 2003, we estimated age-sex standardised rates of admission for non-fatal MI in age group 35-79 years using the direct method by 5-year age group and sex, with the Australian estimated population at 30 June 2001 as the standard. Diagnostic thresholds for troponin tests were lower in 2003 than in 1998 as the sensitivity of the assays improved.
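The estimation steps described above (inverse-sampling-fraction weighting, sensitivity and PPV of the HMDC coding against the AHA reference, and direct age-sex standardization) can be sketched as follows. This is a minimal illustration, not the study's SAS code; the function names and the two-stratum example counts are hypothetical.

```python
def weighted_total(sampled_cases, sampling_fraction):
    """Inflate sampled cases to a population estimate using the inverse
    of the sampling fraction as the weight (as done via Proc Freq)."""
    return sampled_cases / sampling_fraction

def sensitivity_ppv(tp, fn, fp):
    """Sensitivity and positive predictive value of MI-HMDC coding
    relative to the AHA reference classification:
    tp = AHA-positive coded as MI, fn = AHA-positive not coded as MI,
    fp = coded as MI but AHA-negative."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return sensitivity, ppv

def direct_standardized_rate(stratum_rates, standard_pop):
    """Direct standardization: weight stratum-specific rates by a
    standard population (here, Australia at 30 June 2001)."""
    total = sum(standard_pop.values())
    return sum(stratum_rates[s] * standard_pop[s] for s in standard_pop) / total

# Hypothetical two-stratum example (real analysis used 5-year age-sex strata)
rates = {"M 35-54": 0.002, "M 55-79": 0.010}
std_pop = {"M 35-54": 700_000, "M 55-79": 300_000}
print(round(direct_standardized_rate(rates, std_pop), 5))
```

Standardizing both years' rates to the same reference population removes the effect of population growth and ageing, so that remaining differences reflect changes in admission rates rather than demography.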
We compared the distribution of ECG changes and biomarker results in cases of Positive MI coded in the HMDC as MI (true-positives) or Not MI (false-negatives) to examine the extent to which the lower thresholds were associated with false-negative cases. This study was approved by the Human Research Ethics Committees of the University of Western Australia, each of the eight hospitals from which data were collected, and by the Western Australian Confidentiality of Health Information Committee. The study was granted a waiver of informed consent. In 1998, there were 8939 28-day episodes of care for heart conditions or chest pain (Table 1) of which 3522 met our criteria for validation (see Additional File 1, Table S1) and from which 1456 non-fatal episodes were sampled for validation against medical notes. The equivalent numbers for 2003 were 9188 episodes of care, 3297 meeting validation criteria and 1108 sampled for validation. Between 1998 and 2003, episodes of care for MI increased by 11% whilst those for UAP decreased by 26%. In 1998, 81% of episodes of care for MI and 53% of UAP had troponin tests, increasing to 97% and 93% respectively in 2003. In comparison, the traditional biomarkers (CK and CK-MB) were used in 96% of episodes of MI care in 1998 and 93% in 2003. The prevalence of troponin tests in cases not coded as MI or UAP increased from 33% to 63%. AHA classification scheme Full details of the final AHA diagnostic classification of validated cases together with corresponding classifications of symptoms, ECG and biomarker results for 1998 and 2003 are shown in Additional File 1, Table S2. A feature of this is the dominating influence of biomarkers, particularly troponins, on the classification of MI. For example, in 1998, 74.8% of cases were classified as Definite MI because of biomarkers alone, whilst only 6.8% were classified as Definite MI based on evolving ECG changes alone. The corresponding values in 2003 were 81.1% (1342/1654) and 8.3% (138/1654). 
Comparison of MI-HMDC and MI-AHA Table 2 shows the AHA classification of MI cross-tabulated by HMDC diagnosis. As there were few cases of AHA Probable MI in either year (1.2% in 1998 and 3.5% in 2003), these were combined with Definite MI in all tables (Positive MI). The sensitivity, positive predictive value (PPV) and estimated misclassification of MI-HMDC for Positive MI and Any MI are shown in Table 3. In 1998, MI-HMDC overestimated Positive MI by 7.4% but underestimated it by 10.7% in 2003. Any MI (Positive + Possible MI) was underestimated by 12.4% in 1998 and by 21.3% in 2003. These temporal changes in misclassification between 1998 and 2003 resulted from general improvement in PPV (from 77.3% to 83.5% for Positive MI) but deteriorating sensitivity (from 82.9% to 74.6%). For example, in 1998, 12.3% of cases classified as Positive MI were coded as UAP in the HMDC (Table 2), and 4.8% as other heart diseases or chest pain, but in 2003 the respective proportions were 15.1% for UAP and 10.3% for other cardiac conditions. When conditions other than ischaemic heart disease were excluded from the analysis, the sensitivity of MI-HMDC increased to 86.1% from 82.9% in 1998, and to 80.8% from 74.6% in 2003. Table 2. Hierarchical diagnosis and classification of myocardial infarction based on AHA Criteria for non-fatal population estimates Table 3. Sensitivity and PPV of MI-HMDC for non-fatal population estimates based on AHA classification of myocardial infarction Table 3 also demonstrates some variation in PPV and sensitivity of MI-HMDC by broad age group and sex, but there was no consistent pattern except for generally lower PPV and sensitivity in the 70-79 year age group, and the variation could be due to chance as indicated by the 95% confidence intervals.
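Table 3's sensitivity and PPV figures come with 95% confidence intervals. The paper's intervals are design-based (Proc SurveyFreq); for unweighted counts a simple Wald interval illustrates the idea, with hypothetical counts below:

```python
import math

def proportion_with_wald_ci(successes, total, z=1.96):
    """Point estimate and Wald 95% CI for a proportion. This unweighted sketch
    ignores the survey design that Proc SurveyFreq accounts for."""
    p = successes / total
    se = math.sqrt(p * (1.0 - p) / total)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical: 746 of 1000 AHA Positive-MI cases coded as MI in the HMDC
sens, lo, hi = proportion_with_wald_ci(746, 1000)
```

The width of the interval shrinks with the number of validated cases, which is why the age-sex subgroup estimates in Table 3 are the least stable.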
Characteristics of false-negative cases To understand the declining sensitivity of MI-HMDC between 1998 and 2003 we studied the distributions of ECG changes and biomarker results in false-negative cases (AHA Positive MI, but not coded as MI in the HMDC). In 1998, 49% of MI-HMDC had ECG evidence of MI (diagnostic or positive ECGs), compared with 31% in false-negative cases. In 2003, the corresponding percentages were 42% and 23%. Conversely, in both years 28% of MI-HMDC had normal/other ECGs compared with around 45% in false-negative cases. We also found that troponin levels were substantially lower in false-negative cases than in true-positive cases as illustrated in Figure 1 which compares the distribution of troponin levels in true-positive and false-negative cases based on AHA Positive MI in 2003. Figure 1. Box plots of distribution of maximum values for troponin I and T according to HMDC coding of MI or not MI in cases classified as AHA Definite or Probable MI (Positive MI) based on positive troponin tests but non-specific or normal ECGs in the 2003 validation sample. + indicates mean value of distribution; • indicates extreme values beyond the y-axis scale. MI-HMDC represents true-positives and HMDC Not MI represents false-negatives. HMDC: Hospital Morbidity Data Collection; MI: myocardial infarction. a p < 0.0001 and b p = 0.0006 for MI vs not MI (two-sided Wilcoxon rank sum test with normal approximation). Comparison of counts of MI based on troponins or traditional biomarkers Table 4 shows the AHA classification by HMDC diagnosis when troponin tests are excluded from the classification algorithm. Compared with Table 2, this shows that troponin tests increased the number of Positive MI cases from 913 to 1324 in 1998 (45% increase) and from 853 to 1768 in 2003 (107% increase). For Any MI, troponin tests were associated with 43% more cases in 1998 and 82% more cases in 2003. 
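The troponin comparison in Figure 1 uses a two-sided Wilcoxon rank-sum test with normal approximation. A bare-bones version of that test (mid-ranks for ties, no tie correction in the variance) on illustrative data:

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test, normal approximation, no tie correction."""
    pooled = sorted(x + y)
    # mid-ranks: tied values share the average of their rank positions
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2.0   # mean of ranks i+1 .. j
        i = j
    w = sum(ranks[v] for v in x)               # rank sum of the first sample
    n1, n2 = len(x), len(y)
    mean = n1 * (n1 + n2 + 1) / 2.0
    var = n1 * n2 * (n1 + n2 + 1) / 12.0
    z = (w - mean) / math.sqrt(var)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Toy data standing in for maximum troponin values in the two HMDC groups
z, p = rank_sum_test([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

The troponin values themselves are not reproduced here; the test statistic is what Figure 1's p-values are based on.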
Figure 2 demonstrates these changes in counts as age-sex standardised rates to allow for population increase. Between 1998 and 2003, there was a small decline (3.5%) in age-sex standardised rates of admission for non-fatal MI-HMDC. In contrast, rates of Positive MI (with troponin) increased by 17% while rates of Positive MI (without troponin) declined by 18%. Rates of Any MI (with troponin) increased by 24% while rates of Any MI (without troponin) declined by 1.5%. Table 4. Hierarchical diagnosis and classification of myocardial infarction based on AHA Criteria using only CK biomarkers Figure 2. Trends in age-sex standardized admission rates for non-fatal MI in age group 35-79 years as recorded in the HMDC compared with AHA classification of MI. AHA: American Heart Association; HMDC: Hospital Morbidity Data Collection; MI: myocardial infarction; Any MI: AHA Definite/Probable/Possible MI; Positive MI: AHA Definite or Probable MI; MI-HMDC: MI coded in the HMDC. Discussion The introduction of troponins as highly sensitive and specific diagnostic tests for MI has revolutionised the clinical management of suspected myocardial infarction, but while their utility in clinical practice is unquestioned, problems in monitoring population trends in MI remain unresolved. We have made a three-way comparison of counts of MI-HMDC, MI-AHA (with troponin) and MI-AHAck (without troponin). When troponin is included in the AHA classification, MI-HMDC overestimated counts of Positive MI in 1998 but underestimated the counts in 2003. This disparity between the coded diagnosis and the troponin-based criteria in 2003 agrees with the finding by Roger and colleagues who, in their prospective study of the effects of troponin tests on counts of MI in Olmsted County, found substantially lower counts of cases with a final coded diagnosis of MI in medical records compared with counts based on troponin tests .
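Direct standardisation, as used for Figure 2's rates, weights stratum-specific rates by a fixed standard population. A toy two-stratum example (the paper used 5-year age-by-sex bands and the Australian population at 30 June 2001):

```python
def direct_standardised_rate(strata, per=100_000):
    """Direct method: strata are (cases, person_years, standard_population)
    per age-sex stratum; returns the standardised rate per `per` persons."""
    total_std = sum(std for _, _, std in strata)
    weighted = sum((cases / py) * std for cases, py, std in strata)
    return weighted / total_std * per

# Toy example: two strata instead of the paper's 5-year age-by-sex bands
rate = direct_standardised_rate([(10, 1000, 600), (20, 1000, 400)])
```

Because the standard population is fixed across years, changes in the standardised rate reflect changes in the stratum rates rather than population growth or ageing.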
They attributed this to a reluctance by clinicians to always accept relatively low levels of troponin as diagnostic of MI. In contrast, when troponin was excluded from the AHA classification, MI-HMDC overestimated counts of Positive MI, which actually decreased between 1998 and 2003. There was an even greater difference between the increasing rates of MI-AHA (with troponin) and the decreasing rates of MI-AHAck (without troponin). The decrease in MI-AHAck is consistent with the decline in admission rates for MI in Perth in the 10 years prior to the introduction of troponin tests . The marked divergence between MI-AHA (with troponin) and MI-AHAck is also consistent with a further study in Olmsted County that found that from 1987 to 2006, rates of MI declined by 20% when traditional biomarkers only were used in the diagnosis of MI, whereas rates of MI increased when based on troponin tests . An essential requirement for monitoring MI is that classification should be based on objective criteria that remain constant over time. In the case of troponin-based criteria this is unlikely because of progressive lowering of diagnostic thresholds as the precision of tests improves, and because of variation within and between hospitals in the several commercial, non-standardised tests in use. This is likely to increase the number of false-negative cases. A further likely reason for the increase in false-negative cases in 2003 was the marked general increase (from 33% to 63%) in use of troponin tests in all cardiac admissions other than MI or UAP. Thus, even though the prevalence of positive troponin tests in such cases was small, it had a relatively large negative impact on the sensitivity of MI-HMDC against MI-AHA. Strengths of the study Western Australia is geographically isolated with relatively small population losses from emigration, and is thus ideal for epidemiological studies.
It has comprehensive, linked health statistics systems spanning 30 years , and has been the site of several previous studies of trends in MI, including the WHO MONICA Project [9,13,15]. Record linkage allowed us to define 28-day episodes of care, thus eliminating inflation due to transfers and early readmissions, and provided a total population sampling frame for validation studies. Direct linkage of HMDC records to laboratory records of biomarker results allowed us to identify efficiently any potential false-negative cases of MI. Finally, despite the rapid uptake of troponin tests in Perth, the continued high use of CK tests in the diagnosis of MI allowed us to classify cases using both new and traditional biomarkers, providing a direct measure of the impact of troponin tests on the diagnosis of MI in administrative data over time. Limitations of the study Limited resources forced us to adopt a sampling strategy to validate the coding of MI in the HMDC. If sampling were not strictly random, errors may have occurred in population estimates. Our strategy of linking biomarker test results to identify possible false-negative cases of MI in the HMDC may have identified some cases in which elevated troponins were not due to an acute ischaemic event (for example, in chronic heart failure) but which nevertheless met the 'AHA criteria' for MI. Lack of statistical power due to limited resources also prevented us from fully exploring possible differences in the impact of troponins on rates of MI by age and gender. Our study has identified a number of issues for further investigation before the predominantly troponin-based 'AHA criteria' can be used with confidence in studies of trends in MI. The most urgent need is for calibration of the several commercial troponin tests in use, and for agreement on stratification of troponin results that would permit valid comparison of positive tests over time, despite changing diagnostic thresholds.
While universal standardization of troponin tests is unlikely ever to be possible, calibration for studies of trends at regional level, particularly when only a few laboratories are involved, should be possible. This would not invalidate comparative studies within or between countries in which the primary focus is on trends rather than cross-sectional differences between rates, provided that the methods of calibration are explicitly described. In view of the marked general increases in the use of troponin tests in acutely ill patients with both cardiac and non-cardiac conditions, further research is required to determine whether the search for false-negative cases should be restricted to cases coded within the ICD rubrics for ischaemic heart disease or chest pain. Finally, the International expert panel made no recommendations about the AHA categories of MI (Definite, Probable, Possible) that should be included in epidemiological studies. Definite plus Probable MI in the AHA classification appears to be comparable to "Definite MI" used as the main non-fatal event in the MONICA Project and similar to the definition of MI used in the Atherosclerosis Risk in Communities (ARIC) study . Whether Possible MI (or any subset thereof) should also be included needs to be determined. What is already known on this subject • Troponin tests increase counts of myocardial infarction (MI) in administrative data compared with counts based on traditional cardiac biomarkers. • Long-term trends in MI based on traditional biomarkers are declining, whereas those based on troponin tests are increasing. • Increased counts of MI associated with troponin tests using revised criteria for MI are only partly reflected in administrative data, possibly because physicians are reluctant to diagnose MI on the basis of relatively small increases in troponin levels.
What this study adds • This is the first population-wide study to explore the practical issues of implementing the revised criteria for myocardial infarction (MI) for use in epidemiological studies published by the American Heart Association (AHA) International panel in 2003. It shows that for trend analysis at least, the AHA criteria are flawed because they do not recognise variation in troponin assay thresholds in different laboratories or changes in diagnostic thresholds over time. This problem will increase with the development of even more sensitive troponin assays. • Studies of trends in MI (as exemplified by the WHO MONICA Project) have a basic requirement for unchanging objective criteria. Our study identifies further work that is required to standardise troponin testing for analysis of trends, particularly at the international level, if the AHA criteria are to have any utility. • Future epidemiological studies using AHA criteria will need to recognise the large increase in false-negative cases of MI associated with the large general increase in the prevalence of troponin testing in cases with other cardiac conditions or chest pain. • Since the introduction of troponins, routinely collected hospital statistics no longer reflect true underlying trends in MI. AHA: American Heart Association; HMDC: Hospital Morbidity Data Collection; MI: myocardial infarction; MI-HMDC: MI coded in the HMDC; MI-AHA: MI classified by the AHA criteria (including troponin tests); MI-AHAck: MI classified by the AHA criteria using traditional cardiac biomarkers only (excluding troponin tests); Positive MI: AHA Definite or Probable MI; Any MI: AHA Definite, Probable or Possible MI; UAP: unstable angina pectoris The authors declare that they have no competing interests. All authors read and approved the final manuscript. MH, MK, JF, JR, PS, JH are chief investigators for this study.
They designed the study, provided epidemiological (MH, JF), clinical (JR, JH, PS) and statistical (MK) guidance, and assisted in preparation and review of the manuscript. MH was the lead author for sections on Background, Discussion and Conclusions. FS implemented the study, managed research staff and the collection of data, carried out the data mining and all of the statistical analyses, created the Endnote library of references, and prepared the manuscript. FS was the lead author for the Methods and Results sections of the manuscript, and reviewed the Background, Discussion and Conclusions. SR prepared the data files, merging and linking laboratory data with administrative data, assisted with identifying and sampling of the validation population and sample, and reviewed the manuscript. PB prepared the list of variables for data collection, trained the research staff in the data collection techniques from medical notes, and reviewed the manuscript. We thank Dr Christian Gardner, Kim Goodman, Erica John, Nicole Schaefer, Della Isackson and Dr Mira Rimajova for data collection; the Data Linkage Unit, Department of Health of WA for data extraction and linkage; Pathwest, St John of God Pathology and Western Diagnostic Pathology for providing laboratory data; and Dr Chotoo Bhagat (Queen Elizabeth II Medical Centre) and John Blennerhassett (Biochemistry Department, Royal Perth Hospital) for advice on interpretation of cardiac biomarker results. This work was supported by a project grant (353671) from the National Health and Medical Research Council (NHMRC) of Australia. Beaglehole R, Stewart AW, Jackson R, Dobson AJ, McElduff P, D'Este K, Heller RF, Jamrozik KD, Hobbs MS, Parsons R, Broadhurst R: Declining rates of coronary heart disease in New Zealand and Australia, 1983-1993. 
Tunstall-Pedoe H, Kuulasmaa K, Mahonen M, Tolonen H, Ruokokoski E, Amouyel P: Contribution of trends in survival and coronary-event rates to changes in coronary heart disease mortality: 10-year results from 37 WHO MONICA project populations. Monitoring trends and determinants in cardiovascular disease. Salomaa V, Koukkunen H, Ketonen M, Immonen-Raiha P, Karja-Koskenkari P, Mustonen J, Lehto S, Torppa J, Lehtonen A, Tuomilehto J, Kesaniemi YA, Pyorala K: A new definition for myocardial infarction: what difference does it make? Rosamond WD, Chambless LE, Sorlie PD, Bell EM, Weitzman S, Smith JC, Folsom AR: Trends in the sensitivity, positive predictive value, false-positive rate, and comparability ratio of hospital discharge diagnosis codes for acute myocardial infarction in four US communities, 1987-2000. Hammar N, Nerbrand C, Ahlmark G, Tibblin G, Tsipogianni A, Johansson S, Wilhelmsen L, Jacobsson S, Hansen O: Identification of cases of myocardial infarction: hospital discharge data and mortality data compared to myocardial infarction community registers. Mahonen M, Salomaa V, Brommels M, Molarius A, Miettinen H, Pyorala K, Tuomilehto J, Arstila M, Kaarsalo E, Ketonen M, Kuulasmaa K, Lehto S, Mustaniemi H, Niemela M, Palomaki P, Torppa J, Vuorenmaa T: The validity of hospital discharge register data on coronary heart disease in Finland. Jamrozik K, Dobson AJ, Hobbs MS, McElduff P, Ring I, D'Este K, Crome M: Monitoring the incidence of cardiovascular disease in Australia. In Cardiovascular Disease Series No17. Canberra: Australian Institute of Health and Welfare; 2001. 
Luepker RV, Apple FS, Christenson RH, Crow RS, Fortmann SP, Goff D, Goldberg RJ, Hand MM, Jaffe AS, Julian DG, Levy D, Manolio T, Mendis S, Mensah G, Pajak A, Prineas RJ, Reddy KS, Roger VL, Rosamond WD, Shahar E, Sharrett AR, Sorlie P, Tunstall-Pedoe H: Case definitions for acute coronary heart disease in epidemiology and clinical research studies: a statement from the AHA Council on Epidemiology and Prevention; AHA Statistics Committee; World Heart Federation Council on Epidemiology and Prevention; the European Society of Cardiology Working Group on Epidemiology and Prevention; Centers for Disease Control and Prevention; and the National Heart, Lung, and Blood Institute. Am J Epidemiol 1989, 129:655-668. White AD, Folsom AR, Chambless LE, Sharret AR, Yang K, Conwill D, Higgins M, Williams OD, Tyroler HA, The ARIC Investigators: Community surveillance of coronary heart disease in the Atherosclerosis Risk in Communities (ARIC) Study: Methods and initial two years' experience.
Diabetes is the leading cause of renal failure that requires dialysis. The disease generates such a hostile environment that it forces the kidney cells to kill themselves, progressively reducing renal function. A research group from the department of medicine at the Universidad Autónoma de Madrid (UAM) has studied the causes and consequences of the cell suicide of renal cells. Diabetes slowly destroys the kidney up to the point where the renal function has to be taken on by dialysis (artificial kidney) or a transplanted kidney. It is the leading cause of end-stage renal failure that requires dialysis. The destruction of the kidney comes from the loss of its cells, which recent studies have demonstrated to be caused by apoptosis, a process that, for cells, involves death by suicide. Cells commit suicide when their environment does not “please” them, when their surroundings feel hostile or stressful. The Spanish team led by Alberto Ortiz, professor of the department of medicine of the UAM based at the Jiménez Díaz-Capio foundation, has spent years studying the causes and the consequences of kidney cell suicide, specializing in “psycho-cellulology”. The team analyzed the genes related to apoptosis as part of a European collaborative effort (European Renal Biopsy Bank) that studies the expression pattern of genes found in patients suffering from diabetic nephropathy. The affected kidneys exhibited abnormal expression of 112 genes that regulate cell suicide. Among these genes, the Spanish team identified a protein of the Tumour Necrosis Factor (TNF) family, called TRAIL, as the key to the cell suicide in diabetes-affected kidneys. In these kidneys, large quantities of TRAIL can be found that surprisingly do not come from the increased glucose levels that define the disease, but from the inflammation that accompanies the renal damage.
Inflammation and higher glucose levels favour renal damage; the inflammation raises the TRAIL levels while the hyperglycaemia generates a stressful environment that, in the presence of TRAIL, leads to cell suicide. The role played by inflammation in the cell suicide that leads to renal damage suggests that the treatment of diabetic nephropathies requires an attack on multiple fronts, to control the glucose levels while also acting on the renal inflammation and lethal proteins like TRAIL. This study is part of the efforts carried out by the Red de Investigación Renal (RedInRen), financed by the Carlos III Institute, to expose the mechanisms of renal lesions and develop new treatments for renal diseases.
By Gretchen Argast, OSI Pharmaceuticals, LLC and Paul Fricker, MathWorks Epithelial to mesenchymal transition (EMT), a process vital to embryonic development, has been linked to the spread of cancer in adults. As a result, there is increased interest in developing cancer drugs that target EMT in addition to drugs that target cell proliferation and survival. Until recently, measuring how a drug affected one aspect of EMT, cell scattering, was a manual process that involved subjectively assessing the relative closeness of cells in a culture. Researchers at OSI Pharmaceuticals worked with MathWorks consultants to develop an automated system for quantifying the scattering of cells in a sample. Based on MATLAB®, Image Processing Toolbox™, and Statistics Toolbox™, the system measures nucleus-to-nucleus distances of nearest-neighbor cells. The ability to measure scattering is essential to evaluating the efficacy of drugs that may inhibit or reverse EMT because it gives researchers a reliable way to compare the effects of one drug against another. In humans and other vertebrates, there are two basic cell types: epithelial and mesenchymal. Several morphological and functional characteristics differentiate the two cell types. For example, epithelial cells depend on cell-to-cell contact for survival. Mesenchymal cells, in contrast, are characterized by their independence from nearby cells and by their mobility, two requirements for cell scattering. In EMT, cells lose their epithelial traits and acquire mesenchymal traits. EMT is essential for developing embryos because it produces mesenchymal cells that can migrate to form bone, cartilage, and other tissue where needed. In adults, however, EMT is associated with pathologies such as cancer and fibrosis. Because mesenchymal tumor cells are more mobile, and thus more invasive, than epithelial tumor cells, scientists believe that they facilitate metastasis, or the spread of tumor cells. 
EMT also diminishes the effectiveness of chemotherapy treatments that target epithelial cells. OSI researchers have developed pancreatic and lung tumor models and identified a set of ligands, or binding molecules, that drive EMT in these models. Two of these ligands, hepatocyte growth factor (HGF) and oncostatin M (OSM), induced EMT in the models, enabling us to produce samples that demonstrate the cell scattering associated with EMT. The samples are stained so that the nucleus of each cell shows blue in the images captured by our microscopes (Figure 1). To quantify the scattering of the cells, we developed a numerical procedure that uses image processing and statistical analyses. Measuring the spatial density of the cells would be relatively straightforward if the images were completely covered by the cells: We would simply count the number of nuclei in each image and then divide by the total image area. The images that we generate are almost always partially covered, however, making it difficult to estimate the cell density correctly. We decided to develop an alternative approach to quantify the scattering, based on measurements of the distances between the cell nuclei. To analyze the cell images, we used an algorithm consisting of four main steps: thresholding each image to produce a binary mask of the nuclei; locating and measuring all the blobs; sorting the blobs by size and re-segmenting the largest clusters; and recording the nucleus coordinates for statistical analysis. Because the intensity scaling is consistent across all the captured images, we can capture most of the individual blobs using a single hard-coded threshold value. This thresholding procedure produces a binary image in which the cell nucleus is indicated by 1, or white, and its absence is indicated by 0, or black (Figure 2). Using Image Processing Toolbox, we analyzed these black-and-white images to find the locations and sizes (areas) of all the blobs. In some cases, a few cells are so close together that their nuclei appear to be touching one another, and they cannot be distinguished as separate nuclei. To enhance the processing of the images, we sorted the blobs into three categories based on their size.
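The thresholding-and-labeling steps were done with MATLAB's Image Processing Toolbox; an analogous pure-Python sketch (hard-coded threshold, 4-connected flood fill) on a toy intensity grid illustrates the idea:

```python
def label_blobs(image, threshold):
    """Binarise `image` (list of rows of intensities) at a fixed threshold,
    then label 4-connected blobs. Returns (area, (centroid_row, centroid_col))
    per blob, mimicking the locations-and-areas step described above."""
    h, w = len(image), len(image[0])
    mask = [[1 if image[r][c] > threshold else 0 for c in range(w)] for r in range(h)]
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:                      # iterative flood fill
                    rr, cc = stack.pop()
                    pixels.append((rr, cc))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = rr + dr, cc + dc
                        if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] and not seen[nr][nc]:
                            seen[nr][nc] = True
                            stack.append((nr, nc))
                area = len(pixels)
                cy = sum(p[0] for p in pixels) / area
                cx = sum(p[1] for p in pixels) / area
                blobs.append((area, (cy, cx)))
    return blobs

# Toy 5x5 "image": one 4-pixel nucleus and one 1-pixel speck
toy = [[0, 9, 9, 0, 0],
       [0, 9, 9, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 9, 0],
       [0, 0, 0, 0, 0]]
blobs = label_blobs(toy, threshold=5)
```

The area of each blob is what drives the three-way size classification described above (noise, single nucleus, cluster needing re-segmentation).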
Those with areas below a certain size were deemed to be noise or partially occluded cells, and were discarded from the subsequent analysis. Blobs of intermediate size were classified as individual nuclei that had already been successfully segmented. The largest blobs were presumed to be clusters of overlapping cells requiring further analysis. To distinguish the individual nuclei within the larger blobs, the algorithm crops the subregions of the image containing the largest blobs and performs local, adaptive thresholding to more accurately distinguish the individual cells (Figure 3). At the end of the image analysis procedure, the algorithm has identified the location of most of the cell nuclei in the image, and stored this data in an array. The success of the algorithm can be verified visually by overlaying the input images with markers at each measured nucleus location (Figure 4). Once we have processed the images and obtained an array of cell nucleus coordinates, we use basic MATLAB matrix operations to compute the distances between an individual nucleus and all the other nuclei in the cell cluster. To assess the scattering of the cells, we compute the distance between each cell and its nearest neighbor. Each image generates a set of nearest-neighbor distances, with one value for each cell. The distance values computed from the image data are initially measured in pixels, and are converted to microns using a known length scale. MATLAB histograms of these nearest-neighbor distances show clearly that the data fits into meaningful distribution patterns. These patterns reveal distinct differences between each of the four types of cells that we were studying: untreated, HGF-treated, OSM-treated, and HFG+OSM-treated lung cancer cells (Figure 5). These histogram results suggested that the data could be characterized using a statistical distribution. Using Statistics Toolbox we fitted the measured distance values to a series of probability distributions. 
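The nearest-neighbor computation reduces to a per-nucleus minimum over pairwise distances (the article does this with MATLAB matrix operations); a brute-force sketch with the pixel-to-micron conversion, using made-up centroids and scale:

```python
import math

def nearest_neighbor_distances(points, microns_per_pixel=1.0):
    """For each (x, y) nucleus centroid in pixels, return the distance in
    microns to its nearest neighbor. O(n^2) brute force, fine per image."""
    out = []
    for i, (xi, yi) in enumerate(points):
        best = min(
            math.hypot(xi - xj, yi - yj)
            for j, (xj, yj) in enumerate(points) if j != i
        )
        out.append(best * microns_per_pixel)
    return out

# Made-up centroids and length scale: two close nuclei and one isolated one
nuclei = [(0, 0), (3, 4), (100, 100)]
dists = nearest_neighbor_distances(nuclei, microns_per_pixel=0.5)
```

Each image thus yields one distance per cell, and it is the distribution of these values that the histograms in Figure 5 summarise.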
After narrowing our search to asymmetric, continuous distributions, we found through an iterative process that the loglogistic distribution provided the best fit for the nearest-neighbor distance results. In addition to characterizing the scattering of the cells, one of the main objectives of this project was to develop a method for differentiating the degree of scattering produced by the treatment of cell samples with different ligands. To accomplish this, we used MATLAB to compute the mean (μ) and variance (σ) parameters for the loglogistic distribution for each of the four samples (Figure 6). The statistical fitting plots show that the computed values of μ and σ capture distinct differences in the magnitude of cell scattering in the four data sets. Conversely, when these parameters are computed for a given data set, they can be used to identify which ligand (HGF, OSM, or HGF+OSM) was used to treat the original cell sample. The distributions show that either ligand alone induced scattering in the cells, and that the combined ligand treatment resulted in a further increase in scattering. These distributions reflect what we observe qualitatively in the cells after treatment with ligands. From these results we concluded that the mean and variance parameters of the loglogistic distribution fitting of computed nearest-neighbor distances could be used to reliably quantify the scattering of cell nuclei in a given sample. In addition to characterizing the responses of the cells to different ligands, we also looked at the effect of drug treatment on the degree of cell scattering. We computed the loglogistic distributions for samples treated with HGF+OSM that were also treated with increasing concentrations of a drug that blocks the effects of HGF (50 nM to 2 μM) (Figure 7). At concentrations of 500 nM and above, the drug inhibited the effects of HGF and reduced the degree of scattering to one that approximated the effects of OSM by itself.
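Statistics Toolbox fits the loglogistic by maximum likelihood; a quick method-of-moments approximation works too, because the log of a loglogistic variable is logistic, whose variance is s²π²/3. The sample below is synthetic, generated by inverse-CDF sampling, purely to exercise the estimator:

```python
import math, random

def fit_loglogistic(xs):
    """Method-of-moments estimate of the loglogistic parameters (mu, s) on the
    log scale: log X ~ Logistic(mu, s), so mu ~ mean(log x) and
    s ~ std(log x) * sqrt(3) / pi. (MATLAB's fitdist uses MLE instead.)"""
    logs = [math.log(x) for x in xs]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((v - mu) ** 2 for v in logs) / (n - 1)
    s = math.sqrt(var) * math.sqrt(3.0) / math.pi
    return mu, s

# Synthetic "nearest-neighbor distances": loglogistic via the logistic
# inverse CDF, log x = mu + s * log(u / (1 - u))
random.seed(0)
mu_true, s_true = 1.0, 0.2
sample = [math.exp(mu_true + s_true * math.log(u / (1 - u)))
          for u in (random.random() for _ in range(2000))]
mu_hat, s_hat = fit_loglogistic(sample)
```

Recovering (mu, s) this way is enough to compare conditions in the spirit of Figure 6, though MLE is preferable for small samples.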
This type of analysis is essential for determining the optimal dose for a new drug. At the beginning of the EMT quantification project, our goal was to use image analysis techniques with our microscope data to quantify the scattering or density of cells in our samples. After successfully analyzing the basic attributes of the cell nuclei using MATLAB and Image Processing Toolbox, we realized that the resulting data could best be characterized in terms of a statistical distribution. It was easy to transition to a statistical analysis of the data using Statistics Toolbox. MATLAB enabled us to work within a single development environment, from the initial image thresholding and nearest-neighbor distance calculations, through selecting and validating an appropriate statistical distribution, to the final comparison of different ligand dose responses. With a system in place for quantifying the scattering of cells in a sample, OSI researchers now have an objective computational method for measuring the ability of drugs in development to reduce or reverse EMT, and potentially, for increasing the drug’s ability to inhibit cancer metastasis. Gretchen Argast is a Senior Research Scientist at OSI Pharmaceuticals with expertise in developing EMT models and assays for drug discovery research, as well as translational research for more advanced programs. She holds a B.A. in Biological Sciences from the University of Chicago and a Ph.D. in Pathology from the University of Washington. Paul Fricker is a MathWorks Principal Consulting Engineer. Paul has more than 15 years’ experience in signal and image processing, modeling and simulation, and application development. He holds a B.Sc. in Chemistry from Dalhousie University, an M.Sc. in Physics from the University of Toronto, and a Ph.D. in Civil Engineering from Massachusetts Institute of Technology. Published 2012 - 92038v00
Leg pain is defined as a feeling of discomfort or uneasiness along with aching in the leg, varying from mild to severe intensity. Upper leg pain is pain anywhere from the hip to the knee. Lower leg pain is pain extending from the knee to the foot. A pain in the leg could originate in a joint such as the hip, knee, or ankle. When the pain is in the leg joints, it could arise from bones, ligaments, or tendons. It can also arise from injury to muscles or nerves. Venous stasis and phlebitis can also cause pain in the legs. Sometimes the pain can be referred from the back, as in sciatica, where the pain extends down the leg from the lower back or hips, down the thighs and calf muscles. A rare case is phantom-limb pain, when a person feels pain in the part of a limb that has been amputated.

Causes of Leg Pain
- Arthritis – includes osteoarthritis, rheumatoid arthritis, and gout.
- Injury – can lead to damage to bone, ligament, and cartilage, which can cause severe leg pain.
- Sprain – sudden, unnatural movements can cause leg pain.
- Overuse – overuse of a joint can cause bursitis, which ultimately leads to joint pain in the legs.
- Infection – any kind of infection of the joints may result in leg pain.
- Varicose veins – enlarged, distended veins; also cause leg ulcers and itching along with pain.
- Deep vein thrombosis – a condition in which there is a blood clot in the deeper veins of the leg; causes leg pain.
- Venous causes – include phlebitis and venous compression.
- Calcium deficiency – as in osteomalacia, osteoporosis, and rickets, can also cause pain in the long bones of the legs.
- Growing pains – young children often complain of pain in the legs. Such pain is usually due to stretching of bones and tissue as part of growing.
- Pain in the leg muscles can result from calcium or sodium deficiency and also from dehydration. Such pain is often cramping in nature. Vague muscle ache can result from hypothyroidism and myopathy.
- Physical examination – including examination of joints, swelling, dislocation, gait, lymph nodes, movement, etc.
- Blood tests – complete blood count, ESR, rheumatoid factor, uric acid.
- Microscopic examination – of joint fluid or synovial fluid.
- X-ray of the knee – to diagnose fracture, osteoarthritis.
- MRI – to detect ligament rupture and other conditions.
- Mantoux test – to diagnose tuberculosis.
- Other important investigations: venography – to diagnose deep vein thrombosis.
- Duplex ultrasound imaging – helps in obtaining an image of the veins; it can also measure flow in the vessels.
- Arteriography – to diagnose arterial embolism.
- Doppler ultrasound – an important investigation in venous disease.

Treatment of Leg Pain

Allopathic treatment of pain in the legs – includes painkillers, anti-inflammatory drugs, muscle relaxants, and calcium supplements. Anticoagulants, compression bandages, knee caps, and surgery are also applicable in certain cases. Physiotherapy also plays an important role in many cases of joint pain. The exact treatment depends upon the underlying cause of the leg pain.

Homeopathic treatment of leg pain – Homeopathy is one of the most popular holistic systems of medicine. The selection of remedy is based upon the theory of individualization and symptom similarity. This is the only way through which a state of complete health can be regained by removing all the signs and symptoms from which a patient is suffering. The aim of homeopathy is not only to treat leg pain but to address its underlying cause and the individual susceptibility of the patient. For this, the patient's current symptoms, past medical history, and family history are taken into account. There are many homeopathic remedies which cover the symptoms of leg pain and can be selected on the basis of cause, location, sensation, modalities, and extension of the pain. For individualized remedy selection and treatment of leg pain, the patient should consult a qualified homeopathic doctor in person.
Some important remedies are given below for the treatment of leg pain:
- Bryonia alba – pain with inflammation, which is aggravated by movement and relieved by moderate pressure and rest.
- Ledum pal. – an excellent remedy for gout and rheumatism of an ascending nature; better from cold application.
- Rhus tox. – pain aggravated by first movement and damp weather; better from continuous motion.
- Colchicum – pain worse from motion, touch, or mental effort; better from warmth and rest.
- Kalmia lat. – descending type of pain; pain with palpitation of the heart and slow pulse.
- Guaiacum – gouty abscesses of joints; pain relieved by cold bath and cold application.
- Calcarea carb. – arthritic swelling; knee pain, especially in fleshy people; effusion of the knee joint, which is worse from cold.
- Benzoic acid – gouty concretions of the joints; knee pain due to abnormal deposition of uric acid.
- Hypericum – a remedy for rheumatoid arthritis with contracted knee; has an outstanding action on nerve pain.
- Lachesis – rheumatic pain along with swelling; sciatica of the right leg; warmth aggravates in general; intolerance of tight clothing.
- Lycopodium – sciatica pain in the left leg.
- Colocynth – pain that radiates from hip to calf, especially in the left leg, accompanied by numbness and better from warmth.
The Gastrointestinal Tract

You may not have thought of it like this before, but the gastrointestinal tract is technically outside the body. It is basically a long tube with one opening at the mouth and another at the anus. Just like the skin protects the body from the external environment, so too does the GI tract, with respect to everything that is ingested. The main functions of the GI tract are as follows:
- Digests foods.
- Absorbs the products of digestion so they can be converted into energy and proteins.
- Carries nutrients like vitamins and minerals across the intestinal lining, into the bloodstream.
- Contains a major part of the chemical detoxification system of the body.
- Contains antibodies that protect the body against infection.

The healthy gastrointestinal tract absorbs only small molecules like those that are the product of complete digestion. These molecules are the amino acids, simple sugars, fatty acids, vitamins, and minerals that the body requires for all the processes of life to function properly. The intestines, the small intestine in particular, only allow these substances to enter the body because the cells that make up the intestinal wall are tightly packed together. The intestines also contain special proteins called 'carrier proteins' that are responsible for binding to certain nutrients and transporting them through the intestinal wall and into the bloodstream.

So what is Leaky Gut Syndrome?

Leaky Gut Syndrome (LGS) is the name given to a condition in which the ability of the intestinal wall to keep out large and undesirable molecules is reduced. Hence the name, as substances that are normally kept outside the body and within the intestines are "leaking" across the intestinal wall and into the body as a whole. This happens when the spaces between the cells of the intestinal wall become enlarged for various reasons.
Leaky Gut Syndrome is hardly ever tested for or diagnosed by doctors in general practice, but there is a vast amount of research implicating altered permeability of the intestinal wall in a large number of illnesses. To illustrate this, the following definition of leaky gut syndrome is taken from Allergy Induced Autism, a UK-based autism charity. Leaky Gut Syndrome is "an increase in permeability of the intestinal mucosa to luminal macromolecules, antigens and toxins associated with inflammatory degenerative and/or atrophic mucosal damage". To simplify a few of these terms that may be unfamiliar:
- Mucosa – the intestinal wall.
- Lumen (luminal) – the space within the walls of the intestine.
- Macromolecules – large molecules.

What causes Leaky Gut Syndrome?

There are quite a large number of factors that can increase the permeability of the intestinal wall. The most common are:
- Alcohol and caffeine – these irritate the gut wall.
- Candida and gut dysbiosis – caused by antibiotic use, etc.
- Drugs – the worst offenders include NSAIDs (non-steroidal anti-inflammatory drugs), antacids, and pain medications like aspirin and ibuprofen.
- A diet high in refined carbohydrate.
- Environmental contaminants.
- Food additives.
- Insufficient digestive enzymes.
- Chronic stress – stress reduces blood flow to the gut, leaving it unable to repair itself.
- Other gastrointestinal disease.
- Poor liver function, resulting in inflammatory toxins being excreted into the intestines in bile.

Testing for Leaky Gut Syndrome

The same test used by medical researchers to detect changes in intestinal permeability is now available inexpensively to consumers and can be easily completed at home. Basically, the test involves providing urine samples before and after drinking a solution containing the non-metabolised sugars mannitol and lactulose. These are then mailed to the lab in a pre-paid envelope. The lab analyzes the urine samples for levels of mannitol and lactulose.
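The lab report comes back as a percentage recovery of each sugar. As a concrete, purely illustrative way to see how such numbers are read, here is a small Python helper; the cutoff values below are invented placeholders, since real laboratories supply their own reference ranges with the test kit:

```python
def interpret_ip_test(lactulose_recovery, mannitol_recovery,
                      lactulose_cutoff=0.5, mannitol_cutoff=14.0):
    """Toy interpretation of urinary % recovery of the two test sugars.

    The cutoffs are invented placeholders for illustration only;
    they are not real laboratory reference ranges.
    """
    findings = []
    if lactulose_recovery > lactulose_cutoff:
        # lactulose is normally poorly absorbed, so high recovery
        # suggests increased intestinal permeability
        findings.append("elevated lactulose: suggests increased permeability")
    if mannitol_recovery < mannitol_cutoff:
        # mannitol is normally well absorbed, so low recovery
        # suggests malabsorption
        findings.append("low mannitol: suggests malabsorption")
    return findings or ["within placeholder reference ranges"]

print(interpret_ip_test(0.9, 18.0))
```

The two branches mirror the interpretation described next: an abnormally high lactulose recovery points toward increased permeability, while a low mannitol recovery points toward malabsorption.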
Since lactulose is not normally absorbed in significant quantities, an abnormally high result suggests leaky gut syndrome. Mannitol is absorbed by the intestinal cells to some degree, so a low result may indicate malabsorption, but a high result again suggests leaky gut syndrome, especially when combined with high lactulose recovery. For more information see the Intestinal Permeability (IP) Test review page.

What happens when you have Leaky Gut Syndrome?

A leaky gut results in many problems that affect the whole body:

Gastrointestinal symptoms – The most obvious problems resulting from a leaky gut are probably digestive symptoms like bloating, flatulence, and abdominal discomfort.

Large food particles can pass into the bloodstream – The immune system assumes these particles are dangerous foreign material and creates antibodies against them. This leads to a situation where large numbers of different foods set off an immune reaction every time they are eaten. These antibodies may also attack the body's own cells that are structurally similar to the large food molecules. This leads to auto-immune disease.

Nutritional deficiencies – Although you might expect that having a leaky gut would mean you can absorb more nutrients, this is not the case. This is because the 'carrier proteins' that were mentioned earlier are damaged when the gut becomes inflamed and more permeable. This means that the nutrients can't get across the intestinal wall, and nutritional deficiencies result.

Increased absorption of toxins – This places a great strain on the liver, and as detoxification enzymes become depleted, more and more toxins are able to circulate to all parts of the body in the bloodstream. In severe cases leaky gut can lead to liver inflammation and toxic hepatitis. These toxins circulating in the blood can result in any number of symptoms, from foggy thinking to skin rashes, as well as inflammation of various tissues and organs.
Multiple Chemical Sensitivities are the end result of a high toxic load, as the nervous system becomes sensitized.

The gut's immune function is impaired – When the gut wall is inflamed, the antibodies (IgA) that protect the gut are adversely affected. This reduces its ability to fight off potentially pathogenic bacteria, parasites, and yeasts like candida.

Gut microorganisms can enter the body – When the intestinal lining is inflamed, the bacteria that usually reside within the intestine are able to "translocate". This means that they can pass across the gut wall and into the bloodstream, from where they can cause infection anywhere in the body, causing havoc for the immune system.

As you can imagine, all of this can lead to any number of seemingly unrelated symptoms affecting every organ system in the body. Leaky Gut Syndrome has also been linked with having a causative role in a large number of distinct illnesses. Many of these are auto-immune diseases, which means the immune system attacks the body's own cells. Leaky gut syndrome plays a role in these types of illness because it increases immune reactions to food particles; cross-reactivity may then occur, meaning that the immune system attacks body tissues that are chemically similar to the foods to which it has become sensitized. Here are a few of the many diseases in which leaky gut syndrome may have a role:
- Rheumatoid arthritis
- Multiple sclerosis
- Crohn's disease
- Addison's disease

Leaky Gut Syndrome and Environmental Illnesses

Leaky gut syndrome is likely to play a part in all of the environmental illnesses. All of these illnesses are characterized by a high frequency of allergy, symptoms brought on by chemical exposure, subclinical nutritional deficiencies, and gastrointestinal symptoms. The increased toxic load on the body produced by a leaky gut has the general effect of making the nervous and immune systems hyperstimulated.
Neuroimmune dysfunction, or more specifically, neuroimmune hyperactivity, is implicated in all the leading theories about the etiology of environmental illnesses like CFS and fibromyalgia.

Top Leaky Gut Syndrome Articles:
- Leaky Gut Syndrome – Professor Keith Scott-Mumby
- Leaky Gut Syndrome: The Intestinal Terrorist – Gloria Gilbère
- Leaky Gut Syndromes: Breaking the Vicious Cycle – Dr. Leo Galland
- Leaky Gut Syndrome: A Modern Epidemic, Part I – Jake Paul Fratkin, OMD
- Leaky Gut Syndrome: A Modern Epidemic, Part II – Jake Paul Fratkin, OMD
Men who have previously been unable to conceive could now have new hope after scientists made the startling discovery. The men used in the test were unable to produce enough sperm to conceive - making them infertile but scientists were able to take skin cells and develop early stage sperm cells from them. Researchers believe the new discovery will provide a breakthrough for male infertility treatments. Lead researcher Dr Reijo Pera, from the Institute for Stem Cell Biology and Regenerative Medicine at Stanford University, US, said: "Our results are the first to offer an experimental model to study sperm development. Therefore, there is potential for applications to cell-based therapies in the clinic, for example, for the generation of higher quality and numbers of sperm in a dish. "It might even be possible to transplant stem-cell-derived germ cells directly into the testes of men with problems producing sperm." Infertility affects 10% to 15% of couples and genetic causes of the problem are surprisingly prevalent among men, say the scientists. The most common defect is the spontaneous loss of key genes on the male Y chromosomes, but what triggers it at the molecular level is not well understood. The three infertile men taking part in the study had missing regions of Y chromosome DNA associated with the production of few or no sperm. Fibroblast connective tissue cells from skin samples taken from the men were genetically engineered to transform them into induced pluripotent stem (iPS) cells. These are adult cells whose developmental clock has been turned back so they assume the properties of embryonic stem cells, including the ability to grow into virtually any kind of body tissue. Sabotaged by the Y chromosome genetic defect, the iPS cells struggled to form sperm in a laboratory dish. But after being transplanted into the testes of mice, they turned into sperm cell precursors - albeit fewer than the number produced by "healthy" iPS cells. 
The findings, published in the journal Cell Reports, indicate that Y chromosome infertility occurs relatively late in the maturing process of sperm cells. Dr Reijo Pera added: "Our studies suggest that the use of stem cells can serve as a starting material for diagnosing germ cell defects and potentially generating germ cells. This approach has great potential for treatment of individuals who have genetic/idiopathic (unknown) causes for sperm loss or for cancer survivors who have lost sperm production due to gonadotoxic treatments."

US scientists from the University of Pittsburgh showed in 2012 that it was possible to generate sperm cell precursors from human iPS cells, raising the prospect of restoring a man's fertility. However, they did not start out with adult skin cells from genetically infertile men.
Heart failure, also called congestive heart failure, is a condition in which the heart cannot pump enough oxygenated blood to meet the needs of the body's other organs. The heart keeps pumping, but not as efficiently as a healthy heart. Usually, the loss in the heart's pumping action is a symptom of an underlying heart problem. Heart failure may result from any/all of the following:
- heart valve disease - caused by past rheumatic fever or other infections
- high blood pressure (hypertension)
- infections of the heart valves and/or heart muscle (i.e., endocarditis)
- previous heart attack(s) (myocardial infarction) - scar tissue from previous attacks may interfere with the heart muscle's ability to work normally
- coronary artery disease - narrowed arteries that supply blood to the heart muscle
- cardiomyopathy - or another primary disease of the heart muscle
- congenital heart disease/defects (present at birth)
- cardiac arrhythmias (irregular heartbeats)
- chronic lung disease and pulmonary embolism
- drug-induced heart failure
- excessive sodium (salt) intake
- hemorrhage and anemia

Heart failure interferes with the kidney's normal function of eliminating excess sodium and waste from the body. In congestive heart failure, the body retains more fluid, resulting in swelling of the ankles and legs. Fluid also collects in the lungs, resulting in shortness of breath.

The following are the most common symptoms of heart failure. However, each individual may experience symptoms differently. Symptoms may include:
- shortness of breath during rest, exercise, or lying flat
- weight gain
- visible swelling of the legs and ankles (due to a build-up of fluid), and, occasionally, the abdomen
- fatigue and weakness
- loss of appetite and nausea
- persistent cough - often produces mucus or blood-tinged sputum
- reduced urination

The severity of the condition and symptoms depends on how much of the heart's pumping capacity has been lost.
The symptoms of heart failure may resemble other conditions or medical problems. Always consult your physician for a diagnosis.

In addition to a complete medical history and physical examination, diagnostic procedures for heart failure may include any, or a combination, of the following:
- chest x-ray - a diagnostic test which uses invisible electromagnetic energy beams to produce images of internal tissues, bones, and organs onto film.
- echocardiogram (also called echo) - a noninvasive test that uses sound waves to produce a study of the motion of the heart's chambers and valves. The echo sound waves create an image on the monitor as an ultrasound transducer is passed over the heart.
- electrocardiogram (ECG or EKG) - a test that records the electrical activity of the heart, shows abnormal rhythms (arrhythmias or dysrhythmias), and detects heart muscle damage.
- BNP testing - B-type natriuretic peptide (BNP) is a hormone released from the ventricles in response to increased wall tension (stress) that occurs with heart failure. BNP levels rise as wall stress increases. BNP levels are useful in the rapid evaluation of heart failure.

Specific treatment for heart failure will be determined by your physician based on:
- your age, overall health, and medical history
- extent of the disease
- your tolerance for specific medications, procedures, or therapies
- expectations for the course of the disease
- your opinion or preference

The cause of the heart failure will dictate the treatment protocol established. If the heart failure is caused by a valve disorder, then surgery is usually performed. If the heart failure is caused by a disease, such as anemia, then the disease is treated. And, although there is no cure for heart failure due to a damaged heart muscle, many forms of treatment have proven to be successful. The goal of treatment is to improve a person's quality of life by making the appropriate lifestyle changes and implementing drug therapy.
Treatment of heart failure may include:
- controlling risk factors:
  - losing weight (if overweight)
  - restricting salt and fat from the diet
  - stopping smoking
  - abstaining from alcohol
  - proper rest
  - controlling blood sugar if diabetic
  - limiting fluids
- medication, such as:
  - angiotensin converting enzyme (ACE) inhibitors - to decrease the pressure inside the blood vessels, or angiotensin II receptor blockers if ACE inhibitors are not tolerated
  - diuretics - to reduce the amount of fluid in the body
  - vasodilators - to dilate the blood vessels and reduce workload on the heart
  - digitalis - to increase heart strength and control rhythm problems
  - inotropes - to increase the pumping action of the heart
  - antiarrhythmia medications - to keep the rhythm regular and prevent sudden cardiac death
  - beta-blockers - to reduce the heart's tendency to beat faster by blocking specific receptors on the cells that make up the heart
  - aldosterone blockers - to block the effects of aldosterone, which causes sodium and water retention
- biventricular pacing/cardiac resynchronization therapy - a new type of pacemaker that paces both sides of the heart simultaneously to coordinate contractions and improve pumping ability. Heart failure patients are potential candidates for this therapy.
- implantable cardioverter defibrillator - a device similar to a pacemaker that senses when the heart is beating too fast and delivers an electrical shock to convert the fast rhythm to a normal rhythm
- heart transplantation
- ventricular assist devices (VADs) - mechanical devices used to take over the pumping function for one or both of the heart's ventricles, or pumping chambers. A VAD may be necessary when heart failure progresses to the point that medications and other treatments are no longer effective.
Illustrated Encyclopedia of Human Anatomic Variation: Opus V: Skeletal Systems: Upper Limb
Ronald A. Bergman, PhD
Adel K. Afifi, MD, MS
Ryosuke Miyauchi, MD
Peer Review Status: Internally Peer Reviewed

The scapular notch is frequently bridged by bone rather than a ligament (5% of cases studied), converting it into a foramen, which is normal in some animals. Accessory notches may be present; one is frequently found on the inferior angle. The acromion may fail to unite. In about 5% of individuals (more commonly males), the separate part (os acromiale) is on the right side. Fascicles of the subclavius muscle may be inserted onto the coracoid process by passing through the clavipectoral fascia. The tendon of m. pectoralis minor, in part (15%) or entirely (1%), may pass over the coracoid process to insert elsewhere. As a result, it may produce a characteristic groove on the superior part of the process. A bursa intervening between the tendon and bone may communicate with the shoulder joint (Seib). The coracoid process may exist as a separate bone.

Sulcus for Circumflex Scapular Artery

One hundred sixty-seven (14.5%) of a total of 1,152 scapulae from the dissecting room exhibited no sulcus for the circumflex scapular artery. This contrasts with the observations of Kajava, who found the sulcus absent more often than present in Finns: in 130 scapulae he found it present 47 times on the left side and 34 times on the right. Vallois reported a somewhat higher incidence of the vascular sulcus; in French scapulae he found it in 64% on the right side and 61.7% on the left side, absent in 35 to 40%.

Scapular Foramina

Anomalous scapular foramina are considered rare; Gruber (1871, 1877) described only three cases. In 180 scapulae of known age and sex, Vallois found only one, produced by fracture. In his collection of 795, he found seven scapulae with foramina. Gray studied 1,151 scapulae, with 28 possessing foramina. Three of 87 Indian scapulae had foramina.
Gruber (1864, 1872) reported that sometimes there is a protuberance located on the scapula related to the "bursa mucosa anguli superioris scapulae".

Fontan described a pair of scapulae which possessed two costal facets each. They articulated with the posterior surfaces of the third and seventh ribs, respectively, and had associated with them both a capsule and a synovial membrane. Kajava reported four instances of costal facets. Vallois found 13 costal facets in 180 (7.2%) scapulae. Gray found costal facets in 64 of 1,152 (5.55%) scapulae. Only two of 87 Indian scapulae had costal facets, at the superior angle of the left scapula.

The conversion of the suprascapular notch into a foramen as a result of the ossification of the supraspinous ligament was found in three of 60 (5%) scapulae by Poirier and Charpy. In 133 Finnish scapulae studied by Kajava the foramen was present only twice (1.5%). Vallois found the foramina to occur 13 times in 200 (6.5%) scapulae of Frenchmen. In a second study Vallois reported that Italian scapulae had foramina in 6.1%, and in a series of scapulae from various sources the incidence varied from 0% to 3.3%. Gray found foramina in 73 of 1,151 scapulae (6.34%). No suprascapular foramina were found in 87 Indian scapulae.

Shape of Acromion

The shapes of the acromial processes have been classified (Macalister 1893) as to whether they were falciform, triangular, quadrangular, or intermediate in form. Of 1,080 scapulae, 507 (46.9%) possessed acromions which could be classified as falciform. Five hundred forty-four of the 1,080 were left scapulae, and among these the falciform type of acromion was found 277 times. The 536 right scapulae had falciform acromions 230 times. These percentages are higher than those found by Kajava, who found the falciform type to occur in 5.7% of 121 scapulae, and of Vallois, who reported 14% of 157 French scapulae to have acromial processes of falciform shape.
Three hundred thirty-four of the 1,080 acromial processes were triangular. Of these, 137 were on the left side and 167 were on the right side. Gray's figures are again somewhat higher than those of Kajava (8.3%) and of Vallois (19.7%). Triangular acromial processes occurred in both bones of a pair 58 times in a total of 334 paired scapulae.

Among the 1,080 scapulae, Gray found 214 (19.8%) acromial processes that he classified as quadrangular. One hundred five of the 544 left acromions were of this type, as were 109 of the 536 from the right side. These results of Gray do not correspond to those of Kajava, who found quadrangular acromions in 55.8%, or to those of Vallois, 26.1%.

An intermediate or non-characteristic shape of the acromial processes was evident in 55 (5.1%) of 1,080 scapulae. Both Kajava and Vallois assigned an intermediate or non-characteristic shape to a higher percentage of acromions: 30.6% and 40.1%, respectively.

All of the 80 Indian scapulae Gray examined had acromial processes that could be classified. There were 30 (37.5%) falciform, 49 (61.2%) triangular, and 1 (1.25%) quadrangular.

Separate Acromial Bones

Separate acromial bones have been reported by Gruber (1863), LaGrange (1882), Poirier (1887), Struthers (1895, 1896), Neumann (1918), and Gray (1942). These were descriptive studies. Symington (1900) found 5 separate acromial bones in 40 subjects (6.25%). Vallois (1925) studied 235 scapulae and found separate acromial processes in 2.1%. In 1932, Vallois reported separate acromial processes occurring 19 times in 681 scapulae (2.7%). Gray (1942) reported 36 separate acromial bones in 1,086 scapulae. He also reported agreement with Vallois that separate acromial processes occur twice as frequently unilaterally as bilaterally. Gray also reported that 83 Indian scapulae had 3 unattached acromial processes: twice in 40 right scapulae and once in 43 left scapulae.
Facets on Interior Surface of Acromion

Gray (1942) reported 240 (22.12%) scapular facets in 1,085 specimens. One hundred eleven facets from 547 specimens were on left scapulae, and 129 facets were on 538 right scapulae. Of 334 pairs, 29 left and 40 right scapulae showed facets on one side alone. Forty-six pairs showed facets on both the left and right sides. The facets of 22 pairs were identical with each other in size and other characteristics. Among 80 Indian scapulae, acromial facets were found 5 times: twice in 38 right scapulae and three times in 42 left scapulae.

Shape of the Glenoid Fossa

The shapes of the glenoid fossae were classified on the basis of whether they were pyriform, round, oval, or unclassifiable. The largest number of scapulae examined exhibited pyriform glenoid fossae (1,062 of 1,149; 92.4%). Round glenoid fossae were found in only 5 of 1,149 (0.4%). In 78 of 1,149 (6.8%) scapulae the glenoid fossae were oval in form. Only 4 of 1,149 (0.3%) could not be classified as pyriform, round, or oval. Gray (1942) reported that the glenoid fossae were pyriform in 86 of 87 Indian scapulae. One left scapula showed an oval type.

Notch of Glenoid Lip

Kajava (1924) found the notch of the glenoid lip to be absent in 10.3% of 117 scapulae of Finns. In 180 French scapulae, Vallois (1932) reported the notch present in all but 7. Gray (1942) found the notch absent from the glenoid lip 265 times in 1,150 scapulae (23.04%) from the dissecting room. He also reported that among the 87 Indian scapulae, 15 showed no notch of the glenoid lip.

Gray states that, according to Graves' (1910) classification of the vertebral border, among 1,151 scapulae 706 were convex, 239 were straight, 114 were concave, and two were unclassifiable. Among 580 left scapulae the convex vertebral border was present 363 times, the straight 162 times, and the concave 54 times. Among 87 Indian scapulae the convex vertebral border was noted 77 times, the straight 9 times, and the concave once.
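Several of the incidence figures quoted in these sections can be recomputed directly from the raw counts, which makes a handy consistency check when transcribing data of this kind. A short Python illustration (the dictionary below simply restates counts already given in the text):

```python
# Recompute Gray's (1942) incidence percentages from his raw counts
counts = {
    "suprascapular foramina":      (73, 1151),    # reported as 6.34%
    "acromial facets":             (240, 1085),   # reported as 22.12%
    "pyriform glenoid fossae":     (1062, 1149),  # reported as 92.4%
    "notch of glenoid lip absent": (265, 1150),   # reported as 23.04%
}

for name, (k, n) in counts.items():
    pct = 100 * k / n
    print(f"{name}: {k}/{n} = {pct:.2f}%")
```

Each recomputed value matches the percentage reported in the text to the stated precision.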
A plate of bone extending from the medial margin of the scapula to the vertebral column has been reported (Ingersoll).

Incidences in per cent among paired scapulae from the dissecting room:

Characteristic                                      Common to both   Left only   Right only
Absence of muscular cristae                               1.1           1.9          0.5
Absence of sulcus for circumflex scapular artery          5.7           6.5          9.2
Anomalous scapular foramina                               0.3           1.9          2.7
Presence of costal facets                                 2.2           5.4          3.0
Presence of suprascapular foramina                        3.5           4.6          3.2
Shape of acromial process:
    Falciform                                            33.8          18.3         10.2
    Triangular                                           17.4           6.6         13.2
    Quadrangular                                         11.7           8.1          7.5
    Intermediate                                          2.4           3.0          3.6
Separate acromial bones                                   1.2           0.9          1.5
Facets on interior surface of acromion                   13.8           8.7         12.0
Shape of glenoid fossa:
    Pyriform                                             88.1           2.4          3.2
    Round                                                 0.0           0.8          0.3
    Oval                                                  2.2           3.0          5.1
    Unclassifiable                                        0.3           0.0          0.0
Absence of notch of glenoid lip                          11.1           9.5         15.7
Reactions at margins of glenoid fossae                   20.3           8.7          9.8

Ossification of Acromion Process

Various Types of Acromion Process

Blanchard, M. (1888) Anomalie vertébrale. Soc. Biol. Comptes Rendus Hebdomadaires des Séances et Mémoires 40:772-773.
Fischer, H. (1927) Quelques considérations sur la morphologie de l'omoplate. Echancrure coracoidienne transformée en un canal par un pont osseux (origine congénitale). Assoc. Anatomistes Comptes Rendus 22:95-98.
Fontan, C. (1912) Articulations scapulocostales. Bull. Soc. Anat., Paris. 48:182-192.
Graves, W.W. (1910) The scaphoid scapula: A frequent anomaly in development of heredity, clinical and functional significance. Med. Rec. 78:861-873.
Graves, W.W. (1921) The types of scapulae.
Am. J. Phys. Anthropol. 4:111-128.
Graves, W.W. (1922) Age changes in the scapula. Am. J. Phys. Anthropol. 5:21-34.
Graves, W.W. (1924) The relation of scapular types to problems of human heredity, longevity, morbidity and adaptability in general. Arch. Int. Med. 34:1-26.
Gray, D.J. (1942) Variations in the human scapulae. Am. J. Phys. Anthropol. 29:57-72.
Gruber, W. (1863) Über die Arten der Akromialknochen und accidentellen Akromialgelenke. Arch. f. Anat. Physiol. u. Wissen. Med. 1863:373-393.
Gruber, W. (1864) Die Bursae mucosae der inneren Achselhölenwand. Arch. f. Anat. Physiol. u. Wissen. Med. 1864:358-366.
Gruber, W. (1871) Über ein congenitales Loch im unteren Schulterblattwinkel über dessen Epiphyse. Arch. Anat. Physiol. Wissen. Med. 1871:300-304.
Gruber, W. (1872) Über einen fortsatzartigen, cylindrischen Höcker an der Vorderfläche des Angulus superior der Scapula. Arch. Pathol. Anat. Physiol. Klin. Med. 56:425-426.
Gruber, W. (1877) Zwei Scapulae mit je einem congenitalen Loche und eine Scapula mit einem congenitalen Fortsatze von zwei männlichen Skeletten. Arch. Pathol. Anat. Physiol. Klin. Med. 69:387-391.
Günsel, E. (1953) Ein grosser Processus styloideus an der Lendenwirbelsäule. Fortschr. Röntgenstr. 79:245-246.
Hrdlicka, A. (1942) The scapula: Visual observations. Am. J. Phys. Anthropol. 29:73-94.
Hrdlicka, A. (1942) The adult scapula: Additional observations and measurements. Am. J. Phys. Anthropol. 29:363-415.
Ingersoll, R.E. (1945) Congenital elevation of the scapulae with bilateral omovertebral bones. New York J. Med. 45:1462-1463.
Kajava, Y. (1924) Über den Schultergürtel der Finnen. Ann. Acad. Sci. Fenn., Series A. 21(5):1-69.
Kuhns, J.G. (1945) Variations in the vertebral border of the scapula: Their relation to muscular function. Physiotherapy Res. 25:207-210.
LaGrange, -. (1882) Anomalie dans le squelette de l'épaule droite. Ossification indépendante de l'acromion. Bulletins et Mém. de la Société Anatomique de Paris LVII(6):339-340.
Macalister, A. (1893) Notes on the acromion. J. Anat. Physiol. 27:245-251.
Miessen, E. (1936-37) Ein Fall von doppelseitiger Gelenkbildung zwischen Clavicula und Processus coracoides. Anat. Anz. 83:392-394.
Neumann, W. (1917-18) Über das "Os acromiale." Fortschr. Röntgenstr. 25:180-191.
De Neureiter, F. (1924) Contributions à l'étude de l'omoplate scaphoïde. Soc. Biol. Comptes Rendus Hebdomadaires des Séances et Mémoires. 90:1123-1124.
Olivier, G. and R. Raou. (1952) La facette sous-acromiale. Assoc. Anatomistes Comptes Rendus 39:747-750.
Owen, F. (1953) Bilateral glenoid hypoplasia: Report of five cases. J. Bone Joint Surg. (Br.) 35:262-267.
Poirier, P. (1887) Os acromial. Bull. Soc. Anat., Paris. 62:881-882.
Poirier, P. and A. Charpy. (1911) Traité d'Anatomie Humaine. 3rd ed. Paris.
Ravelli, A. (1956) Persistierende Apophyse am Proc. coracoides. Fortschr. Röntgenstr. 84:500-502.
Schär, W. and C. Zweifel. (1936) Os acromiale und seine klinische Bedeutung. Beitr. Klin. Chir. 164:101.
Schlyvitch, B. (1937-38) Über den Articulus coracoclavicularis. Anat. Anz. 85:89-93.
Seib, G.A. (1938) The m. pectoralis minor in American Whites and American Negroes. Am. J. Phys. Anthropol. 23:389-419.
Struthers, J. (1895-1896) On separate acromion process simulating fracture. Edinburgh Med. J. 41:900-908, 1088-1104; 42:97-114, 289-297.
Symington, J. (1899) Separate acromion process. J. Anat. Physiol. 34:287-294.
Vallois, H.V. (1925) L'os acromial dans les races humaines. L'Anthropologie, Paris. 35:977-122.
Vallois, H.V. (1926) Variations de la cavité glenoïde de l'omoplate. Soc. de Biol., Comptes Rendus Hebdomadaires des Séances et Mémoires. 94:559-560.
Vallois, H.V. (1926a) Les anomalies de l'omoplate chez l'homme. Bull. Soc. Anthrop., Paris. 7:20-37.
Vallois, H.V. (1926b) Variations de l'echancrure coracoidienne de l'omoplate. Ann. Anat. Pathol. 3:411-413.
Vallois, H.V. (1932) L'omoplate humaine. Bull. Soc. Anthrop., Paris. 3:3-153.
Vallois, H.V. (1932) L'omoplate humaine. Bull.
et Mém. de la Soc. d'Anthrop. de Paris. 7:16-100.
Vallois, H.V. (1946) L'omoplate humaine. Étude anatomique et anthropologique. Bull. et Mém. de la Soc. d'Anthrop. de Paris.
A study published on the British Medical Journal website adds to the evidence that certain non-oral hormonal contraceptives (e.g. skin patches, implants and vaginal rings) carry a higher risk of serious blood clots (known as venous thromboembolism) than others. The findings suggest that some women should switch from a non-oral product to a contraceptive pill to help reduce their risk.

Several studies have assessed the risk of venous thrombosis (a collective term for deep vein thrombosis and pulmonary embolism) in women using oral contraceptive pills, but few studies have assessed the risk in users of non-oral hormonal contraceptives. These products release hormones into the body more continuously to prevent pregnancy.

A team, led by Professor Øjvind Lidegaard at the University of Copenhagen, reviewed data on non-oral hormonal contraceptive use and first ever venous thrombosis in all Danish non-pregnant women aged between 15 and 49 years from 2001 to 2010. All the women had no record of either blood clots or cancer before the study began. Several factors that could affect the results, including age and education level, were taken into account.

The results are based on 9,429,128 observation years, during which 3,434 confirmed diagnoses of first ever venous thrombosis were recorded. The risk of venous thrombosis among women aged 15-49 who did not use any type of hormonal contraception was on average two events per 10,000 exposure years. Women taking a combined oral contraceptive pill containing the hormone levonorgestrel had a three times increased risk (6.2 events per 10,000 exposure years). Compared with non-users of the same age, women who used a skin patch had an eight times increased risk (9.7 events per 10,000 exposure years), while women who used a vaginal ring had a 6.5 times increased risk (7.8 events per 10,000 exposure years).
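The rates quoted here are incidence rates per 10,000 exposure (woman-)years. A hedged sketch of the arithmetic follows; the counts are taken from the article, `rate_per_10k` is an illustrative name, and the published risk ratios are age-adjusted, so a crude ratio only approximates them:

```python
# Incidence-rate arithmetic behind the figures quoted in the article.

def rate_per_10k(events, exposure_years):
    """Incidence rate expressed per 10,000 exposure years."""
    return 10_000 * events / exposure_years

# Whole cohort: 3,434 first venous thromboses over 9,429,128 observation years.
overall = rate_per_10k(3434, 9_429_128)
print(round(overall, 2))  # 3.64 events per 10,000 exposure years

# Levonorgestrel-pill users (6.2 per 10,000) vs. non-users (about 2 per 10,000):
print(round(6.2 / 2.0, 1))  # 3.1 -- consistent with the "three times" figure
```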
Use of a progestogen-only subcutaneous implant carried a slightly increased risk, while use of a progestogen-only intrauterine device did not confer any risk, and may even have a protective effect, say the authors. Unlike combined pills, no reduction in risk was seen with long-term use of a patch or a vaginal ring.

Based on these findings, the authors calculated that 2,000 women using a vaginal ring and 1,250 women using a skin patch would need to shift to a combined pill containing levonorgestrel to prevent one event of venous thrombosis in one year.

- Ø. Lidegaard, L. H. Nielsen, C. W. Skovlund, E. Lokkegaard. Venous thrombosis in users of non-oral hormonal contraception: follow-up study, Denmark 2001-10. BMJ, 2012; 344 (may10 3): e2990 DOI: 10.1136/bmj.e2990
Genomes and Genes

Summary: A plant genus of the family ORCHIDACEAE that contains dihydroayapin (COUMARINS) and phenanthraquinones.

Publications: 173 found, 100 shown here

- [rDNA ITS sequencing of Herba Dendrobii (Huangcao)] H Xu, Department of Pharmacognosy, China Pharmaceutical University, Nanjing 210038, China. Yao Xue Xue Bao 36:777-83. 2001. ...the utility of ITS sequences in molecular authentication of Herba Dendrobii (Huangcao) and the phylogeny of Dendrobium. METHODS: The ITS gene fragment was amplified using a pair of primers...
- [Suspension culture of protocorm-like bodies from the endangered medicinal plant Dendrobium huoshanense] Jiang ping Luo, College of Biotechnology and Food Engineering, Hefei University of Technology, Hefei 230009, Anhui, China. Zhongguo Zhong Yao Za Zhi 28:611-4. 2003. To investigate the characteristics of growth, and water-soluble polysaccharide and total alkaloid accumulation, in protocorm-like bodies (PLBs) of Dendrobium huoshanense in a liquid culture system.
- [Descriptions and microscopy identification of Ephemerantha fimbriatum] Xiaobin Guan, Zhuhai Hospital, Guangdong Hospital of TCM, Zhuhai 519015. Zhong Yao Cai 27:636-8. 2004. ...In this paper, the descriptions and microscopy characters of Ephemerantha fimbriatum were reported...
- [Effects of four species of endophytic fungi on the growth and polysaccharide and alkaloid contents of Dendrobium nobile] Xiao mei Chen, Institute of Medicinal Plant Development, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100094, China. Zhongguo Zhong Yao Za Zhi 30:253-7. 2005. To study the effects of four species of endophytic fungi on the growth and polysaccharide and alkaloid contents of cultured Dendrobium nobile.
- [A comparison of tissue formation and the content of polysaccharide between wild and cultured Dendrobium candidum] Jun an Fan, Chongqing University of Medical Science, Chongqing 400016, China. Zhongguo Zhong Yao Za Zhi 30:1648-50, 1659.
2005. To compare the tissue formation and the content of polysaccharide between the wild Dendrobium candidum and the cultured ones and to find any existing differences.
- Identification of Dendrobium species by a candidate DNA barcode sequence: the chloroplast psbA-trnH intergenic region. Hui Yao, Institute of Medicinal Plant Development, Chinese Academy of Medical Sciences, Beijing 100193, PR China. Planta Med 75:667-9. 2009. ...In this study, we hypothesize that the psbA-trnH spacer regions are also effective barcodes for Dendrobium species...
- Floral organ identity genes in the orchid Dendrobium crumenatum. Yifeng Xu, Department of Biological Sciences, Faculty of Science, National University of Singapore, 10 Science Drive 4, 117543 Singapore. Plant J 46:54-68. 2006. ...mechanisms underlying orchid flower development, we isolated candidates for A, B, C, D and E function genes from Dendrobium crumenatum. These include AP2-, PI/GLO-, AP3/DEF-, AG- and SEP-like genes...
- Structure and bioactivity of the polysaccharides in medicinal plant Dendrobium huoshanense. Yves S Y Hsieh, The Genomics Research Center, Academia Sinica, Taipei 115, Taiwan. Bioorg Med Chem 16:6054-68. 2008. Detailed structures of the active polysaccharides extracted from the leaf and stem cell walls and mucilage of Dendrobium huoshanense are determined by using various techniques, including chromatographic, spectroscopic, chemical, and ...
- Phenanthrenes from Dendrobium nobile and their inhibition of the LPS-induced production of nitric oxide in macrophage RAW 264.7 cells. Ji Sang Hwang, College of Pharmacy, Chungbuk National University, Cheongju, Republic of Korea. Bioorg Med Chem Lett 20:3785-7. 2010. Bioactivity-guided isolation of the methanol extract of the stems of Dendrobium nobile yielded a new phenanthrene together with nine known phenanthrenes and three known bibenzyls...
- Antimicrobial activity and biodiversity of endophytic fungi in Dendrobium devonianum and Dendrobium thyrsiflorum from Vietnam. Yong mei Xing, Institute of Medicinal Plant Development, Chinese Academy of Medical Sciences and Peking Union Medical College, No 151, Malianwa North Road, Haidian District, Beijing 100193, People's Republic of China. Curr Microbiol 62:1218-24. 2011. Endophytic fungi are rich in orchids and have great impacts on their host plants. 53 endophytes (30 isolates from Dendrobium devonianum and 23 endophytic fungi from D...
- Bioactive bibenzyl derivatives and fluorenones from Dendrobium nobile. Xue Zhang, Research Institute of Tsinghua University in Shenzhen, Key Laboratory for New Drugs Research of Traditional Chinese Medicine in Shenzhen, Shenzhen 518057, People's Republic of China. J Nat Prod 70:24-8. 2007. Bioassay-guided fractionation of the 60% ethanol extract of the stems of Dendrobium nobile using the DPPH assay led to the isolation of two new bibenzyl derivatives, nobilin D (1) and nobilin E (2), and a new fluorenone, nobilone (3), ...
- Inhibitory effects of Dendrobium alkaloids on memory impairment induced by lipopolysaccharide in rats. Yanfei Li, Department of Pharmacology, Zunyi Medical College, Zunyi, PR China. Planta Med 77:117-21. 2011. Dendrobium alkaloids (DNLA), extracted from Dendrobium nobile Lindl., whose botanical name is Dendrobium moniliforme, Orchidaceae family, were studied for their effect on lipopolysaccharide (LPS)-induced memory impairment in rats...
- Denbinobin, a phenanthrene from Dendrobium nobile, inhibits invasion and induces apoptosis in SNU-484 human gastric cancer cells. Jae In Song, College of Pharmacy, Duksung Women's University, 419 Ssangmun dong, Seoul 132 714, Republic of Korea. Oncol Rep 27:813-8. 2012. Dendrobium nobile is widely used as an analgesic, an antipyretic, and a tonic to nourish the stomach in traditional medicine...
- Isolation and characterization of chalcone synthase gene isolated from Dendrobium Sonia Earsakul. W Pitakdantham, Center for Agricultural Biotechnology CAB, Kasetsart University, Kamphang Saen Campus, Nakhon Pathom 73140, Thailand. Pak J Biol Sci 13:1000-5. 2010. ...isolate and characterize chalcone synthase gene in the anthocyanin biosynthetic pathway during flower development of Dendrobium Sonia Earsakul...
- Comparative molecular cytogenetics of major repetitive sequence families of three Dendrobium species (Orchidaceae) from Bangladesh. Rabeya Begum, Department of Botany, University of Dhaka, Dhaka 1000, Bangladesh. Ann Bot 104:863-72. 2009. Dendrobium species show tremendous morphological diversity and have broad geographical distribution...
- Conversion of protocorm-like bodies of Dendrobium huoshanense to shoots: the role of polyamines in relation to the ratio of total cytokinins and indole-3-acetic acid. Ying Wang, School of Biotechnology and Food Engineering, Hefei University of Technology, Hefei 230009, People's Republic of China; College of Life Science, Hebei University, Baoding 071002, People's Republic of China. J Plant Physiol 166:2013-22. 2009. In the present paper, a correlation between enhanced conversion of protocorm-like bodies (PLBs) of Dendrobium huoshanense to shoots by free polyamines (PAs) and changes in the levels of endogenous hormones is described...
- Interaction between a dark septate endophytic isolate from Dendrobium sp. and roots of D. nobile seedlings. Xiao Qiang Hou, Institute of Medicinal Plant Development, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100094, China. J Integr Plant Biol 51:374-81. 2009. Interactions between an isolate of dark septate endophytes (DSE) and roots of Dendrobium nobile Lindl. seedlings are reported in this paper. The isolate was obtained from orchid mycorrhizas on Dendrobium sp. in subtropical forest...
- Development and characterization of 110 novel EST-SSR markers for Dendrobium officinale (Orchidaceae). Jiang Jie Lu, Zhejiang Provincial Key Laboratory for Genetic Improvement and Quality Control of Medicinal Plants, Hangzhou Normal University, Hangzhou 310018, People's Republic of China. Am J Bot 99:e415-20. 2012.
- Anti-metastatic activities of bibenzyls from Dendrobium pulchellum. Pithi Chanvorachote, Faculty of Pharmaceutical Sciences, Chulalongkorn University, Bangkok 10330, Thailand. Nat Prod Commun 8:115-8. 2013. Our investigation of the stem of Dendrobium pulchellum resulted in the isolation of four known bibenzyls, chrysotobibenzyl (1), chrysotoxine (2), crepidatin (3) and moscatilin (4)...
- Identification of Dendrobium species by dot blot hybridization assay. Hong Xu, The MOE Key Laboratory for Standardization of Chinese Medicines and The SATCM Key Laboratory for New Resources and Quality Evaluation of Chinese Medicines, Institute of Chinese Materia Medica, Shanghai University of Traditional Chinese Medicine, China. Biol Pharm Bull 33:665-8. 2010. A method was developed for rapid identification of Dendrobium species by a dot blot hybridization assay...
- Dendrobium candidum extract increases the expression of aquaporin-5 in labial glands from patients with Sjögren's syndrome. Lin Xiao, Department of Oral and Maxillofacial Surgery, Central Hospital of Fuling, Chongqing, China. Phytomedicine 18:194-8. 2011. This study aimed to investigate the mechanism of Dendrobium candidum extract in promoting expression of aquaporin-5 for treatment of Sjögren's syndrome (SS)...
- Thermostable mannose-binding lectin from Dendrobium findleyanum with activities dependent on sulfhydryl content. Runglawan Sudmoon, Department of Biochemistry, Faculty of Science, Khon Kaen University, Khon Kaen 40002, Thailand. Acta Biochim Biophys Sin (Shanghai) 40:811-8. 2008. A mannose-binding lectin was purified from Dendrobium (D.) findleyanum pseudobulb using mannan-agarose column chromatography...
- In vitro and in vivo antioxidant activity of a water-soluble polysaccharide from Dendrobium denneanum. Aoxue Luo, Department of Landscape Plants, Chengdu Campus of Sichuan Agriculture University, Chengdu 611130, China. Molecules 16:1579-92. 2011. The water-soluble crude polysaccharide (DDP) obtained from the aqueous extracts of the stem of Dendrobium denneanum through hot water extraction followed by ethanol precipitation was found to have an average molecular weight (Mw) of ...
- Composition analysis and antioxidant activity of polysaccharide from Dendrobium denneanum. Yijun Fan, College of Life Sciences, Sichuan University, Chengdu 610064, PR China. Int J Biol Macromol 45:169-73. 2009. ...polysaccharide fractions (DDP1-1, DDP2-1 and DDP3-1) were successfully purified from the crude polysaccharide of Dendrobium denneanum by DEAE-Cellulose and Sephadex G-200 column chromatography...
- Accurate identification of closely related Dendrobium species with multiple species-specific gDNA probes. Tongxiang Li, Chien Shiung Wu Laboratory, Southeast University, Nanjing 210096, PR China. J Biochem Biophys Methods 62:111-23. 2005. About 63 species of Dendrobium are identified in China, making the identification of the origin of a particular Dendrobium species on the consumer market very difficult...
- Dendrobium findleyanum agglutinin: production, localization, anti-fungal activity and gene characterization. Nison Sattayasai, Department of Biochemistry, Faculty of Science, Khon Kaen University, Muang, Khon Kaen 40002, Thailand. Plant Cell Rep 28:1243-52. 2009. The recently reported Dendrobium findleyanum agglutinin (DFA) was identified and determined in different parts of D...
- Differentiation of Dendrobium species used as "Huangcao Shihu" by rDNA ITS sequence analysis. Hong Xu, Institute of Chinese Materia Medica, Shanghai University of Traditional Chinese Medicine, Shanghai, PR China. Planta Med 72:89-92. 2006. The genus Dendrobium Sw...
- Antioxidant and anti-hyperglycemic activity of polysaccharide isolated from Dendrobium chrysotoxum Lindl. Yaping Zhao, Research Center of Bioactive Materials, Chonbuk National University, Chonju 561 756, Korea. J Biochem Mol Biol 40:670-7. 2007. Although polysaccharide is believed to play an important role in the medicinal effect of Dendrobium chrysotoxum Lindl (DCL), its role as an antioxidant and in anti-hyperglycemic induction was not reported...
- A new phenanthrene with a spirolactone from Dendrobium chrysanthum and its anti-inflammatory activities. Li Yang, Key Laboratory of Standardization of Chinese Medicines of Ministry of Education, Institute of Chinese Materia Medica, Shanghai University of Traditional Chinese Medicine, 1200 Cailun Road, Zhangjiang Hi Tech Park, Shanghai 201203, PR China. Bioorg Med Chem 14:3496-501. 2006. Investigation of phenolic patterns from the stems of Dendrobium chrysanthum by HPLC-PDA-MS has led to the isolation of a new phenanthrene derivative with a spirolactone ring, dendrochrysanene (1), that proved to suppress the mRNA level of ...
- Identification of medicinal Dendrobium species by phylogenetic analyses using matK and rbcL sequences. Haruka Asahina, Laboratory of Food Chemistry, Faculty of Humanities and Sciences, Ochanomizu University, 2 1 1 Ohtsuka, Bunkyo ku, Tokyo, Japan. J Nat Med 64:133-8. 2010. Species identification of five Dendrobium plants was conducted using phylogenetic analysis, and the validity of the method was verified...
- Functional characterisation of a cytokinin oxidase gene DSCKX1 in Dendrobium orchid. Shu Hua Yang, Plant Growth and Development Laboratory, Department of Biological Sciences, Faculty of Science, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Republic of Singapore. Plant Mol Biol 51:237-48. 2003. ...We have cloned a novel putative cytokinin oxidase, DSCKX1 (Dendrobium Sonia cytokinin oxidase), by mRNA differential display from shoot apices of Dendrobium Sonia cultured in the ...
- In vitro antioxidant activities of a water-soluble polysaccharide derived from Dendrobium nobile Lindl. extracts. Aoxue Luo, College of Life Sciences, Sichuan University, Chengdu 610064, PR China. Int J Biol Macromol 45:359-63. 2009. A water-soluble polysaccharide (DNP), isolated from the aqueous extracts of the stem of Dendrobium nobile Lindl., was found to have an average molecular weight (Mw) of about 8.76 x 10(4) Da...
- DNA microarray for identification of the herb of Dendrobium species from Chinese medicinal formulations. Yan Bo Zhang, Department of Biochemistry, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China. Planta Med 69:1172-4. 2003. A DNA microarray for detecting processed medicinal Dendrobium species (Herba Dendrobii) was constructed by incorporating the ITS1-5.8S-ITS2 sequences of 16 Dendrobium species on a glass slide...
- Simultaneous determination of phenols (bibenzyl, phenanthrene, and fluorenone) in Dendrobium species by high-performance liquid chromatography with diode array detection. Li Yang, Department of Pharmacognosy, China Pharmaceutical University, 1 Shennong Road, Nanjing 210038, Jiangsu Province, PR China. J Chromatogr A 1104:230-7. 2006. ...77-104.92%). The developed method was applied to the simultaneous determination of 11 phenols from a total of 31 Dendrobium species (mainly medicinal plants) as well as four other samples from the similar genera Pholidota, ...
- Genetic diversity analysis and conservation of the endangered Chinese endemic herb Dendrobium officinale Kimura et Migo (Orchidaceae) based on AFLP. Xuexia Li, Jiangsu Key Laboratory for Biodiversity and Biotechnology, College of Life Sciences, Nanjing Normal University, Nanjing, China. Genetica 133:159-66. 2008. Dendrobium officinale is a critically endangered perennial herb endemic to China. Determining the levels of genetic diversity and patterns of population genetic structure of this species would assist in its conservation and management...
- [Study on sequence difference and SNP phenomenon of rDNA ITS region in F type and H type population of Dendrobium officinale] Xiao Yu Ding, Department of Pharmacognosy, School of Chinese Pharmacy. Zhongguo Zhong Yao Za Zhi 27:85-9. 2002. To study rDNA ITS sequence differences between the F type and the H type of Dendrobium officinale in its main habitat in China.
- Cloning and transcription analysis of an AGAMOUS- and SEEDSTICK ortholog in the orchid Dendrobium thyrsiflorum (Reichb. f.). Martin Skipper, Institute of Biology, University of Copenhagen, Gothersgade 140, DK 1123 Copenhagen K, Denmark. Gene 366:266-74. 2006. ...We have cloned an AG- and STK ortholog in the orchid Dendrobium thyrsiflorum, named DthyrAG1 and DthyrAG2, respectively, and analyzed their expression patterns...
- Intersimple sequence repeats (ISSR) molecular fingerprinting markers for authenticating populations of Dendrobium officinale Kimura et Migo. Jie Shen, Jiangsu Key Laboratory for Biodiversity and Biotechnology, College of Life Sciences, Nanjing Normal University, China. Biol Pharm Bull 29:420-2. 2006. ...sequence repeats (ISSR) molecular fingerprinting markers have been employed to authenticate eight populations of Dendrobium officinale using 10 primers selected from 76 ISSR primers...
- Evaluation of different promoters driving the GFP reporter gene and selected target tissues for particle bombardment of Dendrobium Sonia 17. C S Tee, Department of Biochemistry and Microbiology, Faculty of Science and Environmental Studies, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia. Plant Cell Rep 21:452-8. 2003. ...gfp) gene driven by different promoters, CaMV 35S, HBT, and Ubi1 were tested for the genetic transformation of Dendrobium Sonia 17...
- Genetic diversity across natural populations of Dendrobium officinale, the endangered medicinal herb endemic to China, revealed by ISSR and RAPD markers. Ge Ding, Jiangsu Provincial Key Laboratory for Biodiversity and Biotechnology, College of Life Sciences, Nanjing Normal University, Nanjing 210046, China. Genetika 45:375-82. 2009. Dendrobium officinale is a rare and endangered herb with special habitats and endemic to China...
- Immunomodulatory sesquiterpene glycosides from Dendrobium nobile. Qinghua Ye, Shanghai Institute of Materia Medica, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, People's Republic of China. Phytochemistry 61:885-90. 2002. ...glycosides with alloaromadendrane, emmotin, and picrotoxane type aglycones were isolated from the stems of Dendrobium nobile Lindl (Orchidaceae). Their structures were determined by spectroscopic methods and chemical reactions...
- [Experiment on seedling induction from seed of Dendrobium candidum] Gang Du, The College of Chemistry and Biotechnology, Yunnan Nationalities University, Kunming 650031, China. Zhong Yao Cai 30:1207-8. 2007. ...28 mg/L + sucrose 20 g/L + banana mud 100 g/L + active carbon 10 g/L. The VW + NAA 0.18 mg/L + 6-BA 0.53 mg/L + GA 0.28 mg/L + sucrose 20 g/L + banana mud 100 g/L + active carbon 10 g/L was best for rooting of the plantlets...
- [Genetic diversity and molecular authentication of wild populations of Dendrobium officinale by RAPD] Ge Ding, Jiangsu Key Laboratory for Bioresource Technology, College of Life Sciences, Nanjing Normal University, Nanjing 210097, China. Yao Xue Xue Bao 40:1028-32. 2005. Genetic diversity, relationship and molecular authentication of a total of 8 wild populations of Dendrobium officinale were investigated using RAPD markers.
- [Effect of polysaccharides content of tissue culturing seedlings of Dendrobium candidum under sound wave stimulation] Biao Li, Key Laboratory for Biomechanics and Tissue Engineering under the State Ministry of Education, College of Bioengineering, Chongqing University, China. Zhong Yao Cai 29:645-7. 2006. To detect the polysaccharides content of tissue culturing seedlings of Dendrobium candidum under special sound wave stimulation.
- Antifibrotic phenanthrenes of Dendrobium nobile stems. Hyekyung Yang, College of Pharmacy and Research Institute of Pharmaceutical Science, Seoul National University, Seoul, Korea. J Nat Prod 70:1925-9. 2007. ...phenanthrenes (1 and 6) and four new dihydrophenanthrenes (2-5) were isolated from a methanolic extract of Dendrobium nobile stems, along with 13 known phenanthrenes and dihydrophenanthrenes (7-19)...
- [Studies on the seed embryo germination and propagation of Dendrobium candidum in vitro] Gui xiang Tang, Department of Agronomy, College of Agriculture and Biotechnology, Zhejiang University, Hangzhou 310029, China. Zhongguo Zhong Yao Za Zhi 30:1583-6. 2005. To determine optimum culture conditions for the seed embryo culture and rapid propagation of Dendrobium candidum.
- Two new compounds from Dendrobium candidum. Yan Li, Institute of Medicinal Plant Development, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China. Chem Pharm Bull (Tokyo) 56:1477-9. 2008. Two new compounds were isolated from the stems of Dendrobium candidum: (R)-3,4-dihydroxy-5,4',alpha-trimethoxybibenzyl (1), named dendrocandin A; and 4-[2-[(2S,3S)-3-(4-hydroxy-3,5-dimethoxyphenyl)-2-hydroxymethyl-8-methoxy-2,3-...
- SNP, ARMS and SSH authentication of medicinal Dendrobium officinale KIMURA et MIGO and application for identification of Fengdou drugs. Ge Ding, Jiangsu Provincial Key Laboratory for Biodiversity and Biotechnology, College of Life Sciences, No 1, Wenyuan Road, Nanjing 210046, China. Biol Pharm Bull 31:553-7. 2008. Dried stems of Dendrobium officinale have been used as crude drugs in traditional Chinese medicine (TCM) with good tonic efficacy...
- [The comparison of different processing methods of Dendrobium loddigesii] Kong yuan Wu, College of Chinese Materia Medica, Beijing University of Traditional Chinese Medicine, Beijing 100102, China. Zhong Yao Cai 30:1067-9. 2007. To study the 4 different processing methods of Dendrobium loddigesii and find an optimal method.
- Bio-guided isolation of antioxidants from the stems of Dendrobium aurantiacum var. denneanum. Li Yang, Key Laboratory of Standardization of Chinese Medicines of Ministry of Education, Institute of Chinese Materia Medica, Shanghai University of Traditional Chinese Medicine, 201203 Shanghai, PR China. Phytother Res 21:696-8. 2007. Bio-guided fractionation of the stems of Dendrobium aurantiacum var...
- High-performance liquid chromatography-diode array detection/electrospray ionization mass spectrometry for the simultaneous analysis of cis-, trans- and dihydro-2-glucosyloxycinnamic acid derivatives from Dendrobium medicinal plants. Li Yang, Key Laboratory of Standardization of Chinese Medicines of Ministry of Education, Shanghai University of Traditional Chinese Medicine, 1200 Cailun Road, Zhangjiang Hi Tech Park, Shanghai 201203, PR China. Rapid Commun Mass Spectrom 21:1833-40. 2007. ...the major 2-glucosyloxycinnamic acids, cis-melilotoside, trans-melilotoside and dihydromelilotoside, present in Dendrobium medicinal plants...
- [Allele-specific diagnostic PCR authentication of Dendrobium thyrsiflorum] Yi Ying, Key Laboratory of Standardization of Chinese Medicines of the Ministry of Education, Institute of Chinese Materia Medica, Shanghai University of Traditional Chinese Medicine, Shanghai 201203, China. Yao Xue Xue Bao 42:98-103. 2007. ...thyrsiflorum Rchb. f. Based on rDNA ITS sequences of the 164 samples from 109 Dendrobium species sequenced and quoted from GenBank, the allele-specific diagnostic primers QH-JB1 and QH-JB2 for ...
- [Studies on chemical constituents in stem of Dendrobium chrysotoxum] Yan Qing Gong, China Pharmaceutical University, Nanjing 210038, China. Zhongguo Zhong Yao Za Zhi 31:304-6. 2006. To investigate the chemical constituents of Dendrobium chrysotoxum.
- Copacamphane, picrotoxane and cyclocopacamphane sesquiterpenes from Dendrobium nobile. Xue Zhang, Key Laboratory for Research and Development of New Drugs from Traditional Chinese Medicine and Natural Products in Shenzhen, Research Institute of Tsinghua University in Shenzhen, Shenzhen, PR China. Chem Pharm Bull (Tokyo) 56:854-7. 2008. ...4) skeletons, were isolated from the n-BuOH soluble fraction of the 60% ethanol extract of the stems of Dendrobium nobile...
- [Studies on constituents of Dendrobium gratiosissimum] Min Wang, Department of Pharmacognosy, China Pharmaceutical University, Nanjing 210038, China. Zhongguo Zhong Yao Za Zhi 32:701-3. 2007. To study the chemical constituents of Dendrobium gratiosissimum.
- [Studies on population difference of Dendrobium officinale II: establishment and optimization of the method of ISSR fingerprinting marker] Jie Shen, Jiangsu Key Laboratory for Biodiversity and Biotechnology, College of Life Sciences, Nanjing Normal University, Nanjing 210097, China. Zhongguo Zhong Yao Za Zhi 31:291-4. 2006. ...To establish and optimize the ISSR-PCR system of Dendrobium officinale according to the ISSR-PCR characters of D. officinale...
- A simple and convenient approach for isolating RNA from highly viscous plant tissue rich in polysaccharidesBiao Li Key Laboratory for Biomechanics and Tissue Engineering under the State Ministry of Education, College of Bioengineering, Chongqing University, Chongqing 400044, China Colloids Surf B Biointerfaces 49:101-5. 2006RNA isolation is a prerequisite to the study of gene expression of herbaceous plant Dendrobium nobile under an environmental stress... - [Comparison on growth, physiology and medicinal components of Dendrobium huoshanense hybrid and its parents]Shu Wang College of Life Sciences, Anhui Agricultural University, Hefei 230036, China Zhongguo Zhong Yao Za Zhi 31:1401-4. 2006To compare the hybrid between species of Dendrobium huoshanense and its parents on growing, physiologic indexes and content of medicinal components, and provide theoretical basis for species quality improvement. - Structural characterization of a 2-O-acetylglucomannan from Dendrobium officinale stemYun Fen Hua Lab of Plant Systematic Evolution and Biodiversity, College of Life Sciences, Zhejiang University, Hangzhou 310029, China Carbohydr Res 339:2219-24. 2004A heteropolysaccharide obtained from an aqueous extract of dried stem of Dendrobium officinale Kimura and Migo by anion-exchange chromatography and gel-permeation chromatography, was investigated by chemical techniques and NMR .. - Copacamphane, picrotoxane, and alloaromadendrane sesquiterpene glycosides and phenolic glycosides from Dendrobium moniliformeChunsheng Zhao Shanghai Institute of Materia Medica, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 201203, People s Republic of China J Nat Prod 66:1140-3. 2003..sesquiterpene aglyons along with three phenolic glycosides have been isolated from the stems of Dendrobium moniliforme... 
- [Database establishment of the whole rDNA ITS region of Dendrobium species of "fengdou" and authentication by analysis of their sequences]Xiao Yu Ding Department of Pharmacognosy, China Pharmaceutical University, College of Life Sciences, Nanjing Normal University, Nanjing, China Yao Xue Xue Bao 37:567-73. 2002To establish the whole rDNA ITS region sequence database of various Dendrobium species of "Fengdou" and to authenticate exactly the inspected species of "Fengdou". - [Primary study on photosynthetic characteristics of Dendrobium nobile]Wenhua Su Institute of Ecology and Geobotany, Yunnan University, Kunming 650091 Zhong Yao Cai 26:157-9. 2003With LiCor-6400 Portable Photosynthesis System, carbon dioxide exchange pattern for leaves of Dendrobium nobile during 24 hours were studied in sunny day and rainy day, and the variation of CO2 exchange rate to light intensity was .. - [Analysis of 1H-NMR fingerprint in stem of Dendrobium loddigesii]Hai Lin Qin Key Laboratory of Chemistry for Natural Products of Guizhou Province and Chinese Academy of Sciences, Guiyang 550002, Guizhou, China Zhongguo Zhong Yao Za Zhi 27:919-23. 2002To analyse the 1H-NMR finger-print of the stem of Dendrobium loddigesii. - [Studies on polysaccharide alkaloids and minerals from Dendrobium moniliforme (L.) Sw.]Y L Chen College of Life Science, Zhejiang University, Hangzhou 310012, Zhejiang, China Zhongguo Zhong Yao Za Zhi 26:709-10, 4. 2001Objective: To explore contents of active substances in different part of Dendrobium monilifrome and the quality influenced by different drying processes... - [Studies on second metabolites of an endophytic fungus (II)]Neng Jiang Yu Institute of Medicinal Plant Development, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100094, China Zhongguo Zhong Yao Za Zhi 27:204-6. 2002..To study the chemical constituents from the cultured mycelia of a fungus Cephalosporium which accelerate the growth of plant... 
- [Comparative study of tissue cultured Dendrobium protocorm with natural Dendrobium candidum on immunological function]Jianping Gao Shanghai University of Traditional Chinese Medicine, Shanghai 200032 Zhong Yao Cai 25:487-9. 2002To compare the immunological function and acute toxicity of the tissue cultured protocorm from Dendrobium candidum with natural medicinal materials from Dendrobium candidum. - [Researches on morphology and anatomy of root system of Dendrobium nobile Lindl.]M Zhang Chongqing Academy of Chinese Materia Medica, Chongqing 400065, China Zhongguo Zhong Yao Za Zhi 26:384-5. 2001OBJECTIVE: To understand the morphological and anatomical characteristics of Dendrobium nobile so as to provide scientific basis for its domestication and cultivation... - [Studies on tissue culture of Dendrobium chrysotoxum Lindl in vitro]H Xu China Pharmaceutical University, Nanjing 210038, Jiangsu, China Zhongguo Zhong Yao Za Zhi 26:378-81. 2001OBJECTIVE: To set up a system for the culture of Dendrobium chrysotoxum in vitro. METHOD: Tissue culture, fire fly luminescence and phenol-H2SO4 method... - [Comparison with the content of total alkaloid of Dendrobium nobile in different growing conditions]M Zhang Chongqing Academy of Chinese Materia Medica, Chongqing 400065 Zhong Yao Cai 24:707-8. 2001This article reported the contents of total alkaloid from Dendrobium nobile Lindel, in different growing conditions... - [Effects of additives on tissue culture and rapid propagation of Dendrobium canducum]Lin Jiang Zhongkai Agrotechnical College, Guangzhou 510225 Zhong Yao Cai 26:539-41. 2003Effects of additives on tissue culture and rapid propagation of Dendrobium canducum Walla. ex Lindal. by 1/2 MS added activated charcoal, plant growth regulators, banana juice, apple juice and potato juice were studied... 
- [Diseases and its control on Dendrobium]Songjun Zeng Botanical Garden, South China Institute of Botany, Chinese Academy of Sciences, Guangzhou 510520 Zhong Yao Cai 26:471-4. 2003The diseases on the Dendrobium plants and their occurrence and damage as well as control methods have been investigated and studied. 11 kinds of fungi, 4 kinds of bacteria, 3 kinds of virus and one root-knot nematode were recorded... - [Application of FTIR spectroscopy to the analysis of eleven kinds of Dendrobium]Xian kang Lv State medicinal Group Hanszhou Xinya Co, Ltd, Hangzhou 310003, China Zhongguo Zhong Yao Za Zhi 30:738-40. 2005To establish an FTIR method for the analysis of Dendrobium. - [Location and relative quantity of coumarins in the stem of Dendrobium thyrsiflorum]Yan Zheng School of Chinese Pharmacy Medicine, China Pharmaceutical University, Nanjing 210038, China Yao Xue Xue Bao 40:236-40. 2005To determine the location and relative quantity of coumarins in the stem of Dendrobium thyrsiflorum Rchb. f. , and to provide a scientific basis for evaluating and utilizing the famous medicinal plant. - [Induction and culture of hairy-root by Agrobacteriun rhizogenes in Dendrobium nobile]Fenghua Li Biology Department, Zunyi Normal College, Zunyi 563002 Zhong Yao Cai 27:712-4. 2004To obtain the hairy-roots of Dendrobium nobile. - [Studies on the growth, chemical components and physiological characteristics of F1 generation of Dendrobium huoshanense]Yong Ping Cai School of Life Science, Anhui Agricultural University, Hefei 230036, China Zhongguo Zhong Yao Za Zhi 30:1064-8. 2005Through a comparison between F1 and its' parents on the growth, chemical components and physiology, this study aims to find the possibility of selecting new dendrobium hybrids with high yield and good quality. 
- [Studies on anti-hyperglycemic effect and its mechanism of Dendrobium candidum]Hao Shu Wu Department of Pharmacology and Toxicology, Collage of Pharmaceutical Sciences, Zhejiang University, Hangzhou 310031, China Zhongguo Zhong Yao Za Zhi 29:160-3. 2004To study the anti-hyperglycemic effect and its mechanism of Dendrobium candidum (DC). - [Studies on tissue culture of Dendrobium lituiflorum]Jun Chang School of Life Science, Nanjing University, Nanjing Normal 210097, China Zhongguo Zhong Yao Za Zhi 29:313-7. 2004To select optimal media and conditions for the tissue culture of Dendrobium lituiforum. - [Molecular identification of medicinal plants: Dendrobium chrysanthum, Dendrobium fimbriatum and their morphologically allied species by PCR-RFLP analyses]Ting Zhang Department of Pharmacognosy, China Pharmaceutical University, Nanjing 210038, China Yao Xue Xue Bao 40:728-33. 2005..To establish a simple method for molecular identification of original plants of D. chrysanthum and D. fimbriatum using molecular marker rDNA ITS region... - Enantioselective synthesis of (-)-dendroprimine and isomersAxelle De Saboulin Bollena Departement de Chimie, UMR 6504, CNRS et Université Blaise Pascal Clermont Ferrand, 63177 Aubiere, France J Nat Prod 67:1029-31. 2004..The first total synthesis of naturally occurring (-)-dendroprimine has been achieved in five steps... - [Studies on transplanting suspension-cultured protocorms of Dendrobium candidum onto solid culture medium]Pi yong Hou Institute of Medical Plant Development, Chinese Academy of Medicine Science and Peking Union Medicine college, Beijing 100094, China Zhongguo Zhong Yao Za Zhi 30:729-32. 2005The suspension-cultured protocorms of Dendrobium candidum were transplanted on the solid culture medium for studying the factors influencing their differentiation and growth. 
- [Effects of extracts of Chinese medicines on Ganoderma lucidum in submerged culture]Hailong Yang Key Laboratory of Industrial Biotechnology of Ministry of Education, Southern Yangtze University, Wuxi 214036, China Wei Sheng Wu Xue Bao 43:519-22. 2003..lucidum, but the ethanol extracts of Angelica sinensis, Dendrobium nobile check the growth of G. lucidum... - Natural phenanthrenes and their biological activityAdriána Kovács Department of Pharmacognosy, University of Szeged, Eotvos u 6, H 6720 Szeged, Hungary Phytochemistry 69:1084-110. 2008..number of phenanthrenes have been reported from higher plants, mainly in the Orchidaceae family, in the species Dendrobium, Bulbophyllum, Eria, Maxillaria, Bletilla, Coelogyna, Cymbidium, Ephemerantha and Epidendrum... - Moscatilin from Dendrobium nobile, a naturally occurring bibenzyl compound with potential antimutagenic activityM Miyazawa Department of Applied Chemistry, Kinki University, Kowakae, Osaka, Japan J Agric Food Chem 47:2163-7. 1999A bibenzyl compound that possesses antimutagenic activity was isolated from the storage stem of Dendrobium nobile... - Brevipalpus mites Donnadieu (Prostigmata: Tenuipalpidae) associated with ornamental plants in Distrito Federal, BrazilLetícia C Miranda Lab Quarentena Vegetal, EMBRAPA Recursos Geneticos e Biotecnologia, Brasilia, DF, Brazil Neotrop Entomol 36:587-92. 2007..Pelargonium hortorum L.H. Bailey, Hibiscus rosa-sinensis L. and orchids (Dendrobium and Oncidium)... - Elution-extrusion counter-current chromatography separation of five bioactive compounds from Dendrobium chrysototxum LindlShucai Li State Key Laboratory of Biotherapy and Cancer Centre, West China Hospital, West China Medical School, Sichuan University, Chengdu, China J Chromatogr A 1218:3124-8. 2011..the time point for extrusion was developed in theory and five bioactive compounds from the extract of Dendrobium chrysototxum Lindl. were separated and compared using normal CCC and EECCC method... 
- Functional analysis of a RING domain ankyrin repeat protein that is highly expressed during flower senescenceXinjia Xu Department of Plant Sciences, University of California Davis, One Shields Avenue, Davis, CA 95616, USA J Exp Bot 58:3623-30. 2007..in carnation petals was high during senescence, no expression was detected in three monocotyledonous flowers--daylily (Hemerocallis 'Stella d'Oro'), daffodil (Narcissus pseudonarcissus 'King Alfred'), and orchid (Dendrobium 'Emma White'). - [Effect of salicylic acid on cell growth and polysaccharide production in suspension cultures of protocorm-like bodies from Dendrobium huoshanense]Bo Wang School of Biotechnology and Food Engineering, Hefei University of Technology, Hefei 230009, China Sheng Wu Gong Cheng Xue Bao 25:1062-8. 2009Polysaccharides from Dendrobium huoshanense possess immunostimulating activity, antioxidant activity and anticataract activity... - Early in vitro flowering and seed production in culture in Dendrobium Chao Praya Smile (Orchidaceae)Kim Hor Hee Department of Biological Sciences, National University of Singapore, Singapore Plant Cell Rep 26:2055-62. 2007Plantlets of Dendrobium Chao Praya Smile maintained in vitro were induced to flower, which produced viable seeds within about 11 months... - Mycorrhizal fungi of Vanilla: diversity, specificity and effects on seed germination and plant growthAndrea Porras-Alfaro Departamento de Biologia, Universidad de Puerto Rico Rio Piedras, PO Box 23360, San Juan, Puerto Rico 00931 Mycologia 99:510-25. 2007..and Tulasnella were tested for effects on seed germination of Vanilla and effects on growth of Vanilla and Dendrobium plants. We found significant differences among fungi in effects on seed germination and plant growth... - New phenanthrenes and stilbenes from Dendrobium loddigesiiMasanobu Ito Research Unit of Pharmacognosy, School of Pharmacy, Nihon University, 7 7 1 Narasinodai, Funabashi, Chiba 274 8555, Japan Chem Pharm Bull (Tokyo) 58:628-33. 
2010..A (1) and B (7), and stilbenes, loddigesiinols C (8) and D (9), were isolated from 80% ethanol extract of Dendrobium loddigesii ROLFE (Orcharedceae) along with known compounds, including five phenanthrenes, three stilbenes, two .. - [Pyrolysis-gas chromatographic fingerprints with hierarchical cluster analysis for Dendrobium candidum Wall. ex Lindl]Lili Wang College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310014, China Se Pu 26:613-7. 2008The pyrogram fingerprints of Dendrobium candidum Wall. ex Lindl. from different regions were studied by pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS) and compared with hierarchical cluster analysis... - Do plastids in Dendrobium cv. Lucky Duan petals function similar to autophagosomes and autolysosomes?Wouter G van Doorn Mann Laboratory, Department of Plant Sciences, University of California, Davis, USA Autophagy 7:584-97. 2011..Here, we report that plastids in Dendrobium cv. Lucky Duan petals produced an endocytosis-like invagination of the two outer membranes... - Identification of dendrobium species used for herbal medicines based on ribosomal DNA internal transcribed spacer sequenceTomoko Takamiya School of Pharmacy, Nihon University, Japan Biol Pharm Bull 34:779-82. 2011Stems of genus Dendrobium (Orchidaceae) have been traditionally used as an herbal medicine (Dendrobii Herba) in Eastern Asia. Although demand for Dendrobium is increasing rapidly, wild resources are decreasing due to over-collection... - Functional analysis of three lily (Lilium longiflorum) APETALA1-like MADS box genes in regulating floral transition and formationMing Kun Chen Graduate Institute of Biotechnology, National Chung Hsing University, Taichung, Taiwan 40227, ROC Plant Cell Physiol 49:704-17. 2008..LMADS6 is closely related to LMADS5 whereas LMADS7 is more related to DOMADS2, an orchid (Dendrobium) gene in the SQUA subfamily... 
- [Effects of germanium on cell growth, polysaccharide production and cellular redox status in suspension cultures of protocorm-like bodies of Dendrobium huoshanense]Ming Wei Department of Biology and Chemistry, Anhui University of Technology and Science, Wuhu 241000, China Sheng Wu Gong Cheng Xue Bao 26:371-7. 2010..the problem of low growth rate and metabolism level in suspension cultures of protocorm-like bodies (PLBs) of Dendrobium huoshanense... - Earliest orchid macrofossils: Early Miocene Dendrobium and Earina (Orchidaceae: Epidendroideae) from New ZealandJohn G Conran Australian Centre for Evolutionary Biology and Biodiversity, Discipline of Ecology and Evolutionary Biology, DP312, School of Earth and Environmental Sciences, The University of Adelaide, SA 5005 Australia Am J Bot 96:466-74. 2009Fossil leaves of two Early Miocene orchids (Dendrobium and Earina) are reported from New Zealand... - Isolation and identification of Rhizoctonia-like fungi from roots of three orchid genera, Paphiopedilum, Dendrobium, and Cymbidium, collected in Chiang Rai and Chiang Mai provinces of ThailandSureeporn Nontachaiyapoom School of Science, Mah Fah Luang University, 333 Moo 1, Thasud, Muang District, Chiang Rai 57100, Thailand Mycorrhiza 20:459-71. 2010Three orchid genera, Paphiopedilum, Cymbidium, and Dendrobium, are among the most heavily traded ornamental plants in Thailand... - A new phenanthrenequinone from Dendrobium draconisBoonchoo Sritularak Department of Pharmacognosy and Pharmaceutical Botany, Faculty of Pharmaceutical Sciences, Chulalongkorn University, Bangkok, Thailand J Asian Nat Prod Res 13:251-5. 2011A number of Dendrobium species (Orchidaceae) have been used as health foods. In Thailand, the tea prepared from the stems of Dendrobium draconis Rchb.f. (Orchidaceae) has been used as a blood tonic... - Fast determination of five components of coumarin, alkaloids and bibenzyls in Dendrobium spp. 
using pressurized liquid extraction and ultra-performance liquid chromatographyJun Xu Institute of Chinese Medical Sciences, University of Macau, Macao SAR, P R China J Sep Sci 33:1580-6. 2010A fast method for simultaneous determination of five compounds (one coumarin, two alkaloids and two bibenzyls) in Dendrobium spp... - Confirmation by DNA analysis that Contarinia maculipennis (Diptera: Cecidomyiidae) is a polyphagous pest of orchids and other unrelated cultivated plantsN Uechi Entomological Laboratory, Graduate School of Bioresource and Bioenvironmental Sciences, Kyushu University, Fukuoka 812 8581, Japan Bull Entomol Res 93:545-51. 2003..This pest of Dendrobium flower buds in glasshouses is considered to have entered Hawaii, Florida and Japan from Southeast Asia, and was .. - The nucleotide sequence of the coat protein gene and 3' untranslated region of azuki bean mosaic potyvirus, a member of the bean common mosaic virus subgroupC W Collmer Department of Biological and Chemical Sciences, Wells College, Aurora, NY 13026, USA Mol Plant Microbe Interact 9:758-61. 1996..The deduced amino acid sequence of the CP is 94% identical to that of dendrobium mosaic virus, establishing the two as strains of the same virus...
This is an archive page. The links are no longer being updated.

June 6, 2002

Mr. Chairman and Members of the Committee, I am Dr. Audrey Penn, Acting Director of the National Institute of Neurological Disorders and Stroke (NINDS). I am pleased to be here before you today to discuss our efforts in addressing stroke – the third leading cause of death in the United States after heart disease and cancer, and a leading cause of long-term disability.

The National Institute of Neurological Disorders and Stroke at the National Institutes of Health (NIH) is the leading federal organization committed to research on improving stroke prevention, treatment, and recovery through increased understanding of how to protect and restore the brain. Historically, NINDS has provided more funding to stroke research than to any other single disease or disorder within our mission. In Fiscal Year (FY) 2001, NINDS funding for stroke research was more than $117 million, and the NIH total was nearly $239 million. More importantly, our stroke programs impact all areas of scientific opportunity and public health priority – from stroke awareness to rehabilitation – and are advancing cutting-edge knowledge about ways to prevent, diagnose, and treat stroke, and to educate the public and health professionals about it.

As many of you know, a stroke is a "brain attack" caused by an interruption of blood flow to the brain. There are two different types of stroke – ischemic and hemorrhagic. Ischemic strokes occur when blood flowing to a region of the brain is reduced or blocked, either by a blood clot or by the narrowing of a vessel supplying blood to the brain. Approximately 80 percent of all strokes are ischemic. The remaining 20 percent of strokes are caused by the rupture of a blood vessel and leakage of blood into the brain tissue.
These hemorrhagic strokes can occur from the rupture of an aneurysm, which is a blood-filled sac ballooning from a vessel wall, or from leakage through a vessel wall weakened by an underlying condition such as high blood pressure.

At every conceivable level, stroke is a tremendous public health burden to our country. More than 600,000 people experience a stroke each year. Of the more than 4 million stroke survivors alive today, many experience permanent impairments of their ability to move, think, understand and use language, or speak – losses that compromise their independence and quality of life. Furthermore, stroke risk increases with age, and as the American population grows older, the number of persons at risk for experiencing a stroke is increasing.

Over the past several decades, NINDS has supported some of the most significant achievements in stroke research, which have contributed to reductions in the death rate from stroke. We continue to be committed to reducing this burden.

Historical Progress in Stroke Prevention and Treatment

NINDS has a long and distinguished history of supporting productive clinical studies in the field of stroke prevention and acute treatment. Indeed, successes in prevention date back more than twenty years, and the remarkable progress since then reflects the sustained efforts of private organizations, NIH, and other government agencies. Stroke prevention is also highly cost-effective because it averts the direct costs of hospitalization and rehabilitation. As NINDS celebrated its 50th anniversary, the U.S. Centers for Disease Control and Prevention estimated that the age-standardized stroke death rate declined by 70 percent for the U.S. population from 1950 to 1996 [MMWR Weekly 48:649-56, 1999], and the American Heart Association tallied a 15 percent decline just from 1988 to 1998.
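The two cumulative declines quoted above can be restated as implied average annual rates of decline. The sketch below is a back-of-envelope derivation, not a figure from the testimony; it simply assumes the decline compounded evenly over each period:

```python
# Back-of-envelope: convert the cumulative declines quoted above into
# implied average annual (compound) rates of decline. These derived
# rates are illustrative; they do not appear in the testimony itself.

def annual_decline(total_decline, years):
    """Average yearly rate that compounds to the observed total decline."""
    remaining = 1.0 - total_decline          # fraction of the rate still left
    return 1.0 - remaining ** (1.0 / years)  # per-year fractional decline

# CDC figure: 70% drop in the age-standardized stroke death rate, 1950-1996.
cdc_rate = annual_decline(0.70, 1996 - 1950)
# AHA figure: 15% drop, 1988-1998.
aha_rate = annual_decline(0.15, 1998 - 1988)

print(f"1950-1996: ~{cdc_rate:.1%} per year")  # roughly 2.6% per year
print(f"1988-1998: ~{aha_rate:.1%} per year")  # roughly 1.6% per year
```

Read this way, the pace of improvement in the 1990s was somewhat slower than the long-run average, which is consistent with the testimony's emphasis on the remaining burden.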
I would like to briefly summarize a few of the major NINDS-supported efforts, encompassing dozens of clinical trials, that have contributed significantly to our knowledge of stroke.

Several early studies investigated medical management approaches to the prevention of recurrent strokes in people with atrial fibrillation (AF). This irregular heart rate and rhythm is a common disorder in older Americans, and a significant stroke risk factor. It has been estimated that two million Americans, primarily over the age of 60, have AF and, as a result, are six times more likely to have a stroke. The drugs aspirin and warfarin had been used to prevent recurrent stroke in these individuals; however, their use was based on little hard scientific evidence.

To address this issue, NINDS supported a series of three trials in Stroke Prevention in Atrial Fibrillation – referred to as the SPAF trials. The SPAF I, II, and III trials evaluated the use of aspirin and warfarin for stroke prevention in more than 3,800 human subjects. The SPAF I study reported in 1990 that both aspirin and warfarin were so beneficial in preventing stroke in patients with atrial fibrillation that the risk of stroke was cut by 50 to 80 percent. The results suggested that 20,000 to 30,000 strokes could be prevented each year with proper treatment.

The SPAF II study results in 1994 identified the 60 percent of people with atrial fibrillation for whom a daily adult aspirin provides adequate protection against stroke with minimal complications. This group consists of those younger than 75, and those older than 75 with no additional stroke risk factors such as high blood pressure or heart disease. SPAF III, which included 1,044 patients at 20 medical centers in the U.S. and Canada, studied the remaining 40 percent of atrial fibrillation patients, who have additional risk factors for stroke and for whom warfarin had been shown to be effective.
The study was stopped ahead of schedule in 1996 because early results clearly demonstrated the benefit of standard warfarin therapy over combination therapy of aspirin and fixed-dose warfarin in these high-risk patients. Other reports have estimated that the use of warfarin to prevent stroke in persons with AF costs as much as $1,000 annually, but a year of post-stroke treatment can cost $25,000. Based on these estimates, optimal use of standard warfarin therapy in the appropriate patients could prevent as many as 40,000 strokes a year in the U.S., and save nearly $600 million a year in health care costs.

Other studies supported by the Institute, such as the Warfarin Antiplatelet Recurrent Stroke Study, the Vitamin Intervention for Stroke Prevention study, the African-American Antiplatelet Stroke Prevention Study, and the Women's Estrogen for Stroke Trial, build on these earlier findings and continue to add to our knowledge about medical interventions that can affect the incidence of stroke in different at-risk groups.

The NINDS has also supported several major studies of surgical approaches to the secondary prevention of stroke. This work has particular significance for people with carotid artery stenosis, a narrowing of the major blood vessels that supply the brain. One definitive study in the late 1970s examined a procedure called extracranial/intracranial (EC/IC) bypass, which had been used for several years as a means to restore blood flow to the brain. The NINDS-funded study of the procedure's effectiveness found that the data did not support its continued use in medical practice to prevent stroke. These findings were of significant benefit to patients, who could avoid the risks and costs of this surgery, and to researchers, who used this information to redirect their attention to other promising approaches.
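The warfarin cost-savings estimate quoted above follows from simple arithmetic on the per-patient figures. A minimal sketch: the quoted numbers give a gross avoided cost, and the net figure depends on how many patients are on therapy, which the testimony does not state, so the 400,000-patient figure below is an illustrative assumption, not a source number:

```python
# Rough cost arithmetic behind the warfarin savings estimate quoted above.
# The number of patients on warfarin is NOT given in the testimony; it is
# treated here as a free parameter to show how the pieces combine.

STROKES_PREVENTED = 40_000   # strokes avoided per year (quoted upper estimate)
POST_STROKE_COST = 25_000    # cost of one year of post-stroke treatment (quoted)
WARFARIN_COST = 1_000        # annual warfarin therapy cost per patient (quoted)

def net_savings(patients_on_warfarin):
    """Avoided post-stroke care costs minus the cost of treating everyone."""
    avoided = STROKES_PREVENTED * POST_STROKE_COST
    therapy = patients_on_warfarin * WARFARIN_COST
    return avoided - therapy

# Gross avoided cost before therapy costs: $1.0 billion per year.
print(net_savings(0))        # 1000000000, i.e. $1.0 billion
# A net figure near the quoted ~$600 million would imply roughly 400,000
# high-risk patients on warfarin (an assumption for illustration only):
print(net_savings(400_000))  # 600000000, i.e. $600 million
```

The point of the sketch is that prevention dominates the arithmetic: each averted stroke saves twenty-five times the annual cost of treating one patient.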
As a result, investigators explored an alternative surgical strategy, called carotid endarterectomy, which involves the removal of fatty deposits, or plaque, in the carotid arteries. In two NINDS-funded trials – the North American Symptomatic Carotid Endarterectomy Trial (NASCET) and the Asymptomatic Carotid Atherosclerosis Study (ACAS) – this approach was examined more extensively.

The results of the 12-year NASCET trial were reported in two stages. The investigators' early data led to a radical change in the recommended treatment for severe (70-99 percent) carotid stenosis, or blockage, when it was determined that, together with appropriate medical care, carotid endarterectomy for patients with severe blockage prevented more strokes than medical treatment alone. NINDS responded to this finding by halting the part of the study involving patients with severe blockage and issuing a nationwide alert asking physicians to consider the study results in making recommendations to their patients.

The rest of the study focused on determining the efficacy of this surgery for symptomatic patients with moderate carotid stenosis (30-69 percent blockage). Those results showed that patients with the higher grades of moderate stenosis (50-69 percent) clearly benefit from surgery; there was no significant benefit for patients with less than 50 percent stenosis. As a result of the NASCET trial, patients with moderate stenosis are better able to decide whether to risk surgery in order to prevent possible future strokes.

In the ACAS trial, carotid endarterectomy was found highly beneficial for persons who are symptom-free but have a carotid stenosis of 60 to 99 percent. In this group, the surgery reduces the estimated 5-year risk of stroke by more than one-half, from about 1 in 10 to less than 1 in 20.
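The ACAS figures above translate into two standard trial metrics, absolute risk reduction and number needed to treat. Neither appears in the testimony, so the sketch below is a derived illustration using the "1 in 10" and "1 in 20" endpoints at face value:

```python
# Derived illustration: express the ACAS result ("1 in 10" falling to
# "1 in 20" over 5 years) as absolute risk reduction (ARR), relative
# risk reduction (RRR), and number needed to treat (NNT). None of these
# figures is stated in the testimony.

risk_without_surgery = 1 / 10   # ~10% estimated 5-year stroke risk
risk_with_surgery = 1 / 20      # <5% estimated 5-year stroke risk

arr = risk_without_surgery - risk_with_surgery   # absolute risk reduction
rrr = arr / risk_without_surgery                 # relative risk reduction
nnt = 1 / arr                                    # surgeries per stroke averted

print(f"ARR: {arr:.0%} over 5 years")  # 5%
print(f"RRR: {rrr:.0%}")               # 50%, i.e. risk cut by half
print(f"NNT: {nnt:.0f}")               # about 20 operations per stroke prevented
```

An NNT of roughly 20 over five years is the kind of figure patients and physicians weigh against surgical risk when deciding whether to operate on asymptomatic stenosis.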
To the long list of studies contributing to improvements in secondary stroke prevention, we can add a more recent NINDS-funded trial, which in 1996 resulted in the first FDA-approved acute treatment for ischemic stroke. This therapy – tissue plasminogen activator, or t-PA – dissolves blood clots and restores blood flow if given intravenously within the first three hours after an ischemic stroke. Patients must be screened carefully before receiving t-PA, since it is not appropriate for use in treating hemorrhagic stroke and should not be given beyond the three-hour window. However, in carefully selected patients, use of t-PA can achieve a complete recovery.

Unfortunately, many, indeed most, stroke patients do not receive t-PA, either because they do not arrive at the hospital in time to be evaluated and treated within the crucial three-hour window of effectiveness, or because hospitals are not prepared to rapidly identify and treat them. It is this dual challenge that NINDS is actively pursuing through the development of model systems and through education and outreach efforts discussed later in my testimony.

Within the framework of these historical successes, NINDS continues to build its basic science and clinical stroke programs, and to reap the rewards of past investments. A sampling of these recent advances includes:

The use of medical therapy to prevent recurrent stroke in people without cardiac risk factors

As described above, past clinical studies provided important information about preventing recurrent stroke in people with cardiac arrhythmias. However, it has been difficult for physicians to choose between aspirin and warfarin for patients who do not present with cardiac risk factors. To help address these questions, another large clinical trial – the Warfarin versus Aspirin Recurrent Stroke Study (WARSS) – was initiated with NINDS support.
More than 2,000 individuals with a history of stroke unrelated to cardiac problems participated in this study, with equal groups receiving aspirin and warfarin. After two years of treatment, there was no significant difference between the aspirin and warfarin groups in the prevention of recurrent stroke or death, or in the rate of brain hemorrhage. This finding will likely have a major impact on the standard of care for this group of stroke survivors, since aspirin is considerably less expensive, safer, and easier to administer than warfarin.

The use of the "warning signs" of stroke to aid in prevention

Recently, NINDS-funded researchers evaluated the risk of stroke after a transient ischemic attack (TIA), or "mini-stroke." The symptoms of TIAs pass quickly, within a day or even hours, and are often ignored. After following 1,700 people with a TIA, the study found that these episodes warn of a dramatically increased likelihood of experiencing a stroke within the subsequent 90-day period. Other risk factors, such as advanced age, other health conditions, and severity of the TIA, also helped to predict stroke risk, and may be useful in determining whether patients should be hospitalized immediately and/or receive preventive interventions following a TIA.

The development of clinical tools that can be used to predict stroke recovery

In order to offer clinicians the best possible methods for evaluating patients after a stroke, intramural investigators at NINDS have explored the types of clinical measurements and diagnostic tools that might be used to predict how well a person will recover from a stroke. They found that the combined use of a unique type of magnetic resonance imaging, the score on the NIH Stroke Scale (a diagnostic tool developed at NINDS for evaluating stroke patients), and the time from the onset of symptoms to the brain scan can effectively predict the extent of stroke recovery.
Future studies will focus on the potential of computerized tomography (CT) scanning to predict recovery, as this technology is more commonly available in most hospitals. We expect that all of these tools will help physicians manage patients more efficiently and reduce distress and anxiety among patients and their families.

Over the last several years, research has revealed the remarkable extent of brain plasticity – that is, the capacity of the brain to change in response to experience or injury. Scientists are now using brain imaging techniques that reveal the activity of brain cells, as well as structure, to understand why some patients recover lost abilities following stroke and others do not. In other efforts, researchers are trying to apply what has been learned about brain plasticity to encourage stroke recovery through a method called "constraint-induced therapy." This therapy involves constraining an unaffected extremity while actively exercising the affected one, thereby inducing use-dependent brain reorganization.

The use of stem cells to treat stroke in animal models

Stem cells are immature cells that can multiply and form more specialized cell types. Recent animal studies have provided evidence that transplanted stem cells can help restore brain function after stroke. Other animal research suggests that the adult brain may itself have a latent capacity to regenerate new cells following stroke, which might be encouraged in efforts to repair the brain. The continuing efforts to develop these approaches to restoring function in stroke survivors build on active NINDS support for understanding the basic biology of animal embryonic stem cells and adult human stem cells. Within the President's policy guidelines, the Institute is encouraging research to evaluate the capabilities of human embryonic stem cells.
Current Stroke Initiatives

The generous appropriations provided by Congress have made it possible for us to expand our programs in stroke, and we are grateful for the opportunity. Since the doubling of the NIH budget began in FY 1999, the Institute has initiated many new clinical and basic science projects. Currently, the Institute is supporting 14 Phase III clinical trials in stroke, eight of which have been initiated since the start of the doubling effort. Even more importantly, the doubling effort has enabled NINDS to fund 17 Phase I and II clinical trials in stroke. These numbers are impressive and indicate that many novel prevention strategies, therapeutic interventions, and rehabilitation techniques for stroke are closer to the clinic as a result of the significant investments in NIH over the past several years. Areas of clinical research that are under exploration include the use of hypothermia to improve outcome following aneurysm surgery, the use of magnesium to treat stroke, and improvements in stroke imaging techniques. Several studies, including research in the NINDS intramural program at the NIH Clinical Center, are examining various strategies for rehabilitation after stroke including the use of constraint therapy, exercise, anesthesia, and electrical stimulation to improve functional recovery. NINDS also continues to be committed to exploring stroke at the basic science level, and has provided funding for many new projects since the doubling effort began. These include studies of procedures and drugs that may protect the brain against further injury, a possible vaccine for stroke, the role of inflammation, the expression of genes and proteins in response to stroke, and pre-clinical testing of therapies – just to name a few. Cellular "communications" between blood vessels, neurons, and glia, and the role of the blood-brain barrier, are also subjects of intense interest.
In addition to studies specifically targeted to stroke, NINDS also provides support for many areas of basic neuroscience research that have broad applicability to stroke and other brain injuries. These include mechanisms of cell survival and death, neural growth factors, stem cell therapy, neuronal plasticity, and glial cell biology. In addition to the investigator-initiated projects that make up the core of our grant programs, NINDS is constantly looking for understudied areas in stroke research that the Institute could address through the use of targeted initiatives. Several years ago, NINDS identified a need for acute stroke centers, and in May 2001, we issued a grant solicitation for Specialized Programs of Translational Research in Acute Stroke (SPOTRIAS). The goal of the SPOTRIAS program is to reduce disability and mortality in stroke patients, by promoting rapid diagnosis and effective interventions. It will support a collaboration of clinical researchers from different specialties whose collective efforts will lead to new approaches to early diagnosis and treatment of acute stroke patients. In its report language for the Institute's FY2001 appropriation, the Senate also encouraged the creation of acute stroke research or treatment research centers to provide rapid, early, continuous 24-hour treatment to stroke victims, and noted that a dedicated area in a medical facility with resources, personnel and equipment dedicated to treat stroke, would also provide an opportunity for early evaluation of stroke treatments. The SPOTRIAS program is responsive to the recommendation highlighted by the Senate. Institutions supported under this program must be able to deliver rapid treatment for acute stroke and to conduct the highest quality translational research on the diagnosis and treatment of acute ischemic and hemorrhagic stroke. They will also help to recruit and train the next generation of stroke researchers. 
The SPOTRIAS initiative will facilitate the translation of basic research findings into clinical research, and ultimately, the incorporation of clinical research findings into clinical practice. The first two centers have recently been approved for funding under this program, and as more centers are added, it is expected that they will form a national network that will lead to significant changes in the care of stroke patients. On a more local level, NINDS is also developing the "Acute Brain Attack Research Program" in the Baltimore-Washington Area. This effort has already established a 24-hour stroke research program in diagnosis and treatment at Suburban Hospital in Bethesda, Maryland, and our plan is to replicate this program in other medical facilities in the Baltimore-Washington metropolitan area, next targeting those serving predominantly inner city minority populations.

Stroke Research Planning

While a significant knowledge base about stroke has been amassed through research supported by the NINDS, continually emerging discoveries and new technologies create constantly increasing research needs and scientific opportunities. These opportunities, coupled with the increases in the NINDS budget as a result of the recent NIH doubling effort, make it necessary to identify clear scientific priorities so that the Institute can determine the best uses for its resources. Such priorities will also serve as benchmarks for the broader scientific community against which progress can be measured. NINDS convened a Stroke Progress Review Group (Stroke PRG) to identify priorities in stroke research. The Stroke PRG had its origins in Fiscal Year 2001 report language from the House and Senate Appropriations Committees to the NINDS urging us to develop a national research plan for stroke.
Following on the success of the Brain Tumor Progress Review Group, a joint collaboration between NINDS and the National Cancer Institute to identify priorities for research on brain tumors, NINDS decided to use a Progress Review Group to develop a plan for stroke research. Members of the Stroke PRG include approximately 140 prominent scientists, clinicians, consumer advocates (including leaders from the American Stroke Association and the National Stroke Association), industry representatives, and participants from other NIH Institutes. Together, these individuals represent the full spectrum of expertise required to identify and prioritize scientific needs and opportunities that are critical to advancing the field of stroke research. At the Stroke PRG Roundtable meeting in July 2001, and in many subsequent discussions, the Stroke PRG report was developed – a comprehensive document that identifies the national needs and opportunities in the field of stroke research. The final draft of this report was submitted for deliberation and acceptance by the National Advisory Neurological Disorders and Stroke Council in February, and the final report was published in April 2002. The PRG report will be widely disseminated to the stroke community, and is available online at www.ninds.nih.gov (Search: Stroke PRG); copies were provided to the Committee earlier this week. Several areas of scientific need are identified in the Stroke PRG report, but five consensus priorities emerged from the PRG, and participants also identified a number of scientific resource priorities. The full PRG report expands on all of these issues, and provides in-depth analysis of the status of 15 different fields of stroke research. As we move forward from the planning process into the implementation phase, the Stroke PRG members will work with NINDS staff to "map" the Institute's current stroke research efforts to the recommendations of the report.
Using this approach, we will be able to identify existing research gaps and target resources, and to incorporate these into a formal implementation plan.

Health Disparities in Stroke

NINDS recognizes that stroke is one of several neurological disorders that have a disproportionate effect on minority and underserved populations. For example, African Americans are twice as likely to die of stroke or complications from stroke as people in any other racial or ethnic group in the country, and Hispanics have a stroke rate two times higher than that of Caucasians. For this reason, we have identified stroke as a critical health disparities issue in several Institute planning efforts: health disparities in stroke were considered an over-arching issue by the Stroke PRG panel; stroke is one of the top research priorities in the NINDS Five-Year Strategic Plan on Minority Health Disparities; and the Institute is also in the process of establishing a planning panel that will specifically address health disparities in stroke. The NINDS is also working to establish prevention/intervention research networks throughout the extramural community, particularly in regions of the "Stroke Belt," an area in the Southeastern U.S. with stroke mortality rates approximately 25 percent above the rest of the nation. The goal is to foster stronger linkages between investigators at minority and majority institutions and community-based organizations in order to improve minority recruitment and retention in clinical studies – as one way of addressing health disparities. As part of this program, NINDS, working with the National Heart, Lung and Blood Institute (NHLBI) and the National Center for Research Resources, is developing the "Stroke and Cardiovascular Prevention-Intervention Research Program." The pilot phase of this program is at the Morehouse School of Medicine in Atlanta, Georgia.
In addition to these programs, NINDS supports a number of ongoing clinical projects that specifically address stroke in minority populations, including a new study that will examine the phenomenon of the "Stroke Belt." In this study, the role of geographic and racial differences as contributors to differential mortality rates will be examined and risk factors estimated. We are also engaged in targeting special public education efforts to minority populations, as I will describe later in my testimony.

Stroke in Women

In addition, we recognize that stroke is a major health problem for women. To address this critical research area, NINDS is supporting studies that will help us to better understand gender differences in stroke. Specific projects include a clinical study to determine if hormone replacement therapy affects stroke severity, and a study examining blood flow in the brain and the role of female hormones in protecting brain tissue during ischemia. In all clinical trials, we ensure that appropriate numbers of women are enrolled, and many of these trials involve specific analyses to examine the effects of the intervention tested in the female participants. For example, we are currently supporting a clinical study that is comparing the efficacy of two procedures – carotid endarterectomy and carotid stenting – that unblock a clogged carotid artery in the neck, a significant risk factor for stroke. Previous research has shown that women may not benefit from carotid endarterectomy as much as men do, so one facet of the trial will examine gender differences in these procedures.

Education and Outreach Programs

NINDS recognizes that supporting research into new prevention strategies and treatment options is only part of the battle in reducing the health burden of stroke. Helping people to recognize that they are having a stroke, so that they can seek help immediately, is a critical first step.
To address this problem, the NINDS directs an extensive health promotion effort to raise awareness of the signs and symptoms of stroke, the need for urgent action if experiencing a stroke, and the possibility of a positive outcome with timely hospital treatment. In May 2001, the NINDS launched the "Know Stroke. Know the Signs. Act in Time" campaign, a multi-faceted public education campaign to educate people about how to recognize stroke symptoms, and then to call 911 to get to a hospital quickly for treatment. The campaign's target audiences are those most at-risk for stroke – primarily people over the age of 50 – and their family members, caregivers and health care providers. Because stroke attacks the brain, a stroke patient often cannot act alone to call 911 and seek medical treatment, so bystanders are integral to acting quickly and getting stroke patients to the hospital. For this activity, the NINDS developed a wide variety of public education materials including airport dioramas jointly sponsored with the National Stroke Association, billboard displays, an award-winning eight minute film, consumer education brochures, exhibits, and new radio and television public service announcements (PSAs). All indications are that the "Know Stroke" campaign has been extremely well-received and effective. The television PSA garnered more than 87 million viewer impressions and hundreds of thousands of dollars worth of free broadcast time; the radio PSAs received more than 46,000 broadcasts on 272 stations; the airport dioramas received more than 800 million annual impressions; and thousands of nursing homes, hospitals, senior centers and other organizations have received consumer education materials. All of our public education strategies are designed to increase awareness of stroke. However, since the problem of stroke is even more acute in the African American and Hispanic communities, some are targeted to specific at-risk minority communities. 
These campaigns started with outreach to the media in May 2002 for Stroke Awareness Month and, in the coming months and years, will include public service advertising and grassroots community education components. NINDS also co-sponsored a "Stroke Sunday" program in October 2000, with the American Stroke Association and the Black Commissioned Officers' Advisory Group of the U.S. Public Health Service. This program was led by the former U.S. Surgeon General, Dr. David Satcher, and I participated on behalf of the NINDS. Held at a Rockville, Maryland church, the event was designed to bring attention to the major impact of stroke in the African American community and to help inform participants about reducing their stroke risk. NINDS also participates in "Operation Stroke," a coalition of health care professionals, allied health providers, civic leaders and representatives of community organizations for stroke education. This effort is being coordinated by the American Stroke Association, and is aimed at the public as well as medical professionals. An intramural investigator at NINDS, who is a stroke clinician, is chairing this coalition in the greater D.C. and Maryland suburban areas. Finally, NINDS has held several meetings and workshops to help educate health care professionals about advancements in stroke research, like t-PA. For example, our Institute held a major national scientific meeting after the publication of the t-PA study that involved more than 400 medical professionals. We plan to convene another conference later this year to revisit stroke treatment, and to explore how more people can be encouraged to recognize stroke as an emergency medical situation. The Institute hopes to use this symposium to educate healthcare professionals about the benefits of early treatment for all stroke patients. 
In addition, NINDS scientists speak at medical meetings all over the country in order to educate physicians about effective stroke care, and our grantees produce educational videos and offer continuing medical education courses on proper administration of t-PA. To complement these efforts, NINDS also distributes free copies of the NIH Stroke Scale. As part of our ongoing prevention efforts, we have formed collaborative relationships with other NIH Institutes and federal agencies, and numerous voluntary organizations. NINDS coordinates the Brain Attack Coalition – a group of professional, voluntary, and government groups dedicated to reducing the occurrence, disabilities, and death associated with stroke – to increase awareness of stroke symptoms. To encourage improvements in stroke care, the Brain Attack Coalition published an article in June 2000 designed to help physicians and hospitals set up stroke centers. In February 2001, the NINDS signed a memorandum of understanding (MOU) with NHLBI, the Centers for Disease Control and Prevention (CDC), the HHS Office of Disease Prevention and Health Promotion, and the American Heart Association to foster cooperation in reaching the heart disease and stroke goals for the nation articulated in the Healthy People 2010 initiative. These goals include: the prevention of risk factors for cardiovascular disease (CVD) and stroke; the detection and treatment of risk factors; the early identification and treatment of CVD and stroke, especially in their acute phases; and the prevention of recurrent CVD and stroke, and their complications. 
In order to achieve these goals, we will work with the participating partners on focused initiatives such as population- and community-based public education and health promotion programs; activities to bring about improvements in the nation's cardiovascular health care delivery systems; media-based public awareness campaigns about the warning signs and symptoms of heart attack and stroke; promoting professional education and training, and other activities. CDC has already used our public education materials in cooperation with their networks, and we are enthusiastic about this partnership, and anticipate that it will continue for the next several years. NINDS is also participating in the development of a comprehensive National Action Plan for Cardiovascular Health - A Comprehensive Public Health Strategy to Combat Heart Disease and Stroke. This planning process was initiated last year by the CDC. It will chart a course for the CDC with the states, territories and other partners – including public health agencies, health care providers, and the public – for achieving national goals for heart disease and stroke prevention over the next two decades. The pillars of this public health strategy incorporate the three core functions of public health: assessment, policy development, and assurance. NINDS has made, and continues to make, significant contributions to the achievements in stroke prevention, treatment, and rehabilitation, and we are extremely proud of our accomplishments. However, the incremental nature of progress in stroke prevention has confirmed that there is no easy route to success. There are still difficult challenges to be addressed, and we have invested more than a year in gathering recommendations from the best clinicians and researchers in the field, as well as our committed partners in the advocacy community, in order to help us make the best use of our resources. 
Our planning efforts tell us we must continue to pursue, in parallel, several areas of basic, translational, and clinical research that may have an impact on stroke. We must find better ways to prevent strokes before they occur. We must improve upon and encourage acceptance of pioneering diagnostic tools and acute treatments for when stroke happens. We must capitalize on the prospect, for the first time, of actually repairing the brain damaged by stroke and recovering function. The broad portfolio of NINDS research on stroke offers a glimpse of what the future might bring – the possibility of vaccines, genetic tests to tailor preventive measures for each individual, studies that may link infections or inflammation within blood vessels to stroke, biological markers that could aid in the identification of stroke risk, and new information about how chronic stress and hormones may affect susceptibility to stroke damage. Encouraged by the recent progress in neuroscience, guided by extensive and inclusive planning, and enabled by the support from Congress, I assure you that NINDS is committed to pursuing all of these opportunities to alleviate the devastating effects of stroke on our society. Thank you again for the opportunity to speak with you today. I would be happy to answer any questions you may have.

Last revised: June 28, 2002
Items in AFP with MESH term: Osteoporosis, Postmenopausal

ABSTRACT: Osteoporosis is characterized by low bone mineral density and a deterioration in the microarchitecture of bone that increases its susceptibility to fracture. The World Health Organization defines osteoporosis as a bone mineral density that is 2.5 standard deviations or more below the reference mean for healthy, young white women. The prevalence of osteoporosis in black women is one half that in white and Hispanic women. In white women 50 years and older, the risk of osteoporotic fracture is nearly 40 percent over their remaining lifetime. Of the drugs that have been approved for the prevention or treatment of osteoporosis, the bisphosphonates (risedronate and alendronate) are most effective in reducing the risk of vertebral and nonvertebral fractures. Risedronate has been shown to reduce fracture risk within one year in postmenopausal women with osteoporosis and in patients with glucocorticoid-induced osteoporosis. Hormone therapy reduces fracture risk, but the benefits may not outweigh the reported risks. Teriparatide, a recombinant human parathyroid hormone, reduces the risk of new fractures and is indicated for use in patients with severe osteoporosis. Raloxifene has been shown to lower the incidence of vertebral fractures in women with osteoporosis. Salmon calcitonin is reserved for use in patients who cannot tolerate bisphosphonates or hormone therapy.

Fracture Prevention in Postmenopausal Women - Clinical Evidence Handbook
Alendronate for Fracture Prevention in Postmenopause - Cochrane for Clinicians

Soy: A Complete Source of Protein - Article
ABSTRACT: Soybeans contain all of the essential amino acids necessary for human nutrition and have been grown and harvested for thousands of years. Populations with diets high in soy protein and low in animal protein have lower risks of prostate and breast cancers than other populations. Increasing dietary whole soy protein lowers levels of total cholesterol, low-density lipoproteins, and triglycerides; may improve menopausal hot flashes; and may help maintain bone density and decrease fractures in postmenopausal women. There are not enough data to make recommendations concerning soy intake in women with a history of breast cancer. The refined soy isoflavone components, when given as supplements, have not yielded the same results as increasing dietary whole soy protein. Overall, soy is well tolerated, and because it is a complete source of protein shown to lower cholesterol, it is recommended as a dietary substitution for higher-fat animal products.

Diagnosis and Treatment of Osteoporosis - Article
ABSTRACT: Osteoporosis affects approximately 8 million women and 2 million men in the United States. The associated fractures are a common and preventable cause of morbidity and mortality in up to 50 percent of older women. The U.S. Preventive Services Task Force recommends using dual energy x-ray absorptiometry to screen all women 65 years and older and women 60 to 64 years of age who have increased fracture risk. Some organizations recommend considering screening in all men 70 years and older. For persons with osteoporosis diagnosed by dual energy x-ray absorptiometry or previous fragility fracture, effective first-line treatment consists of fall prevention, adequate intake of calcium (at least 1,200 mg per day) and vitamin D (at least 700 to 800 IU per day), and treatment with a bisphosphonate. Raloxifene, calcitonin, teriparatide, or hormone therapy may be considered for certain subsets of patients.

Screening for Osteoporosis in Postmenopausal Women: Recommendations and Rationale - U.S. Preventive Services Task Force
Postmenopausal Osteoporosis and Estrogen - Editorials
Screening for Osteoporosis in Postmenopausal Women - Putting Prevention into Practice
ACOG Releases Guidelines for Clinical Management of Osteoporosis - Practice Guidelines
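The WHO definition quoted above is a simple threshold on the T-score: the number of standard deviations a patient's bone mineral density falls below the young-adult reference mean. A minimal sketch in Python; the reference mean and standard deviation below are made-up illustrative numbers, not real densitometer calibration data:

```python
def t_score(bmd: float, ref_mean: float, ref_sd: float) -> float:
    """Standard deviations from the young-adult reference mean."""
    return (bmd - ref_mean) / ref_sd

def classify(t: float) -> str:
    """WHO category: osteoporosis at T <= -2.5; the osteopenia band is the
    commonly used companion threshold, not stated in the abstract above."""
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal"

# Hypothetical lumbar-spine value in g/cm^2 against invented reference numbers.
t = t_score(bmd=0.70, ref_mean=1.0, ref_sd=0.1)
print(classify(t))  # osteoporosis (T is about -3.0)
```

Only the 2.5-standard-deviation cutoff comes from the text; everything else in the sketch is illustrative.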
By Flavio Guzmán, M.D.

Beta receptors are a subtype of adrenergic receptor (adrenoceptor); their activation triggers a sympathomimetic (adrenergic) response. This article gives an overview of their physiology and pharmacology. A related article discusses alpha receptors.

Structure and general characteristics

Beta receptors are G-protein-coupled receptors that act by activating a Gs protein. Gs activates adenylyl cyclase, leading to an increase in levels of intracellular cAMP. Increased cAMP activates protein kinase A, which phosphorylates cellular proteins. Beta receptors are characterized by a strong response to isoproterenol, with less sensitivity to epinephrine and norepinephrine. The rank order of potency is: isoproterenol > epinephrine > norepinephrine. Beta receptors are subdivided into three subtypes (beta 1, 2, and 3), a division based mainly on their affinities for adrenergic agonists and antagonists.

Beta 1 receptors are located mainly in the heart and the kidney; their main effects are the following.

- Increase in chronotropy (heart rate) and inotropy (force of contraction). Tachycardia results from a Beta 1 mediated increase in the rate of phase 4 depolarization of sinoatrial node pacemaker cells. The inotropic effect is mediated by increased phosphorylation of Ca++ channels, including calcium channels in the sarcolemma and phospholamban in the sarcoplasmic reticulum.
- Increase in AV-node conduction velocity. A Beta 1 stimulated increase in Ca++ entry increases the rate of depolarization of AV node cells.
- Renin release. Beta 1 receptors are present mainly on juxtaglomerular cells, where receptor activation causes renin release.

Beta 2 receptors are covered here in two groups. The first covers the effects of Beta 2 activation in the respiratory and reproductive systems, treated separately because of the clinical relevance of Beta 2 agonists in clinical practice; the second covers the remaining sympathomimetic effects elicited by Beta 2 receptor activation in other systems.

Bronchial smooth muscle: Beta 2 receptor activation promotes bronchodilation, a physiological property exploited by the inhaled Beta 2 agonists used in the treatment of asthma and COPD. Drugs in this category include salbutamol (albuterol in the US), salmeterol, formoterol, and terbutaline.

Uterine muscle: drugs that bind to Beta 2 receptors (Beta 2 agonists) are used in the treatment of premature labour; this clinical application illustrates how Beta 2 receptors mediate tocolysis in the uterine muscle. Ritodrine is an example of a tocolytic drug.

Bladder detrusor muscle: adrenergic activation of Beta 2 receptors at the bladder promotes relaxation. Bladder constriction is activated by the parasympathetic system; therefore, drugs that activate muscarinic receptors, such as bethanechol, are used in the treatment of urinary retention.

Eye ciliary muscle: this muscle controls eye accommodation and regulates the flow of aqueous humour. Its sympathetic innervation is mediated by Beta 2 receptors.

GI tract: adrenergic activation of the gastrointestinal tract produces a slowing of peristaltic movements (decreased motility) and decreased secretions. These changes are mediated by Beta 2 receptors.

Liver: hyperglycemia and lipolysis occur when Beta 2 receptors are activated; glucose production is increased through gluconeogenesis and glycogenolysis.

Vascular smooth muscle: while Alpha 1 receptors mediate vasoconstriction, Beta 2 receptors induce vasodilation in muscle and liver.

It has recently been proposed that Beta 3 receptors are linked to the regulation of body weight. Located mainly in adipose tissue, Beta 3 receptors promote lipolysis.
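The subtype/location/effect relationships described above can be condensed into a small lookup table, useful as a study aid. The entries below simply paraphrase this article and are not an exhaustive pharmacology reference:

```python
# Study-aid summary of beta-adrenoceptor subtypes as described in the article.
BETA_RECEPTORS = {
    "beta1": {
        "locations": ["heart", "kidney (juxtaglomerular cells)"],
        "effects": [
            "increased heart rate (chronotropy)",
            "increased force of contraction (inotropy)",
            "increased AV-node conduction velocity",
            "renin release",
        ],
    },
    "beta2": {
        "locations": ["bronchial smooth muscle", "uterus", "bladder detrusor",
                      "eye ciliary muscle", "GI tract", "liver",
                      "vascular smooth muscle"],
        "effects": [
            "bronchodilation",
            "tocolysis (uterine relaxation)",
            "bladder relaxation",
            "decreased GI motility and secretions",
            "hyperglycemia and lipolysis",
            "vasodilation in muscle and liver",
        ],
    },
    "beta3": {
        "locations": ["adipose tissue"],
        "effects": ["lipolysis"],
    },
}

def effects_of(subtype: str) -> list:
    """Return the listed effects for a receptor subtype key."""
    return BETA_RECEPTORS[subtype]["effects"]

print(effects_of("beta3"))  # ['lipolysis']
```

All three subtypes share the Gs / adenylyl cyclase / cAMP / protein kinase A cascade; the table only captures where the downstream effects differ.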
A 3D Mammogram

Doctors agree that early detection is the best defense against breast cancer. If we find cancer in its earliest stages, the chances of surviving it are good. Until now, the best way to do that has been with digital mammography. Today, a new technology called breast tomosynthesis – or 3D mammography – will help doctors find very small cancers and rule out "false positives," reducing the number of women who are called back for diagnostic mammograms. Breast tomosynthesis allows doctors to examine breast tissue one layer at a time. It may be used in conjunction with traditional digital mammography as part of your annual screening mammogram to capture more breast images. Very low X-ray energy is used during the screening examination, so your radiation exposure is safely below the American College of Radiology (ACR) guidelines.

What is breast tomosynthesis or 3D mammogram?

Breast tomosynthesis uses high-powered computing to convert digital breast images into a stack of very thin layers or "slices" – building what is essentially a "3-dimensional mammogram." During the tomosynthesis part of the exam, the X-ray arm sweeps in a slight arc over the breast, taking multiple breast images in just seconds. A computer then produces a 3D image of your breast tissue in one-millimeter layers. Now the radiologist can see breast tissue detail in a way never before possible. Instead of viewing all the complexities of your breast tissue in a flat image, the doctor can examine the tissue a millimeter at a time. Fine details are more clearly visible, no longer hidden by the tissue above and below.

How is my exam different?

A 3D mammogram exam is very similar to a traditional mammogram. Just as with a digital mammogram, the technologist will position you and compress your breast under a paddle and take images from different angles.
A 3D mammogram exam may be used as a screening tool in conjunction with a traditional digital mammogram or may be used by itself for a diagnostic mammogram. During the tomosynthesis portion of the exam, your breast will be under compression while the x-ray arm of the mammography machine makes a quick arc over the breast, taking a series of breast images at a number of angles. This will only take a few seconds and all of the images are viewed by the technologist at their computer workstation to ensure they have captured adequate images for review by a radiologist. The whole procedure time should be approximately the same as that of a digital mammogram. The technologist sends your breast images electronically to the radiologist, who studies them and reports results to either your physician or directly to you.
Tips from Other Journals White Coat Hypertension: Not a Benign Condition Am Fam Physician. 1999 Jan 15;59(2):455. Patients whose blood pressure readings are elevated when measured in a clinical setting but are within normal range in nonmedical environments are considered to have “white coat,” or transient, hypertension. Although it traditionally has been regarded as a benign condition, recent studies have revealed correlations between white coat hypertension and unfavorable risk factors in young adults. Muscholl and colleagues studied left ventricular structure and function in patients with white coat hypertension. During the community-based study, data were collected, and echocardiographic studies were performed on 845 men and 832 women who were 25 to 74 years of age. Repeated blood pressure readings were taken by technicians, using standardized protocols. The technicians did not wear white coats, and the environment was structured to appear informal and nonclinical. Patients were not told their blood pressure levels until three readings had been taken. Within 60 minutes of the technicians' readings, blood pressure levels were measured by a physician wearing a white coat who was introduced as a cardiologist. Echocardiography was performed immediately before the blood pressure measurement was taken by the physician. Patients meeting the criteria for white coat hypertension had blood pressure readings that were normal (less than 140/90 mm Hg) when taken by a technician but reached hypertensive levels (160/95 mm Hg or higher) when recorded by a physician. The prevalence of white coat hypertension was higher in men (10.9 percent) than in women (8.2 percent); overall, 10 percent of the study population displayed the condition. Even after adjusting for age, sex and body mass index, patients with white coat hypertension had significantly increased left ventricular mass indexes. 
The increased cardiac mass was due to increased thickness of the posterior wall and the septum of the left ventricle. Patients with white coat hypertension did not show abnormal systolic function of the left ventricle and had no abnormalities of diastolic filling or left atrial size. The authors conclude that white coat hypertension is a common condition that can no longer be dismissed as benign or as having no consequence. They believe the finding is related to an increased risk of left ventricular hypertrophy and cardiac remodeling. They advocate more intense monitoring and control of cardiovascular risk factors in adults found to have elevated blood pressure readings exclusively in clinical settings. Muscholl MW, et al. Changes in left ventricular structure and function in patients with white coat hypertension: cross sectional survey. BMJ. August 29, 1998;317:565–70. Copyright © 1999 by the American Academy of Family Physicians.
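For illustration, the study's two-reading criterion can be expressed as a short check (an illustrative sketch, not a clinical tool; the function name is invented, and treating "160/95 mm Hg or higher" as met when either the systolic or the diastolic threshold is reached is an assumption):

```python
def white_coat_hypertension(technician_bp, physician_bp):
    """Classify white coat hypertension from paired (systolic, diastolic)
    readings in mm Hg, following the Muscholl et al. criteria: normal
    (<140/90) when taken by a technician, but hypertensive (>=160/95)
    when recorded by a physician."""
    t_sys, t_dia = technician_bp
    p_sys, p_dia = physician_bp
    normal_with_technician = t_sys < 140 and t_dia < 90
    # Assumption: "160/95 or higher" is read as either threshold being met.
    hypertensive_with_physician = p_sys >= 160 or p_dia >= 95
    return normal_with_technician and hypertensive_with_physician
```

For example, a reading of 130/80 mm Hg with the technician followed by 165/92 mm Hg with the physician would meet the criteria, while 150/95 mm Hg with the technician would not, because the out-of-office reading is already elevated.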
Feline hypertrophic cardiomyopathy (HCM) is far and away the most common form of heart disease in the cat. A diagnosis of HCM means that a primary disease process is causing the myocytes of the heart to behave inappropriately, leading to enlargement of the heart, primarily of the left ventricle (the main muscular chamber that pumps blood to the body). Secondary hypertrophic diseases of the heart may be caused by hyperthyroidism or hypertension and lead to signs that mimic HCM, but if addressed early they may be reversible by treating the underlying condition. Primary HCM is not reversible, and has been shown to have a genetic link, particularly in Maine Coons. Unfortunately, genotyping is not yet available. It is not yet possible to isolate the gene that causes HCM in cats, but through studying family trees it has been shown to be inherited as an autosomal dominant trait in some Maine Coons and likely in other breeds as well. In cats, HCM presents with a high degree of phenotypic heterogeneity from patient to patient. In all cases the left ventricle develops concentric hypertrophy, but this is not always uniform or symmetric. The areas that may be affected are the septum, the free wall of the ventricle, or only sections of either or both. It is also possible that only the myocytes of the papillary muscles (which control the mitral valve) will be affected. In all cases, diastolic (filling) function is compromised. The result is a heart chamber that cannot receive as much blood as it needs to. This is referred to as a decreased pre-load. A decreased pre-load causes a decreased stroke volume. In order to compensate for a decreased stroke volume and maintain an adequate total cardiac output, the heart must pump harder and faster to feed the body.
This results in a snowball effect, where the compensation mechanism actually reduces compliance and pre-load even further, while increasing the need for a higher stroke volume to feed the overworking heart itself. The compensation mechanism is good for the patient in the short term, but in the long run it will usually snowball into congestive heart failure if left untreated. Approximately 10% of HCM patients progress to a "burnout" phase in which the left ventricle "quits" and dilates (causing systolic dysfunction), but the vast majority show such extensive thickening, in the effort to pump harder, that the ventricle stiffens and can no longer stretch to receive blood from the atrium. The backup of blood in the atrium can cause blood stasis, which allows clot formation and possible thromboembolism, leading to sudden paralysis or death. Blood backing up in the lungs under high pressure has nowhere to go but out of the vessels, causing pulmonary edema. The prognosis of the HCM patient is as varied as the phenotypic presentations, and treatment can be complex. One type of HCM that should be mentioned briefly, before discussing treatment, is hypertrophic obstructive cardiomyopathy (HOCM). This form of HCM shows both diastolic and systolic dysfunction, and occurs when the hypertrophied myocytes reduce the diameter of the outflow tract, dynamically or statically. When this happens, stroke volume is reduced because the heart is trying to push blood out through a smaller opening. The result is the same: the heart must pump harder and faster to bring total cardiac output up, to feed the body as well as the heart itself, whose oxygen demand increases as its workload increases. Treatment of asymptomatic patients with HCM is controversial. Many veterinarians and researchers have concluded that it is important to diagnose HCM early, through echocardiography, EKG, and blood troponin levels, and to treat the patient with cardioprotective drugs.
Since the first sign of HCM in approximately 30% of cats is sudden death, this opinion is gaining popularity. In the obstructive form, HOCM, the beta blocker atenolol has been widely accepted as the drug of choice. Atenolol specifically inhibits sympathetic activation of the receptors in the heart that directly increase contractility and heart rate. This is the same receptor that is overstimulated in hyperthyroid patients, so atenolol is a good choice for HCM, HOCM, and also secondary hypertrophy caused by hyperthyroidism. Especially important in HOCM, atenolol reduces dynamic outflow obstruction. In all hypertrophic disorders, the use of atenolol slows the snowball effect of excessive hypertrophy and thickening of the heart, and prolongs the time it takes to reach congestive heart failure. After administration of beta blockers, peripheral blood pressure decreases, which reduces the afterload the heart must overcome with each stroke, increases the ejection fraction, and serves as a less damaging compensatory mechanism to increase cardiac output. Finally, atenolol reduces the oxygen demand of the myocytes by decreasing heart rate and contractility, which decreases the need for a high cardiac output to deliver oxygen. All beta blockers are considered cardioprotective, because they protect the heart muscle itself from the injurious effects of the sympathetic nervous system on the heart. However, beta blockers should NEVER be given to a patient suffering from congestive heart failure until the edema and effusion have been controlled first. Angiotensin-converting enzyme inhibitors (ACEIs), such as enalapril, are also considered cardioprotective. This class of drugs is indicated in patients suffering from congestive heart failure (CHF), and may also be helpful in asymptomatic patients (still controversial). When treating CHF cats, enalapril is usually combined with the diuretic furosemide to clear edema or effusion.
The mechanism of action of ACEIs is to inhibit the formation of angiotensin II, a step in a multistage pathway called the renin-angiotensin-aldosterone system, or RAAS. The general effects of RAAS are to vasoconstrict (increasing blood pressure and renal blood flow), increase stimulation of the sympathetic nervous system (increasing heart rate), increase aldosterone release (antidiuretic effects that increase blood volume and blood pressure), and increase fibrin formation (clots and scar tissue in the heart). By inhibiting this pathway, all of the above are reduced. Lowered arterial blood pressure, lowered fluid retention (especially when combined with a diuretic), lowered sympathetic stimulation, and reduced fibrosis of the heart occur with the use of enalapril. The reduced fibrosis is especially useful in limiting excessive stiffening of the chamber walls. Fibrosis that has already occurred is not reversible, but its progression may be slowed. There are contraindications for the use of ACEIs, however. Some patients depend on RAAS to maintain renal blood flow and glomerular filtration, and may be thrown into an ACEI-induced renal failure. ACEIs are valuable in a high proportion of patients, but should be used cautiously for this reason. For a patient that presents with acute CHF, the initial in-hospital therapy is to reduce stress and avoid unnecessary procedures while allowing three important treatments to work. Furosemide (a diuretic) reduces pulmonary edema, oxygen therapy reduces the need for high cardiac output, and nitroglycerin is used as a powerful vasodilator to reduce hypertension. This triple combination is commonly used to stabilize a patient with acute CHF due to HCM. Mild sedation is often used to decrease stress and heart rate. Once the patient is stabilized, chronic treatment protocols can be determined.
At this point, treatment is complex because of the phenotypic variation from patient to patient; there is no single home care regimen appropriate for every cat. Furosemide and ACEIs are very common, but both have indications and contraindications, and both must be monitored carefully to avoid overdose effects. Atenolol is widely accepted as a helpful therapy in HCM and HOCM once CHF has been controlled with furosemide and enalapril, but not in cats with concurrent renal failure, because atenolol is cleared primarily by the kidney. In cats that suffer from thromboemboli, aspirin or clopidogrel is often used, but these must be used cautiously and sparingly, and may not be mixed with many other medications. Aspirin should never be given to a cat without a prescription, in any case. A calcium channel blocker such as diltiazem may be selected to improve diastolic function (increase relaxation). If the disease has progressed to the point that the atrium is so dilated that it cannot pump and force blood into the resistant ventricle, then a positive inotropic drug such as digoxin may be necessary, despite the fact that positive inotropes increase contractility and are not cardioprotective. Patients that have advanced in this way have an uncertain prognosis. Chronic treatment of HCM and CHF is complex and varies depending on the specific signs exhibited by the patient and on the findings of advanced diagnostics, especially Doppler echocardiography. Treatment of the asymptomatic cat is controversial, but once any signs begin, it is widely accepted that beta blockers and ACE inhibitors have cardioprotective effects. All treatments must be carefully tailored to the specific signs and symptoms of each patient, and all must be monitored and titrated to effect.
Calculate estimated GFR (eGFR) from serum creatinine levels to assess kidney function. Use of any serum creatinine-based estimate requires that kidney function be at a steady state. eGFR should be used with caution in acutely ill or hospitalized patients who may exhibit rapidly changing kidney function.

Patients under the age of 18: Calculate eGFR using the Schwartz equation. Caution: it is important to know the method used to measure creatinine in a blood, serum, or plasma sample, as it will affect the formula for estimating GFR in children.

Determine which calculator to use: Because mild and moderate kidney injury is poorly inferred from serum creatinine alone, NKDEP strongly recommends the use of either the MDRD Study or CKD-EPI equation to estimate GFR from serum creatinine in adults. NKDEP also encourages clinical laboratories to routinely estimate GFR and report the value when serum creatinine is measured for patients 18 and older, when appropriate and feasible. A laboratory that reports eGFR numeric values > 60 mL/min/1.73 m2 should use the CKD-EPI equation, because it is more accurate for values > 60 mL/min/1.73 m2 than the MDRD Study equation. However, the influence of imprecision in creatinine assays on the uncertainty of an eGFR value is greater at higher eGFR values and should be considered when assessing eGFR values > 60 mL/min/1.73 m2.

Estimated glomerular filtration rate (eGFR) calculated using either the MDRD Study equation or the CKD-EPI equation is an estimate of GFR, not the actual GFR. Both equations were derived from large population studies and will generate an estimate of the mean GFR in a population of patients with the same age, gender, race, and serum creatinine. However, the actual GFRs of those individuals will be distributed around that eGFR. An analogous estimate would be the estimated date of confinement for a pregnant woman based on her last menstrual period.
This is the best estimate of the delivery date but, in fact, only a small minority of women actually deliver on that date. When Not to Use the Creatinine-based Estimating Equations: Although the best available tool for estimating kidney function, eGFR derived from the MDRD Study or CKD-EPI equations may not be suitable for all populations. All creatinine-based estimates of kidney function are only useful when renal function is stable. Serum creatinine values obtained while kidney function is changing will not provide accurate estimates of kidney function. Additionally, the equations are not recommended for use with: Application of either the MDRD Study or CKD-EPI equation to these patient groups may lead to errors in GFR estimation. GFR-estimating equations have poorer agreement with measured GFR for ill hospitalized patients and for people with near normal kidney function than for the patients in the MDRD or CKD-EPI Study. Page last updated: April 28, 2015
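As a rough illustration of the two adult equations discussed above, the following sketch implements the 2009 CKD-EPI creatinine equation and the IDMS-traceable four-variable MDRD Study equation (illustrative only, not a clinical calculator; it assumes serum creatinine in mg/dL and omits the caveats about unstable kidney function noted above):

```python
def egfr_ckd_epi(scr_mg_dl, age, female, black=False):
    """2009 CKD-EPI creatinine equation, in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9        # sex-specific creatinine divisor
    alpha = -0.329 if female else -0.411  # sex-specific low-range exponent
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr


def egfr_mdrd(scr_mg_dl, age, female, black=False):
    """IDMS-traceable 4-variable MDRD Study equation, in mL/min/1.73 m^2."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr
```

For a 50-year-old man with a serum creatinine of 1.0 mg/dL, CKD-EPI gives roughly 87 and MDRD roughly 79 mL/min/1.73 m2, illustrating why a laboratory reporting values above 60 should prefer the CKD-EPI equation.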
School of Anatomy and Human Biology - The University of Western Australia Blue Histology - Vascular System more about ..... by Professor John McGeachie All blood vessels and lymphatics are lined by endothelial cells; the layer being called the endothelium. These extraordinary cells were once considered to be simple lining cells with very few functional roles, other than to keep cells within the blood from leaking out of the vessels. However, for some years research on endothelial cells has revealed that they have an amazing array of functional and adaptive qualities. Moreover, they are the key determinants of health and disease in blood vessels and play a major role in arterial disease. Endothelial cells are very flat, have a central nucleus, are about 1-2 µm thick and some 10-20 µm in diameter. They form flat, pavement-like patterns on the inside of the vessels and at the junctions between cells there are overlapping regions which help to seal the vessel (see Figure 1, a surface preparation of arterial endothelium). The blue endothelial nuclei are obvious and the cell boundaries are stained black with silver nitrate. The dark lines show the intercellular junctions. These intercellular junctions are critical for the integrity of the vessel. Figure 2 shows an electron micrograph of two endothelial cells overlapping. Toxic substances, such as nicotine, open up these junctions and allow large molecules to pass through the wall. Thus such toxins can potentiate degenerative changes in blood vessels and lead to serious disease (known as vascular disease). The cytoplasm is relatively simple with few organelles, mostly concentrated in the perinuclear zone. The most obvious feature is the concentration of small vesicles (pinocytotic vesicles) adjacent to the endothelial cell membranes. This is a mechanism for passing materials, especially fluid, across the cells from the blood stream to the underlying tissues. 
Gases simply diffuse through very rapidly, and this is exemplified in the lung capillaries where there is very efficient movement of gases (carbon dioxide, oxygen and anaesthetics etc). In some capillaries the endothelium is punctuated and has apparent holes or fenestrae ("windows") which allow for the rapid passage of large molecules (as in endocrine glands) or huge amounts of fluid (as in the kidney). In other organs the endothelium is very tightly sealed and there is highly selective transport across the membranes, as occurs in the brain as part of the blood-brain barrier. Endothelial cells are selective filters which regulate the passage of gases, fluid and various molecules across their cell membranes. Different organs have different types of endothelium: some leaky and some very tightly bound. The most exciting development in our understanding of endothelial cells concerns the knowledge of the cell surface molecules. These act as receptors and interaction sites for a whole host of important molecules, especially those that attract or repel white blood cells (leucocytes). Leucocyte adhesion molecules are important in inflammation and whilst these are normally repelled by endothelium, in order to allow the free flow of blood cells over the surface, in inflammatory states the leucocytes are actually attracted to the endothelium by adhesion molecules. They then pass through the endothelial cells by a process called diapedesis, which literally means "walking through" (the endothelial cell). Many leucocytes pass through endothelial cells, especially in capillaries as part of their normal life cycle, so that they can monitor foreign agents (antigens) in the tissues. Macrophages (scavenger cells) are mostly derived from monocytes, which are leucocytes produced in the bone marrow, travel in the blood and pass through endothelial cells to gain access to various tissues of the body. 
Another vitally important molecule synthesized in endothelial cells is Factor VIII, or von Willebrand factor. This is essential for the blood clotting reaction (haemostasis). In some unfortunate individuals there is a genetic absence of this factor, which leads to the life-threatening disorder called haemophilia, in which the sufferer may bleed to death from a simple scratch. Fortunately, Factor VIII can be administered to haemophiliacs at a time of crisis and the condition controlled. Endothelial cells are also responsive to local agents such as histamine, which is released when local tissues are damaged. Consequently, the endothelial cells open up their intercellular junctions and allow the passage of large amounts of fluid from blood plasma, so that the surrounding tissues become engorged with fluid and swollen: a condition called oedema. At the same time, large numbers of leucocytes escape and flood into the tissues. These events are the hallmarks of the inflammatory response. It is exemplified by a simple scratch on the skin or a splinter wound: the area quickly becomes reddened (opening up of capillaries) and swollen (oedema). Arterial disease is one of the greatest causes of morbidity (tissue damage and disability) and mortality (death) in the Western world. Whilst the incidence of death from this condition is decreasing (due to better education, diet, smoking reduction and life-style changes), it is still a major health problem. Endothelial cells play a crucial role in the initiation (pathogenesis) of arterial disease. The commonest form of disease is arteriosclerosis (literally "hardening of the arteries"). This may be due to a number of factors, but the most common is the deposition of cholesterol in the sub-endothelial layer of arteries.
It has long been realised that the endothelial cells become "injured", either physically by abrasion or by toxic insult (such as from nicotine), and large molecules which are normally confined to the blood are allowed to escape through the endothelium and become lodged among the smooth muscle cells in the arterial wall. Macrophages also pass through and accumulate fat (lipid and cholesterol) deposits. The most common name for this disease is atherosclerosis. The process is very slow, but there is a gradual accumulation of this fatty and fibrous material, which not only makes the normally elastic artery hard (sclerotic) but forms deposits, known as "plaques", that may encroach on the arterial lumen and cause turbulent blood flow. This may lead to a narrowing of the artery (stenosis) and facilitate the formation of a blood clot (thrombosis). Such serious consequences occur in "heart attacks", where the coronary arteries become narrowed or blocked, and likewise in some strokes, where arteries in the brain (cerebral arteries) become blocked by blood clots or atherosclerotic debris. This debris may originate from the large arteries in the neck (the carotids), which shed material from a plaque; it passes rapidly into the cerebral arteries and blocks the vessel whose lumen matches the size of the debris. This causes death of the part of the brain normally supplied by that artery, and the damage will depend upon which part of the brain is affected. Another common degenerative condition in arteries is an aneurysm, where the vessel wall becomes weakened by disease or develops as a result of a genetic deficiency in its structure. The artery becomes swollen and may rupture, causing a major crisis if the rupture occurs in a vital vessel, such as the aorta or a cerebral artery (see Figure 3, a radiograph of an aneurysm in an artery: the aneurysm is the black swelling in the centre of the picture).
In summary, endothelial cells play a vital role in the health and integrity of every tissue of the body because, apart from cartilage, every cell lies within a few µm of a capillary. These fine vessels are only 10-15 µm and consist merely of endothelial cells and a very fine layer of adjacent material, known as the basal lamina. page contents: John McGeachie page construction: Lutz Slomianka last updated: 02/09/98
Aug. 12, 2010
Current tools for combating malaria, such as artemisinin-combination therapy and increasing coverage of long-lasting insecticide-treated bednets, can produce major reductions in Plasmodium falciparum malaria transmission and the associated disease burden in Africa. Furthermore, if such interventions can be rolled out as a comprehensive and sustained intervention program, a parasite prevalence threshold of 1% may be achievable in areas with low to moderate malaria transmission where mosquitoes mainly rest indoors. These are the findings of a modeling study by Jamie Griffin and colleagues from Imperial College London and the London School of Hygiene and Tropical Medicine, published in PLoS Medicine. The authors reached these conclusions by developing a mathematical simulation model of P. falciparum transmission in Africa, which incorporated three major types of mosquito, parasite prevalence data from 34 areas of Africa with differing P. falciparum malaria transmission levels, and the effect of switching to artemisinin-combination therapy and increasing coverage of long-lasting insecticide-treated bednets. The authors then explored the impact on transmission of continued roll-out of long-lasting insecticide-treated bednets, additional rounds of indoor residual spraying, mass screening and treatment, and a future vaccine in six representative settings of varying transmission intensity, with the aim of reaching a realistic target of 80% coverage. The model predicted some success in low- and moderate-transmission settings, but in high-transmission areas and those in which mosquitoes mainly rest outdoors, additional new tools that target outdoor-biting mosquitoes and substantial social improvements will be required, as higher levels of intervention coverage are unrealistic.
The authors say, "Our model is necessarily a simplification of the more complex dynamics underlying malaria transmission and control, so numerical results should be interpreted more as providing intuitive insight into potential scenarios than as firm predictions of what might happen in a given setting." This work was funded by the Bill & Melinda Gates Vaccine Modeling Initiative, the UK Medical Research Council, Microsoft Research, and the TransMalariaBloc European Commission FP7 Collaborative project (HEALTH-F3-2008-223736). TDH is funded by an Imperial College Junior Research Fellowship. - Griffin JT, Hollingsworth TD, Okell LC, Churcher TS, White M, et al. Reducing Plasmodium falciparum Malaria Transmission in Africa: A Model-Based Evaluation of Intervention Strategies. PLoS Medicine, 2010; 7 (8): e1000324. DOI: 10.1371/journal.pmed.1000324
Killing cancer cells: How anti-tumour agent works

Shedding light on the disease-fighting properties of bleomycin, used in medications to treat a variety of cancers, a study by an Indian-origin scientist has revealed that the anti-tumour agent can cut through double-stranded DNA in cancerous cells like a pair of scissors. Bleomycin is part of a family of structurally related antibiotics produced by the bacterium Streptomyces verticillus. However, bleomycin may also cause severe or life-threatening lung problems, especially in older patients and in those receiving higher doses of the medication. The new research is expected to help inform efforts to fine-tune the drug, improving its cancer-killing properties while limiting toxicity to healthy cells. Currently, three potent versions of the drug, labeled A2, A5, and B2, are in clinical use against cancer. Bleomycin's cancer-fighting capacity was first observed in 1966 by the Japanese researcher Hamao Umezawa. Basab Roy, a researcher at Arizona State University in the US, is particularly interested in the subtle biochemistry of bleomycin, including the specificity of its binding regions along the DNA strand and the drug's detailed mechanisms of DNA cleavage. Cleavage of DNA is believed to be the primary mechanism by which bleomycin kills cancer cells, particularly through double-strand cleavages, which are more challenging for the cellular machinery to repair. "There are several mechanisms for repairing both single-strand and double-strand breaks in DNA, but double-strand breaks are a more potent form of DNA lesion," said Roy. For the new study, bleomycin A5 was used; it has DNA binding and cleaving properties similar to those of bleomycin A2 and B2. From a pool of random DNA sequences, a library of 10 hairpin DNAs was selected, based on their strong binding affinity for bleomycin A5.
Hairpin DNAs are looped structures, which form when a segment of a DNA strand base-pairs with another portion of the same strand. These hairpin DNAs were used to investigate double-strand cleavage by bleomycin. Each of the 10 DNA samples underwent double-strand cleavage at more than one site. Further, all of the observed cleavage sites were found within, or in close proximity to, an 8-base-pair variable region, which had been randomised to create the original library. Examination of the 10 DNA samples exposed to bleomycin revealed a total of 31 double-strand cleavage sites. This study proposed, for the first time, a plausible mechanism for DNA cleavage by bleomycin that may lead to tumor cell killing, as well as identifying the most common sequences involved in DNA site binding and subsequent strand breakage. The results appeared in the Journal of the American Chemical Society. (Posted on 01-05-2014)
Published on 25 April 2012
Background and definition:
- Urinary frequency and nocturia are common lower urinary tract symptoms (LUTS).
- Nocturia is the number of voids recorded during a night's sleep, where each void is preceded and followed by sleep.
- Nocturnal polyuria means overnight urine production exceeding 20% to 33% of the total 24-hour volume; the threshold is age dependent.
- Polyuria in a 70-kg adult is diagnosed by a 24-hour voided volume in excess of 2.8 L (>40 mL/kg).
- First morning void after a night's sleep: its volume is counted in the nocturnal voided volume, but the episode is counted in daytime frequency, not nocturia.
Prevalence - Nocturia
- Overall prevalence of nocturia among 5,500 people in the Boston Area Community Health (BACH) study was 28.4%, affecting 25.2% of men and 31.3% of women.
- Prevalence of nocturia was 48.6% of men and 54.5% of women in the EPIC study in Europe and Canada, which evaluated 19,000 adults with or without overactive bladder (OAB).
- Aging has an impact on nocturia.
- In BACH, nocturia prevalence is 19.9% among 30- to 39-year-olds and 41.2% among 60- to 79-year-olds.
- Above the age of 60, prevalence is similar in men and women.
- The prevalence of twice-nightly or greater nocturia among men between 70 and 79 is nearly 50 percent.
Clinical Evaluation - Nocturia
- Evaluation includes a patient history to assess sleep, fluid intake, and medications used.
- Physical examination.
- Lower urinary tract conditions that impair bladder capacity, including overactive bladder (OAB).
- Evaluate other health conditions that may increase nocturnal urine output, including cardiovascular, endocrine, neurological, and renal impairment.
- Symptom assessment tools are used to capture subjective elements of nocturia.
- A frequency volume chart is a patient's recorded account of times and volumes of urine voided over several days and nights.
- The purpose is to obtain an objective record of one aspect of lower urinary tract symptoms to augment the history and other patient-recorded information, such as the International Prostate Symptom Score (IPSS).
- Men whose nocturnal voiding frequency recorded on a frequency-volume chart was once lower than their IPSS-reported nocturnal voiding frequency are likely to show the same discrepancy again.
- The IPSS tool has one question reporting on nocturia.
- The patient indicates how many times over the past month he typically had to get up to urinate, from the time he went to bed at night until the time he got up in the morning. This can range from none (score 0) to five or more times (score 5) per night.
- Only a weak association has been found between the IPSS items for nocturia and urgency and their counterparts on a frequency-volume (FV) chart.
- Therefore, frequency-volume charts as well as the IPSS should be used when evaluating nocturia.
- Fluid intake is more difficult to measure accurately. The patient and the continence adviser can clarify details of the type and quantity of fluid intake, as well as other factors, as they study the frequency volume chart together.
- A urinary frequency-volume chart (FVC) is used to help distinguish and diagnose:
- Frequency: high frequency with a normal 24-hour volume suggests that bladder capacity is diminished (the male bladder normally holds 300-600 mL of urine comfortably).
- Polyuria: passing more urine than usual (up to 3 L of urine in 24 hours is normal).
- Nocturia: waking at night to urinate.
- Nocturnal polyuria: passing more than 35% of the 24-hour urine production at night.
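The chart-derived definitions above lend themselves to a simple per-day summary (an illustrative sketch; the function name and return keys are invented, and the 33% nocturnal polyuria cut-off is one of the age-dependent thresholds mentioned earlier):

```python
def fvc_summary(day_voids_ml, night_voids_ml, first_morning_void_ml, weight_kg,
                np_threshold=0.33):
    """Summarise one 24-hour period of a frequency-volume chart.

    Following the convention above, the first morning void's volume is
    counted in the nocturnal voided volume, but its episode counts toward
    daytime frequency, not nocturia.
    """
    nocturnal_volume = sum(night_voids_ml) + first_morning_void_ml
    total_24h = sum(day_voids_ml) + nocturnal_volume
    return {
        "nocturia_episodes": len(night_voids_ml),
        "nocturnal_polyuria": nocturnal_volume / total_24h > np_threshold,
        "npi": round(nocturnal_volume / total_24h, 2),  # nocturnal polyuria index
        "polyuria_24h": total_24h > 40 * weight_kg,     # >40 mL/kg per 24 h
    }
```

For example, a 70-kg patient voiding 300, 250, and 300 mL by day, 200 and 250 mL overnight, and 300 mL on first rising would record two nocturia episodes and a nocturnal fraction of about 47%, meeting the nocturnal polyuria definition without meeting the 24-hour polyuria threshold of 2.8 L.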
- The frequency volume chart alone can give immediate insight into several problems underlying lower urinary tract symptoms, but it is always used in combination with measures of urinary flow rate and ultrasound-estimated post-void residual bladder volume, sometimes progressing to functional urodynamic measures if necessary.
- Frequency volume charts have been developed in different ways over the past 30 years, so there is no standard validated format.
- The design should accommodate ease of use for the patient and adviser.
- Although high fluid intake may be the result of choice or habit, it may indicate significant underlying causes, such as diabetes mellitus or insipidus.
- Reduced, fixed volumes during the day and night may indicate a small fixed bladder capacity due to a serious underlying condition, such as interstitial cystitis or bladder carcinoma in situ.
- Reduced, variable volumes during the day and night often indicate detrusor instability, while normal early-morning volumes with reduced and variable daytime volumes may indicate psychosomatic causes or be associated with genuine stress incontinence, when the patient voids more frequently to avoid stress leaks at larger bladder volumes.
- Recording over one week takes into account the effect of variations in people's daily lives and activities through the week and weekend.
- Improved sleep environment, self-help, depression management, cognitive behavior therapy, relaxation techniques, and gentle exercise are used to manage co-morbid insomnia.
- Major neurologic diseases that can contribute to nocturia are prioritized in management.
- The volume, nature, and timing of fluid and solute intake can be adjusted.
- For nocturia-predominant LUTS in men, comprehensive evaluation is necessary before considering surgery, to reduce the risk of a poor symptomatic outcome.
- The benefit of sedative agents may lie in improving return to sleep, rather than in reducing nocturia frequency.
- Treatment with melatonin can counteract sleep impairment and reduce nocturia in some patients.
- Desmopressin can achieve significant improvement in nocturia symptoms, both for the isolated symptom and when nocturia is a component of overactive bladder (OAB).
- Dilutional hyponatremia occurs in 7.6% of patients taking desmopressin.
Activated charcoal is produced by pyrolysis of organic material, such as wood, followed by an activation process which cleanses and fragments the charcoal by exposure to an oxidizing gas composed of steam, oxygen, and acids at high temperatures, resulting in increased surface area through the creation of numerous external and internal pores. These pores serve as reservoirs to adsorb substances admixed with activated charcoal, making it a useful adsorbent for specified toxins. Activated charcoal is pharmacologically inert and is not absorbed in the gastrointestinal tract. Activated charcoal will adsorb a variety of organic and inorganic substances, but is especially effective in adsorbing compounds within a molecular weight range of 100 to 1,000 Daltons (AMU).1 Several other physiologic and physicochemical factors influence the adsorptive capacity of activated charcoal, including pH, charcoal:drug ratio, gastric contents, and adsorption kinetics.2 Much of the published scientific literature on the adsorptive capacity of activated charcoal was based on in vitro models. A considerable amount of this research may be invalid because the effects of physiologic pH were not taken into consideration or held constant. In research models simulating the gastric environment, some toxins were not adsorbed by activated charcoal. These data were inappropriately extended to imply that activated charcoal did not adsorb a given toxin and therefore had no efficacy in the management of that type of poisoning incident. However, the research failed to consider that the increased pH of the small intestine provides a receptive environment for adsorption of the toxin by activated charcoal. Activated charcoal will effectively adsorb acidic, alkaline, and neutral substances (this is not to suggest the use of activated charcoal in poisonings caused by corrosive agents2). The extent of adsorption will depend on the relative solubility of the drug at a specified pH.
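As a rough illustration of the molecular-weight point above, the sketch below flags whether a compound falls inside the 100–1,000 Da range where charcoal adsorbs most effectively. This is a toy heuristic, not clinical guidance: real adsorptive capacity also depends on pH, the charcoal:drug ratio, and gastric contents, and the compound weights used are approximate textbook values.

```python
# Toy screening heuristic based on the 100-1,000 Da range cited above.
# Not clinical guidance; weights are approximate textbook values.

EFFECTIVE_MW_RANGE = (100, 1000)  # Daltons

def likely_adsorbed(mol_weight_da):
    """Crude yes/no: is the molecular weight in charcoal's most effective range?"""
    lo, hi = EFFECTIVE_MW_RANGE
    return lo <= mol_weight_da <= hi

# Approximate molecular weights in Daltons
compounds = {"aspirin": 180, "acetaminophen": 151, "lithium": 7, "digoxin": 781}
flags = {name: likely_adsorbed(mw) for name, mw in compounds.items()}
print(flags)
```

By this crude screen, lithium (atomic weight about 7 Da) falls far below the effective range, consistent with charcoal's poor performance for small ions, while aspirin and digoxin fall inside it.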
The optimal dosage ratio of activated charcoal to toxin is described as 10:1.3,4,17,18 Numerous factors contribute to and interfere with adsorptive capacity; therefore, the 10:1 ratio may not be valid in the clinical setting. Furthermore, a primary application of activated charcoal is in adult patients who have intentionally ingested a toxin for drug abuse or suicidal purposes. These patients may not freely provide information about the substance or amount ingested, or may have a decreased level of consciousness. Under these conditions it is difficult to determine the ingestion history, making it impractical to use the 10:1 ratio. The 10:1 ratio is also impractical when large amounts of toxin have been ingested (e.g., an overdose of 50 gm of aspirin would then require 500 gm [1.1 lbs.] of activated charcoal). Gastric contents may also compete with ingested toxins and compromise the adsorption of the toxins by activated charcoal.2 If activated charcoal is to be administered to a patient known to have ingested a large meal in close proximity to the time of treatment, a larger dose of activated charcoal may be appropriate. Under appropriate physiologic conditions activated charcoal adsorbs toxins instantaneously. This adsorptive process is reversible, and an equilibrium between free and bound toxin will exist. According to the law of mass action, the amount of free drug decreases as the dose of activated charcoal increases. Therefore, large doses of activated charcoal can shift the equilibrium toward greater toxin adsorption and efficacy. There is limited evidence that desorption of a toxin from activated charcoal may occur.19 Therefore, there is a potential for toxin reabsorption and enhanced toxicity.
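The arithmetic behind the impracticality argument is simple. A minimal sketch (function name hypothetical; this is an illustration of the ratio, not dosing advice):

```python
# Back-of-envelope sketch of the 10:1 charcoal:toxin ratio discussed above,
# showing why it becomes impractical for large ingestions. Not dosing advice.

def charcoal_dose_g(toxin_g, ratio=10):
    """Charcoal mass (g) implied by a given charcoal:toxin ratio."""
    return toxin_g * ratio

aspirin_overdose_g = 50
dose = charcoal_dose_g(aspirin_overdose_g)
print(f"{aspirin_overdose_g} g toxin -> {dose} g charcoal ({dose / 453.6:.2f} lb)")
```

At a 10:1 ratio, the 50 g aspirin example from the text implies roughly 500 g (about 1.1 lb) of charcoal, far beyond a practical single dose.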
The current standard of care is to administer a cathartic with single doses of activated charcoal to hasten the elimination of the toxin/activated charcoal complex from the gastrointestinal tract.5 Cathartics should be used with extreme care during multiple-dose activated charcoal therapy, and it is not recommended to use a cathartic with each dose of activated charcoal.5 Sorbitol is a hexahydric sugar alcohol which primarily serves as an osmotic cathartic in Actidose® with Sorbitol.6 A secondary advantage of using sorbitol is as a palatability enhancer, decreasing the innate gritty texture of activated charcoal and providing a sweet vehicle to increase patient compliance. Sorbitol is poorly absorbed during its transit through the gastrointestinal tract. Absorbed sorbitol is metabolized by the liver and slowly converted to fructose. Insulin is not necessary for intracellular transport of sorbitol; therefore, customary cathartic doses can be safely used by patients with diabetes mellitus. As a hyperosmotic cathartic, sorbitol produces a hygroscopic action resulting in increased water in the large intestine and increased intraluminal pressure, which stimulates catharsis. Studies have been conducted in healthy adult human volunteers using therapeutic amounts of activated charcoal and sorbitol.8,9 Catharsis of activated charcoal occurred in an average of 1.0 - 1.5 hours and persisted for 8 - 12 hours. One series reports fourteen poisoned patients, representing a wide range of toxins and sorbitol dosages, with onset of catharsis in an average of 7.7 hours.10 The onset of action may be expected to be longer in patients who have ingested toxins which decrease bowel motility, such as pharmacological agents and plants with anticholinergic properties and drugs like narcotics.20 Sorbitol does not compromise the adsorptive capacity of activated charcoal.11,12
From: The eUpdate, 12.3.2013

Do Imported Spices Pose a Health Risk?

FDA’s recent report aids in the development of plans to reduce or prevent illness from spices contaminated by microbial pathogens and other impurities

In response to concerns over the effectiveness of current control measures to reduce or prevent illness from consumption of spices in the U.S., the FDA released its report "Draft Risk Profile: Pathogens and Filth in Spices" on October 30th. What followed was a string of media coverage alerting the public that their spices can contain anything from whole insects to rodent feces. In fact, the report noted that about 12 percent of spices brought to the U.S. are contaminated with insect parts, whole insects, rodent hairs, and other things. Following the release of the FDA report, the American Spice Trade Association (ASTA) quickly assured American consumers that the spices sold under “reputable and trusted brands at their local grocery store are clean and safe to enjoy.” According to a statement from ASTA, “For the draft risk profile, the FDA used sampling and testing at ports of entry into the U.S. and reported on its findings of pathogens such as Salmonella, and filth, such as insects and animal hair, in spices. Much of the spice presented at import is essentially a raw agricultural commodity that will undergo extensive cleaning, processing and treatment for pathogens once it enters the U.S. to ensure it is clean and free of microbial contamination.” The risk profile identifies the most commonly occurring microbial hazards and filth in spices and quantifies, where possible, the prevalence and levels of these adulterants at different points along the supply chain. It also identifies potential sources of contamination throughout the farm-to-table food chain and evaluates the efficacy of current mitigation and control options designed to reduce the public health risk.
“Nearly all of the insects found in spice samples were stored product pests, indicating inadequate packing or storage conditions,” according to the draft report. The FDA identified three illness outbreaks attributed to spices in the U.S. in the 37-year period from 1973 to 2010. However, this relatively small number may be due to the public's tendency to eat small amounts of spices with meals, lowering the probability of illness from contaminated spices. It’s also possible that illnesses caused by contaminated spices are underreported because of the challenges of attributing illness to minor ingredients in multi-ingredient foods. The risk profile is being used to provide information for the FDA to use in the development of plans to reduce or prevent illness from contaminated spices. The comment period for the risk profile ends on January 3, 2014. The FDA is also working with Codex Alimentarius, an international commission that sets food standards, guidelines, and codes of practice. FDA scientists who serve as delegates to the Codex Committee on Food Hygiene co-chair (with a representative from India) a working group of the Committee, which has been charged to revise the Code of Hygienic Practices for Spices and Dried Aromatic Herbs. In addition, FDA scientists also will participate in the newly formed Codex Committee on Spices and Culinary Herbs. The committee will be hosted and chaired by India—spice imports from India and Mexico have been found to have the highest rate of contamination.
Cancer is a scary word, but as you have learned by now, words give you the information you need to make knowledgeable decisions in consultation with your family physician and oncologist (cancer specialist). Many cancer terms are unique to the field of oncology (study of tumors) and don’t lend themselves easily to the prefix, root, suffix system used in the previous modules. Instead, terms will be grouped and defined in broad categories such as tumor types, causes and treatments. In place of a quiz there will be a simulated case that reinforces frequently used terms.

Cancer buzz words

Malignant vs. benign (literally, “evil” versus “good”)

Tumors are masses of cells that have slipped the bonds of control of cell multiplication. Malignant tumors, cancers, are life-threatening because they are invasive (spread into surrounding organs) and metastasize (travel to other areas of the body to form new tumors). Specifically, invasiveness results in penetration, compression and destruction of surrounding tissue, causing such problems as loss of organ function (liver, kidneys), difficulty breathing (lungs), obstruction (intestines), possible catastrophic bleeding and severe pain.

Carcinoma is the most common form of cancer. Carcinoma develops from sheets of cells that cover a surface (example: skin) or line a body cavity (example: glandular lining of stomach). Some names for tumors of this type would be: adenocarcinoma of the prostate, adenocarcinoma of the lung, gastric adenocarcinoma, hepatocellular carcinoma (what organ is involved?). Note that the term carcinoma typically appears in the name.

Sarcoma, a rare form of cancer, arises from connective and supportive tissues, examples: bone, fat, muscle, and other connective tissues. Some names of this type of tumor would be: osteosarcoma (malignancy of bone), liposarcoma (fat) and gastrointestinal stromal tumor. Note that the term sarcoma does not always appear in the name.
Grading and staging

Tumor biopsies (tissue samples) are examined microscopically to determine the type and degree of development. A grading scale is used, usually Grade I to Grade IV, to describe tissue differentiation. Tumors that are well differentiated (the tissue still looks like the original source tissue) generally have a good clinical outcome. Tumors that are poorly differentiated (the tissue has taken on a more primitive structure and may not resemble its original tissue) generally have a poorer outcome. The clinical stage of a tumor is determined by physical exam (Can you feel the tumor? Can you palpate (feel) lymph nodes? Is the tumor fixed in place (adherent to other structures)?). Imaging (CT, MRI) is also an essential tool. The stage of the tumor determines if the tumor has invaded surrounding tissue, involved lymphatics (drainage channels for cell fluids other than blood) and whether the cancer has metastasized to other sites in the body. A staging system using the letters T, N, M is used in conjunction with grading. “T” indicates size of tumor; “N” whether the cancer has spread into lymph nodes; “M” whether cancer cells have metastasized to other organs and areas. For example, a melanoma T2N0M0 describes a skin cancer that is between 1.0 and 2.0 mm in thickness, but has not spread into lymph nodes or other areas of the body. Grading and staging tumors are important ways to predict the “prognosis” (progress and outcome of the disease), and which types of treatments may most likely succeed. In general, low grade tumors that have not invaded tissues, have not involved lymph nodes (negative nodes) and have not metastasized would be expected to have a better prognosis than a high grade tumor that has invaded tissues, has invaded lymphatics (positive nodes) and has metastasized. However, the prognosis of any individual patient is much more complicated than described here.
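The T, N, M notation just described is mechanical enough to parse. The sketch below is purely illustrative (the function name and output fields are hypothetical); it handles only simple numeric codes like the T2N0M0 example above, not the letter suffixes and prefixes (e.g., Tis, N1a, cT2) used in real staging.

```python
# Hypothetical sketch of splitting a simple TNM code such as "T2N0M0" into
# its components, following the description in the text. Illustrative only.
import re

def parse_tnm(code):
    """Split a simple numeric TNM string into tumor (T), nodes (N), metastasis (M)."""
    m = re.fullmatch(r"T(\d+)N(\d+)M(\d+)", code)
    if m is None:
        raise ValueError(f"not a simple TNM code: {code!r}")
    t, n, meta = (int(g) for g in m.groups())
    return {
        "tumor_category": t,     # size/extent of the primary tumor
        "node_positive": n > 0,  # any lymph node involvement
        "metastasis": meta > 0,  # any distant spread
    }

stage = parse_tnm("T2N0M0")
print(stage)
```

For the melanoma example in the text, T2N0M0 parses to a T-category of 2 with negative nodes and no metastasis.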
Complicating factors include the general health of the patient, the effectiveness of their immune system and available treatment options. Some tumor types are very “aggressive” and are highly resistant to treatment.

Causes of cancer

Any injury to DNA (the genetic code) may result in the loss of cell cycle control, leading to uninhibited cell division. Carcinogens are cancer-causing agents. Broad categories include radiation, chemicals, drugs and viruses. Don’t panic! Your once-a-year dental X-ray and common cold and flu viruses will not cause cancer. However, excessive radiation, from nuclear sources to sunlight, can significantly increase your risk of malignancy. The Human Papilloma Virus (HPV) is the major cause of cervical cancer. Environmental chemicals found in tobacco smoke, automotive exhaust, toxic emissions from factory smokestacks and asbestos exposure are all carcinogenic. Curious about your risk for common cancers? Check in at Your Disease Risk at Washington University School of Medicine.
Research roundup: PTSD beyond the DSM-IV criteria
By Practice Research and Policy staff
Sept. 13, 2012

Posttraumatic stress disorder (PTSD) as defined by the DSM-IV is an anxiety disorder that can occur when an individual has witnessed or experienced a traumatic event. In order to meet the criteria for a diagnosis of PTSD, one must experience a traumatic stressor, accompanied by re-experiencing, numbing/avoidance and hyper-arousal symptoms. Factors including anger, aggression and a desire for revenge are not captured in current diagnostic criteria for PTSD, yet researchers and clinicians alike have noted their prevalence. The research covered in this article addresses these emotions and may help inform treatment beyond use of the DSM-IV.

Ciesielski, B. G., Olatunji, B. O., & Tolin, D. F. (2010). Fear and loathing: A meta-analytic review of the specificity of anger in PTSD. Behavior Therapy, 41(1), 93-105. doi:10.1016/j.beth.2009.01.004

The authors conducted a meta-analysis of the current literature on anger, PTSD, and anxiety disorders to examine the degree to which anger is more likely to be specifically associated with PTSD than with other anxiety disorders. The analysis found that patients with anxiety disorders had significantly higher levels of anger than controls, with the exception of individuals with social and specific phobia. Since contemporary models often view anger as a multidimensional construct, the analysis also compared patients with anxiety disorders to control samples for differences in specific anger domains such as the tendency to suppress feelings of anger (anger in), the tendency to outwardly express anger toward individuals or objects with verbal or physical aggression (anger out), or the inability to overcome angry feelings (anger control).
Also considered were the differences between experiencing present angry feelings linked to a specific situation (state anger) and angry feelings that last over time and are linked to various situations (trait anger). PTSD and non-PTSD anxiety disorder patients had more difficulties with anger control, anger in, and anger out than controls, but did not significantly differ on state and trait anger. Certain anger domains appear to be uniquely associated with PTSD. Anger in PTSD may contribute to greater interpersonal difficulties, greater risk of substance abuse and physical health problems, all of which may require specific intervention in order to enhance treatment outcome. Better clinical understanding of the patient’s unique anger pattern may facilitate more targeted interventions.

Lancaster, S., Melka, S., & Rodriguez, B. (2011). Emotional predictors of PTSD symptoms. Psychological Trauma: Theory, Research, and Policy, 3(4), 313-317. doi:10.1037/a0022751

One of four criteria required by the DSM-IV for a diagnosis of PTSD is the experience of a traumatic stressor that is accompanied at the time of the event by the emotions of fear, helplessness, or horror. The workgroup report that led to the current DSM-IV PTSD diagnostic criteria noted that other emotional reactions such as guilt, dysphoria and sadness were also more commonly present in those developing PTSD after a traumatic stressor than in those who did not develop PTSD. However, the DSM-IV included only fear, helplessness and horror in the stressor criteria. Subsequent research has examined whether these three emotions are uniquely predictive of PTSD development. The goal of this study was to examine a broader range of emotional predictors of PTSD. An initial pool of 771 undergraduate students in an introduction to psychology course completed a demographics form and the Brief Trauma Questionnaire (BTQ).
The results from the administration of this 10-item self-report measure yielded a final sample of 341 participants who had experienced a potentially traumatic event. These participants then completed the PTSD Checklist-Specific (PCL-S), a Likert-style scale of 17 questions designed to measure symptoms of PTSD. The authors found that only anger, guilt, sadness, and disgust were predictive of PTSD. Across all groups who participated in the study, there was a significant relationship between the level of anger experienced and the severity of PTSD symptoms. In both men and women, anger was a strong predictor of PTSD. Experience of other emotions varied by gender: men demonstrated guilt as a unique predictor, while women expressed emotions of disgust and sadness at the time of trauma. Emotions also varied by race: for European-American students, guilt, helplessness, and disgust, in addition to anger, predicted PTSD, while African-American students' level of PTSD was predicted only by anger. Both the DSM and the International Classification of Diseases (ICD) are currently undergoing revision and additional emotional reactions may be included in the revised criteria. Additional research is likely needed but, in the meantime, clinicians will want to explore a wide range of emotions beyond fear, helplessness and horror in order to understand clients’ response to the trauma they have experienced.

McHugh, T., Forbes, D., Bates, G., Hopwood, M., & Creamer, M. (2012). Anger in PTSD: Is there a need for a concept of PTSD-related posttraumatic anger? Clinical Psychology Review, 32(1), 93-104. doi:10.1016/j.cpr.2011.07.013

Two decades of research have consistently demonstrated that anger is a critical predictor of PTSD severity and treatment efficacy.
In addition, the co-occurrence of anger with PTSD is present across a wide range of populations, including the military, emergency and disaster relief workers, crime victims, and accident survivors. PTSD, particularly when chronic, can be difficult to treat and anger has been shown to reduce the effectiveness of treatment. In this analysis, the authors reviewed studies on visual imagery in the areas of neuropsychology, psychopathology, anger and PTSD and propose that visual imagery is a significant factor in causing and sustaining anger in PTSD. One aim of this review was to investigate whether visual imagery may be the distinct aspect that sets anger in PTSD apart from other types of anger. Research in neuropsychology provides evidence that there is a high level of overlap in areas of the brain that are stimulated during the experience and production of both visual imagery and the emotion of anger. Evidence from research in psychopathology demonstrates that repetitive intrusive visual imagery produces negative emotions in various psychological disorders. The inability to manage negative visual imagery leads to greater mental distress. Studies on anger and PTSD evidence the strength of the relationship to visual imagery, with visual intrusions being described as a core symptom and risk factor of the disorder. The symptoms of physiological arousal that accompany anger cause intrusions and the intrusions yield a higher level of physiological arousal. Visual imagery and anger are intimately and complexly connected. The authors suggest numerous directions for future research about the role visual imagery plays in anger in PTSD. For one, they suggest conducting studies into visual imagery capacity, which may be influenced by factors such as age, gender, culture, and developmental experiences. Another potential avenue for research would be to identify trauma-related factors that may moderate the effect of visual imagery on anger in PTSD. 
For example, non-trauma related moderators such as the individual’s personality or temperament might affect the role visual imagery plays in their PTSD. Currently, there is no satisfactory model that can define the complex relationship between anger and PTSD. The authors believe inquiry into the role of visual imagery in anger in PTSD could significantly improve our understanding of this anxiety disorder and contribute to more effective treatments that focus on the role visual imagery plays in responses to trauma and assist patients with learning new ways to manage intrusive thoughts.

Makin-Byrd, K.N., Bonn-Miller, M., Drescher, K., & Timko, C. (2012). Posttraumatic stress disorder symptom severity predicts aggression after treatment. Journal of Anxiety Disorders, 26(2), 337-342. doi:10.1016/j.janxdis.2011.11.012

While consistent evidence from numerous studies links posttraumatic stress disorder (PTSD) symptoms with aggression, little information exists regarding the relationship between PTSD severity and aggression after the completion of PTSD treatment. This study examined the association between PTSD and aggression by using a longitudinal data set derived from patients whose PTSD severity and aggression were reported before, immediately after and four months after intensive PTSD treatment. A sample of 175 male patients admitted between 2000 and 2007 to a Veterans Affairs (VA) residential PTSD treatment program consented to participate in this study. A self-report measure using the 17 PTSD symptoms included in the DSM-IV was administered to the veterans. The questions represented three subscales that corresponded to symptom clusters of re-experiencing, avoidance/numbing and hyperarousal. The patients also completed an assessment to measure aggression pre-treatment, post-treatment and at a four-month follow-up. There was a significant correlation between aggression and PTSD severity both pre-treatment and post-treatment.
Of particular interest was the strong correlation between the hyperarousal symptom cluster and aggression before and after treatment. Monitoring the symptoms of PTSD and aggression during the course of treatment is crucial. Furthermore, aggression may need to be a focus during PTSD treatment. While more evidence with larger, younger patient samples and different traumas is necessary, this study suggests that assessing aggression in the context of PTSD may lead to important information regarding post-treatment functioning. Additional studies to examine the impact of treating aggression on outcome of PTSD care will be important to better understand effective treatment and longer term patient functioning.

Kunst, M. J. J. (2011). PTSD symptom clusters, feelings of revenge, and perceptions of perpetrator punishment severity in victims of interpersonal violence. International Journal of Law and Psychiatry, 34(1), 362-367. doi:10.1016/j.ijlp.2011.08.003

Many studies have focused on PTSD among victims of violent trauma. A variety of normal emotions are associated with symptoms of PTSD. These include self-oriented emotions such as shame, guilt, self-blame and other-oriented emotions such as the desire for revenge. Feelings of revenge often follow trauma that involves intentional and personal infliction of harm to an individual. Two hundred and thirty-five victims of violent crime were given a three-item revenge scale to measure their feelings of revenge over the previous four weeks. Using the PTSD Symptom Scale, Self-Report (PSS-SR), PTSD symptoms resulting from violent acts were assessed. An additional scale was employed to evaluate the victims’ views on perpetrator punishment severity. Finally, regression analysis was used to determine if the three PTSD symptom clusters – re-experiencing/intrusion, avoidance and/or hyperarousal – might predict feelings of revenge.
Of the 235 participants who took the PSS-SR, 139 experienced re-experiencing/intrusion, 107 experienced feelings of avoidance, and 160 experienced feelings of hyperarousal. The re-experiencing/intrusion symptom cluster was the only predictor of a desire for revenge. A possible explanation for the correlation between symptoms of re-experiencing/intrusion and revenge is their shared ruminative character. Repeated, negative thoughts are predictive of both PTSD and revenge. The study failed to find a significant correlation between perceived punishment of the perpetrator and the victim’s feelings of revenge. Given the finding of this study that the re-experiencing/intrusion symptom cluster positively correlates with feelings of revenge, when treating patients with a strong desire for revenge, it may be important to focus on strategies that minimize or control intrusive thoughts or assist the patient in managing re-experiencing symptoms. The authors also suggest investigating whether sensory stimuli that remind patients of their emotional and behavioral responses during the traumatic event may trigger or contribute to the intrusions. Posttraumatic stress disorder is a prevalent and debilitating disorder. Further research into the relationship between PTSD and emotions such as anger, aggression and an inclination for revenge is needed to better understand patients’ reactions to the trauma they have experienced and enhance treatment.
Every year in the United States, 160,000 cases of colorectal cancer are diagnosed, and 57,000 patients die of the disease, making it the second leading cause of death from cancer among adults, after lung cancer. As researchers and clinicians fervently look for causes and cures for colorectal cancer -- simultaneously generating thousands of studies producing more and more promising results -- Dr. Sanford Markowitz, professor and researcher of cancer and genetics at Case Western Reserve University School of Medicine and oncologist at the Case Comprehensive Cancer Center at University Hospitals Case Medical Center, published his forward-looking view of the "Molecular Basis of Colorectal Cancer" in the Dec. 17, 2009 issue of the New England Journal of Medicine, with co-author Dr. Monica Bertagnolli, from the Brigham and Women's Hospital, Harvard Medical School. "Today's challenges are to understand the molecular basis of individual susceptibility to colorectal cancer and to determine factors that initiate the development of the tumor, drive its progression, and determine its responsiveness or resistance to antitumor agents," wrote Dr. Markowitz. Key advances that the article singled out toward meeting these goals are:
- Discoveries in DNA sequencing technology have made it possible to sequence the entire genome of a human cancer. Colorectal cancer provided the first example of the power of this technology. Sequencing of 18,000 (nearly all) of the known human genes in 35 colon cancers identified 140 as candidate cancer genes that were mutated in at least two colon cancers and that probably contributed to the cancer phenotype.
- Biological pathways that are deregulated in colon cancer have been identified, and could now form the basis of new therapeutic agents. Although some high-frequency mutations are attractive targets for drug development, common signaling pathways downstream from these mutations may also be tractable as therapeutic targets.
- Studies that aid in the understanding of colorectal cancer on a molecular level have provided important tools for genetic testing for high-risk familial forms of the disease, predictive markers for selecting patients for certain classes of drug therapies, and molecular diagnostics for the noninvasive detection of early cancers.
- Recent progress in molecular assays for the early detection of colorectal cancer indicates that understanding the genes and pathways that control the earliest steps of the disease, and individual susceptibility, can contribute to clinical management in the near term. For example, patients whose colon cancers have mutations in either RAS or BRAF genes are known not to benefit from treatment with the anti-colon cancer agent cetuximab.
- Moreover, patients with inherited mutations in tumor-suppressor genes, such as APC, MLH1, and MSH2, have a very high risk of colorectal cancer and require early and frequent surveillance for colon cancer and often prophylactic surgery.
- Last, the development of molecular diagnostics for the early detection of colorectal cancer is emerging as an important translation of colon-cancer genetics into clinical practice. One example is the development of stool DNA tests to detect cancer-associated aberrant DNA methylation as a method for early detection of patients with colorectal cancer or advanced adenomas. Stool DNA testing for colorectal cancer has been added to the cancer-screening guidelines of the American Cancer Society.

Drs. Markowitz and Bertagnolli's concluding observations are optimistic ones that the considerable recent and ongoing advances in our knowledge of the molecular basis of colorectal cancer will continue to result in markedly reducing the burden of this disease. Dr.
Markowitz reports being listed on patents licensed to Exact Sciences and LabCorp and is entitled to receive royalties on sales of products related to methylated vimentin DNA, in accordance with the policies of Case Western Reserve University. No other potential conflict of interest relevant to this article was reported. - Sanford D. Markowitz. Molecular Basis of Colorectal Cancer. New England Journal of Medicine, 2009; 361: 2449-2460 [link] Cite This Page:
Complete LCAT deficiency is a disorder that primarily affects the eyes and kidneys. In complete LCAT deficiency, the clear front surface of the eyes (the corneas) gradually becomes cloudy. The cloudiness, which generally first appears in early childhood, consists of small grayish dots of cholesterol (opacities) distributed across the corneas. Cholesterol is a waxy, fat-like substance that is produced in the body and obtained from foods that come from animals; it aids in many functions of the body but can become harmful in excessive amounts. As complete LCAT deficiency progresses, the corneal cloudiness worsens and can lead to severely impaired vision. People with complete LCAT deficiency often have kidney disease that begins in adolescence or early adulthood. The kidney problems get worse over time and may eventually lead to kidney failure. Individuals with this disorder also usually have a condition known as hemolytic anemia, in which red blood cells are broken down (undergo hemolysis) prematurely, resulting in a shortage of red blood cells (anemia). Anemia can cause pale skin, weakness, fatigue, and more serious complications. Other features of complete LCAT deficiency that occur in some affected individuals include enlargement of the liver (hepatomegaly), spleen (splenomegaly), or lymph nodes (lymphadenopathy) or an accumulation of fatty deposits on the artery walls (atherosclerosis).
How common is complete LCAT deficiency?
Complete LCAT deficiency is a rare disorder. Approximately 70 cases have been reported in the medical literature.
What genes are related to complete LCAT deficiency?
Complete LCAT deficiency is caused by mutations in the LCAT gene. This gene provides instructions for making an enzyme called lecithin-cholesterol acyltransferase (LCAT). The LCAT enzyme plays a role in removing cholesterol from the blood and tissues by helping it attach to molecules called lipoproteins, which carry it to the liver.
Once in the liver, the cholesterol is redistributed to other tissues or removed from the body. The enzyme has two major functions, called alpha- and beta-LCAT activity. Alpha-LCAT activity helps attach cholesterol to a lipoprotein called high-density lipoprotein (HDL). Beta-LCAT activity helps attach cholesterol to other lipoproteins called very low-density lipoprotein (VLDL) and low-density lipoprotein (LDL). LCAT gene mutations that cause complete LCAT deficiency either prevent the production of LCAT or impair both alpha-LCAT and beta-LCAT activity, reducing the enzyme's ability to attach cholesterol to lipoproteins. Impairment of this mechanism for reducing cholesterol in the body leads to cholesterol deposits in the corneas, kidneys, and other tissues and organs. LCAT gene mutations that affect only alpha-LCAT activity cause a related disorder called fish-eye disease that affects only the corneas. This condition is inherited in an autosomal recessive pattern, which means both copies of the gene in each cell have mutations. The parents of an individual with an autosomal recessive condition each carry one copy of the mutated gene, but they typically do not show signs and symptoms of the condition.
Where can I find information about diagnosis or management of complete LCAT deficiency?
These resources address the diagnosis or management of complete LCAT deficiency and may include treatment providers. The resources on this site should not be used as a substitute for professional medical care or advice. Users seeking information about a personal genetic disease, syndrome, or condition should consult with a qualified healthcare professional. See How can I find a genetics professional in my area? in the Handbook.
Volume 13, Number 1—January 2007 Blood Transfusion and Spread of Variant Creutzfeldt-Jakob Disease Variant Creutzfeldt-Jakob disease (vCJD) may be transmissible by blood. To prevent secondary transmission through blood components, several countries have started to exclude as donors persons who have received a blood transfusion. We investigated the effectiveness of this measure by using a dynamic age-structured model. It is the first such model based on epidemiologic data: 1) blood donor activities, 2) a case-control study on CJD, 3) age distribution of recipients, and 4) death of recipients of blood transfusions. The model predicts that an infection like vCJD, which has been introduced into the population by the alimentary route, could not become endemic by transfusion alone and that <1% of cases would be avoided by excluding from blood donation those persons who have received a transfusion. Recent studies of variant Creutzfeldt-Jakob disease (vCJD) indicate that this disease is transmissible by blood. One case of probable transfusion-transmitted vCJD infection has been reported, and 1 case of subclinical infection has been detected (1,2). On February 9, 2006, a third case was announced by the UK Health Protection Agency (http://www.hpa.org.uk/hpa/news/articles/press_releases/2006/060209_cjd.htm). Each of the 3 patients had received a blood transfusion from a donor who subsequently developed clinical vCJD, which indicates that transfusion caused the infection. However, a policy to exclude potential donors who had received a transfusion would not have prevented at least the first 2 cases because the corresponding donors had not received any blood transfusion. Diagnostic tools to detect prions in blood are under development (3), but no routine test for the presence of the infectious agents of vCJD is available. 
Therefore, the questions arise as to whether an infection like vCJD could become endemic through blood donation alone and to what extent exclusion of potential donors with a history of transfusion would influence the transmission of such an infection (i.e., how many deaths due to the infection could be prevented?). The following mathematical model is the first to address these questions on the basis of epidemiologic data and realistic and epidemiologically justified assumptions. Figure 1A shows the transitions of a person through the basic states of potential donor activities and receipt of blood transfusion. After birth a person is in the state of not having received any transfusion and not yet being an active donor (S00). The first index refers to the person’s state as a transfusion recipient; the second index, to the person’s status as a donor. Persons in state S00 can change to state S01 by becoming a donor or to state S100 or S101 by receiving a blood transfusion. The third index indicates whether a person with a transfusion history can actually be identified and excluded from donating blood (deferred) (index 1) or not (index 0). The states S111 and S110 can be reached by either transfusion recipients who start donating blood or active donors who receive a blood transfusion. Blood donors who become inactive are transferred into the states of ex-donors S02 and S12, depending on their transfusion history. Ex-donors can also become transfusion recipients; i.e., they are transferred from S02 to S12. Donor exclusion transfers a certain proportion of transfusion recipients into the state of ex-donors. For all susceptible states, Figure 1B shows the transitions to the corresponding infected states. Table 1 provides a list of all input parameters together with descriptions and sources. The details of the model with all the numerical parameter estimates and the equations are given in the Technical Appendix. The computer program is available upon request.
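The state bookkeeping described above can be sketched in a few lines of code. This is a simplified reading of the model, written in Python for illustration only: the deferral index and the parallel infected states of Figure 1B are omitted, so recipient states are written with two indices.

```python
# Simplified sketch of the susceptible-state transitions of Figure 1A.
# First index: transfusion history (0 = none, 1 = has received a transfusion).
# Second index: donor status (0 = not yet a donor, 1 = active donor, 2 = ex-donor).
# The deferral index and the corresponding infected states are omitted.
TRANSITIONS = {
    "S00": {"become_donor": "S01", "receive_transfusion": "S10"},
    "S01": {"receive_transfusion": "S11", "stop_donating": "S02"},
    "S10": {"become_donor": "S11"},   # blocked when donor exclusion is in force
    "S11": {"stop_donating": "S12"},
    "S02": {"receive_transfusion": "S12"},
    "S12": {},                        # may receive further transfusions; no new state
}

def next_state(state, event):
    """Return the successor state, or None if the event is not modeled."""
    return TRANSITIONS.get(state, {}).get(event)
```

In the full model each of these transitions carries an age-dependent rate; the table above only records which moves are possible.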
This article summarizes the major features of the model, the data sources, and the estimation of the model parameters. To simplify the model, we did not attempt to describe the demographics of the population during the next 150 years. Doing so would involve predicting changes in rates of birth, death, and immigration. It is assumed that in the absence of infection, the population is demographically stationary. We assumed a constant inflow of newborns and an age-specific death rate. The latter was estimated as a weighted mean of the age-specific female and male death rates. Because this study was initiated in Germany, we used the corresponding demographic data. To start the simulation in a demographically stationary state, the model was run for 100 years without infection. Thus, the age distribution of the population was identical to the life table of Germany 2002/2004 averaged over both sexes (http://www.destatis.de/download/d/bevoe/sterbet04.xls). Modeling Blood Donors Blood donors in Germany are >18 and <68 years of age. The rates for becoming a new donor and terminating the period as an active donor are age dependent. The corresponding parameters were estimated by using data from 262,071 donors registered with the German Red Cross (DRK) Blood Service West in Hagen, Germany, including age, sex, age at first donation, number of donations, and date of last donation. The age-specific prevalence of active donors peaks at ≈24 years of age and subsequently declines monotonically to zero by age 68. The overall prevalence in the population is 3%, i.e., 2.4 million donors in a population of ≈80 million. Modeling Transfusion Recipients The model takes into account that persons may receive >1 transfusion throughout their lifetime, but it does not track the number of transfusions received per person. Persons with >1 transfusion continue to be at risk for infection from further transfusions. 
The age-specific risk of receiving a transfusion was estimated from data for all patients hospitalized at the University Hospital in Essen during March 2003. Of 4,867 patients, 1,343 (27.6%) received >1 transfusion. The number of persons receiving a blood transfusion in each 5-year age group was divided by the corresponding number of persons in the general population. The observed rates were fitted with a simple model that assumes initially an exponential decline and subsequently a unimodal peak, which is proportional to the density function of the normal distribution. These age-specific ratios were properly scaled to balance the yearly number of transfusions per capita. To limit the complexity of the model, we did not take into account persons in subgroups, such as those with hemophilia, who obtain blood products from pools of donors. Because for medical reasons these subgroups are excluded from donating blood, they cannot contribute to persistence of the infection. Independence of Receiving and Donating Blood The events of receiving a blood transfusion and of donating blood are assumed to be independent of each other. This assumption is supported by the results of a case-control study of potential risk factors for CJD, which was coordinated by the Clinical Surveillance Centre for CJD, Department of Neurology in Göttingen, Germany (7). Table 2 shows the joint distribution for the control group of having received and donated blood. According to the Fisher exact test, the p value for the hypothesis of no association is 0.43. Heterogeneity in the risk of receiving a blood transfusion is modeled by the assumption that only a proportion of the population are at risk, whereas the remaining proportion never receives a transfusion. This assumption was introduced to be consistent with data from the case-control study, in which ≈18% of the population reported having ever received a blood transfusion. 
Without this assumption, the model would predict that eventually 100% of a cohort would receive a blood transfusion because the average annual risk of receiving a blood transfusion is about 5%, i.e., ≈4 million in a population of 80 million. Modeling Transfusion-associated Death Rates The transfusion-associated death rate has been described in detail by Wallis et al. (4). A good fit to the data assumes that at all ages a certain proportion of transfusion recipients have a higher rate of dying and the remaining proportion has a survival rate that corresponds to that of persons of the same age group in the general population. This age-dependent proportion of transfusion recipients with an increased risk for death is described by a generalized logistic function with a positive value at birth and an asymptote <100% for old age. The transfusion-associated death rate increases linearly with age. The increased death rate appears to be concentrated in the first 2 years after a transfusion. Wallis et al. report that 2,888 patients were observed as long as 7.4 years after transfusions received in June 1994 (4). The sex-specific rates were averaged for the simulation model. Modeling the Infection Usually the incubation period refers to the time between the infection and disease. In the context of CJD, however, disease can refer to onset, diagnosis, or death. Like Bacchetti, we also focused on death rates (8–10). The incubation period is assumed to be γ distributed with a mean duration of 16 years and a standard deviation of 4 years, which conforms to estimates of Valleron et al. and Ghani et al. (5,6). Because of great uncertainty about the length of the incubation time, we also considered a much higher value of 50 years in the absence of the competing risk for death. The coefficient of variation is assumed to be the same, such that the standard deviation is 12.5 years. 
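For reference, a γ distribution with a given mean and standard deviation has shape k = (mean/SD)² and scale θ = SD²/mean, so the two incubation-period scenarios above are easy to parameterize. The following is a sketch, not the authors' code:

```python
def gamma_params(mean, sd):
    """Shape and scale of a gamma distribution with the given mean and SD."""
    shape = (mean / sd) ** 2
    scale = sd ** 2 / mean
    return shape, scale

# Short incubation period: mean 16 years, SD 4 years.
k_short, theta_short = gamma_params(16.0, 4.0)    # shape 16, scale 1
# Long incubation period: same coefficient of variation (4/16 = 0.25),
# so a mean of 50 years gives an SD of 12.5 years.
k_long, theta_long = gamma_params(50.0, 12.5)     # shape 16, scale 3.125
```

Because the coefficient of variation is held fixed, both scenarios share the same shape parameter and differ only in scale.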
Because of competing risks, the actual sojourn in the incubation period is 15.3 years for an incubation period of 16 years and 34.0 years for an incubation period of 50 years. The proportions of infected persons who would die with disease symptoms are 79% and 37% for the incubation periods of 16 and 50 years, respectively. This means that for an incubation time of 50 years, nearly two thirds would die without disease symptoms. Hereafter we refer to these values of 16 and 50 years as short and long incubation periods. We distinguish between 2 modes of transmission. Initially, the infection is introduced into the population by the alimentary route. In the United Kingdom the number of infected animals entering the food supply peaked in 1989; most were concentrated within a period of 10 years (11), which we take as the assumed period of alimentary infection. After this period, this mode of transmission was interrupted so that further transmissions are possible only through blood transfusions. A study to detect the presence of abnormal prion protein in appendix and tonsil tissues has suggested a prevalence of 235 infections per million in the United Kingdom (12). We arbitrarily assumed the prevalence of infections in Germany to be ≈1 order of magnitude lower, yielding a cumulative incidence of 25 per million, which was the value used for the simulations. We made 2 contrasting assumptions about the infectivity of blood preparations and evaluated the results of these 2 simulations: each transfusion (100% infectivity) or no blood transfusion (0% infectivity) from an infected donor leads to infection of the recipient. In the model the infection probability (probability of receiving blood from an infected donor) is proportional to the proportion of infected donors among all donors. Thus, we can calculate the number of infections from blood transfusions compared with the number of infections from alimentary transmission alone.
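The assumed cumulative incidence translates into absolute numbers as follows (the population size of ≈80 million is taken from the text):

```python
population = 80e6             # approximate population of Germany (from the text)
cumulative_incidence = 25e-6  # assumed alimentary infections per person
total_infections = population * cumulative_incidence  # = 2,000 infections
```

This is the pool of alimentary infections from which the simulations start; the peak prevalence of ≈1,860 reported in the Results is smaller because some infected persons die of other causes during the incubation period.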
Modeling Donor Exclusion The model distinguishes between persons with and without transfusion history, termed recipients and nonrecipients; these terms are applied to persons whether they have or have not donated blood. The model allows recipients to be excluded from donating blood. In modeling the exclusion of recipients, we took into account that this measure may be imperfect and that a certain proportion of recipients may not be excluded. For the parameter estimates obtained from the sources described above, the infection cannot become endemic (Figure 2). If we assume no further spread through blood transfusions after 10 years of infections by the alimentary route, the maximum prevalence reached is ≈1,860 (1,434 for nonrecipients plus 426 for recipients) because some of the infected persons die of other causes during the incubation period. If transmission is assumed to be possible through blood transfusions (100% infectivity), then the maximum prevalence among recipients is increased by ≈78 infections after 4 more years for the short incubation period and by 193 infections after 23 more years for the long incubation period. We assumed that donor exclusion is implemented immediately at the beginning of the alimentary infection risk period, which reduced the original number of 2.55 million donors by ≈20% to a value of 2.05 million donors. Because the model does not account for the stock of blood donations, this reduction in the number of donors must be compensated for with an increased rate of donations per donor to satisfy the demand; i.e., the average number of donations would have to increase from 1.6 to 2 per donor per year. Figure 2A shows that donor exclusion has almost no effect when the incubation period is assumed to be 16 years. The absolute prevalence (i.e., the actual number of infected persons) differs at most by 9. 
For a long incubation, differences are visible (59 persons at most) but small in view of the long time intervals and the size of the total population (Figure 2B). The reason for these small differences is described below. The cumulative numbers of deaths from the infection are given in Table 3. The numbers are considerably smaller for the long than for the short incubation period because a long incubation period implies more deaths from other causes. The numbers are given separately for cases in patients with and without a history of blood transfusion. The route of infection for nonrecipients is alimentary only, whereas the route of infection for recipients is unclear. If we compare the simulations at 100% and 0% infectivity of blood transfusions, we observe 172 and 224 additional cases for the short and the long incubation periods, respectively. These numbers represent 11% of 1,557 and 31% of 725 cases, which would be expected for 0% infectivity for the short and long incubations periods, respectively. For the short incubation period we expect a higher absolute number of alimentary cases but a smaller proportion of transfusion cases than for the long incubation period. The exclusion of donors would prevent only 15 and 50 cases, i.e., ≈15 (0.9%) of 1,729 and 50 (5%) of 949, respectively, at the end of the epidemic. The epidemic lasts for ≈50 or ≈150 years for the short and the long incubation periods, respectively. The predicted yearly incidence of deaths due to vCJD, separated by transfusion history, is shown in Figure 3. The yearly peak incidence of total deaths would be 128 and 29 for the short and the long incubation periods at 23 and 51 years after the beginning of the epidemic, respectively. For 0% infectivity the peak incidence would be only 5 and 3 cases less for the short and long incubation periods, respectively, which implies that the exclusion of donors with a transfusion history does not effectively prevent infection. 
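Two of the quantitative claims above are straightforward bookkeeping and can be checked directly; all figures are taken from the text.

```python
# 1) Compensating for donor exclusion: the yearly number of donations must
#    stay constant when the donor pool shrinks from 2.55 to 2.05 million.
donors_before = 2.55e6
donors_after = 2.05e6
required_rate = donors_before * 1.6 / donors_after  # donations/donor/year, ≈2.0

# 2) Fractions of cumulative deaths prevented by donor exclusion.
frac_short = 15 / 1729   # short (16-year) incubation period, ≈0.9%
frac_long = 50 / 949     # long (50-year) incubation period, ≈5.3%
```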
Figure 4 shows the predicted yearly incidence of deaths according to the route of infection. The time lags between the peaks of deaths due to alimentary infection and due to transfusion clearly differ and are 9 and 20 years for short and long incubation periods, respectively. Finally, we considered the absolute prevalence of infected donors according to their history of blood transfusion (Figure 5). Most infected donors do not have a transfusion history, which explains the negligible effect of a policy excluding transfusion recipients from donation. To determine whether the same model could also predict transition into a positive endemic equilibrium of the infection, we made the unrealistic assumptions that the rates of donor recruitment and donor loss are constant between the ages of 18 and 67 and that the rate of receiving a blood transfusion is constant throughout life. Then the model showed an extremely long time (>2,000 years) before positive equilibrium would be reached (results not shown). Our model is the first attempt to describe in a realistic way the transmission of infections through blood transfusions. In 1994, Velasco-Hernández proposed a model for the spread of Chagas disease by vectors and blood transfusion (13). His model was used by Roberts and Heesterbeek to introduce their new concept to estimate the effort to eradicate an infectious disease (14). Huang and Villasana included transmission through blood transfusion in an AIDS model (15). All these models have in common what Inaba and Sekine state about their extension of Velasco-Hernández’s Chagas model: “…here we assume that blood donors are randomly chosen from the total population, and so there is no screening and the recipients of blood donations are donating blood themselves at the same rate as anybody else. This is an unrealistic assumption, but we will use it.” (16). 
These models implicitly describe transmission through blood transfusion exactly like person-to-person transmission by droplet infections. The key innovation in our model is the simultaneous incorporation of 6 functions that all depend explicitly on the age of a person: 1) natural death rate, 2) rate of receiving a blood transfusion, 3) rates of donor recruitment, 4) donor loss, 5) death rate associated with transfusions, and 6) proportion of transfusion recipients at increased risk for death. The age-dependent effects of these processes cannot be ignored. Peak ages of donor activity (≈22 years) and of receiving a blood transfusion (≈70 years) are quite distinct and ≈50 years apart. This age pattern does not favor the spread of infection by blood transfusion. Another factor that acts against the infection becoming endemic is the transfusion-associated death rate. The good quality of the follow-up data of nearly 3,000 patients helped to incorporate realistic assumptions about the survival probabilities of transfusion recipients (4). The only data available about the joint distribution of blood donor activity and history of a blood transfusion was the CJD case-control study performed in Göttingen, Germany (7). The length of the incubation period plays a major role in transmission dynamics and hence was subject to a sensitivity analysis. The model does not account for possible changes of infectivity during the incubation period. The model represents a worst-case scenario because it assumes 100% infectivity throughout the period of infection. Even under this extreme assumption, donor exclusion can prevent only 0.9% (or 5%) of the expected deaths, assuming the incubation period has a mean duration of 16 (or 50) years. The main explanation for this surprising result is that most infected donors have been infected by the alimentary route and never received any blood transfusion and, therefore, are not eligible for donor exclusion. 
The present simulations have arbitrarily assumed a cumulative incidence of alimentary infection, about 25 per million (2,000 per 80 million). With pessimistic assumptions, the model predicts either 19.5 deaths per million for the short incubation period or 9 deaths per million for the long incubation period in the absence of spread through blood transfusion. This corresponds to at least 9 (36%) of 25 deaths attributable to the infection, which is ≈2 orders of magnitude higher than expected for vCJD in the United Kingdom. As of July 2006, the number of vCJD cases in the United Kingdom was 160. If we assume that the total number of cases will be 200, then our assumption corresponds to about 3.3 cases per million. Thus, at most, 1.4% of infected persons would die from the infection (unless a second wave of vCJD cases with a long incubation period occurs). According to our model, 0.9% of the deaths could be prevented by donor exclusion under the assumption of the short incubation period. In absolute numbers this would be ≈2 cases. In France, the total number of vCJD cases recorded through July 2006 is 18. Even under the assumption that this number represents only 35% of the total number of cases (17), the absolute expected number of prevented cases would be <1. In 1998, France decided to exclude donors with a transfusion history, primarily to reduce the spread of viruses. The present model could be modified to assess the effectiveness of excluding donors with transfusion history for preventing emerging infections with different modes of transmission and additional epidemiologic states, e.g., latent or immune. Our worst-case scenario assumptions of the epidemiology might seem similar to the situation in the United Kingdom. In Germany, no case of vCJD has been reported, which indicates that the expected number of cases in Germany is at least 2 orders of magnitude less than that in the United Kingdom. 
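The UK comparison rests on the same kind of arithmetic. Note that the population figure of ≈60 million used below is an assumption of this sketch, not a number given in the text:

```python
uk_population_millions = 60.0  # assumed UK population in millions (not from the text)
assumed_total_cases = 200      # the text's assumption for the final UK case count
prevalence_per_million = 235   # infections/million from the tonsil/appendix study

cases_per_million = assumed_total_cases / uk_population_millions  # ≈3.3
case_fatality = cases_per_million / prevalence_per_million        # ≈1.4%
```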
This latter aspect was considered in the interpretation of our model by a working group commissioned by the German Federal Minister of Health, which recommended in April 2006 that persons with a transfusion history not be excluded from donating blood (18). Our analysis enables different countries to perform their own risk assessment and choose a strategy according to the absolute number of cases observed or expected. Dr Dietz has been head of the Department of Medical Biometry at the University of Tübingen, Germany, since 1976. During 1969–1976, he was statistician at the World Health Organization in Geneva, Switzerland. His main interest is the application of mathematical models in the field of infectious diseases, in particular malaria and other parasitic diseases. The German CJD Surveillance study was supported by a grant from the German Ministry of Health (Az 325-4471-02/15 to Inga Zerr and H. A. Kretzschmar). Helpful discussions about previous versions of the model took place with the Working Group Overall Blood Supply Strategy with regard to vCJD, Germany (Chairman R. Seitz).
- Llewelyn CA, Hewitt PE, Knight RS, Amar K, Cousens S, Mackenzie J, et al. Possible transmission of variant Creutzfeldt-Jakob disease by blood transfusion. Lancet. 2004;363:417–21.
- Peden AH, Head MW, Ritchie DL, Bell JE, Ironside JW. Preclinical vCJD after blood transfusion in a PRNP codon 129 heterozygous patient. Lancet. 2004;364:527–9.
- Castilla J, Saa P, Soto C. Detection of prions in blood. Nat Med. 2005;11:982–5.
- Wallis JP, Wells AW, Matthews JN, Chapman CE. Long-term survival after blood transfusion: a population based study in the North of England. Transfusion. 2004;44:1025–32.
- Valleron AJ, Boelle PY, Will R, Cesbron JY. Estimation of epidemic size and incubation time based on age characteristics of vCJD in the United Kingdom. Science. 2001;294:1726–8.
- Ghani AC, Donnelly CA, Ferguson NM, Anderson RM. Updated projections of future vCJD deaths in the UK. BMC Infect Dis. 2003;3:4.
- Zerr I, Brandel JP, Masullo C, Wientjens D, de Silva R, Zeidler M, et al. European surveillance on Creutzfeldt-Jakob disease: a case-control study for medical risk factors. J Clin Epidemiol. 2000;53:747–54.
- Bacchetti P. Unexamined assumptions in explorations of upper limit for cases of variant Creutzfeldt-Jakob disease. Lancet. 2001;357:3–4.
- Bacchetti P. Age and variant Creutzfeldt-Jakob disease. Emerg Infect Dis. 2003;9:1611–2.
- Bacchetti P. Uncertainty due to model choice in variant Creutzfeldt-Jakob disease projections. Stat Med. 2005;24:83–93.
- Collins SJ, Lawson VA, Masters CL. Transmissible spongiform encephalopathies. Lancet. 2004;363:51–61.
- Clarke P, Ghani AC. Projections of the future course of the primary vCJD epidemic in the UK: inclusion of subclinical infection and the possibility of wider genetic susceptibility. J R Soc Interface. 2004;2:19–31.
- Velasco-Hernandez JX. A model for Chagas disease involving transmission by vectors and blood transfusion. Theor Popul Biol. 1994;46:1–31.
- Roberts MG, Heesterbeek JA. A new method for estimating the effort required to control an infectious disease. Proc R Soc Lond B Biol Sci. 2003;270:1359–64.
- Huang XC, Villasana M. An extension of the Kermack-McKendrick model for AIDS epidemic. Journal of the Franklin Institute–Engineering and Applied Mathematics. 2005;342:341–51.
- Inaba H, Sekine H. A mathematical model for Chagas disease with infection-age-dependent infectivity. Math Biosci. 2004;190:39–69.
- Chadeau-Hyam M, Alperovitch A. Risk of variant Creutzfeldt-Jakob disease in France. Int J Epidemiol. 2005;34:46–52.
- German Federal Ministry of Health Working Group. Overall blood supply strategy with regard to variant Creutzfeldt-Jakob disease (vCJD). Transfus Med Hemother. 2006;33(suppl 2):vii–39.
Suggested citation for this article: Dietz K, Raddatz G, Wallis J, Müller N, Zerr I, Duerr H-P, et al. Blood transfusion and spread of variant Creutzfeldt-Jakob disease. Emerg Infect Dis [serial on the Internet].
2007 Jan [date cited]. Available from http://wwwnc.cdc.gov/eid/article/13/1/06-0396.htm
The routine test that most glaucoma patients dislike most (and find most difficult) is perimetry, or visual field testing. It can be difficult because of the concentration needed to respond every time a light is seen, no matter how dim or bright, and, especially after a long day at the hospital spent sitting in waiting rooms and walking corridors, the results can be highly variable. That is bad news for the patients who have to undergo the test, and bad news for their doctors, because variable results can be unreliable or contradictory. Nevertheless, perimetry is the only test we have that can measure a glaucoma patient's functional ability to see, and as such it is a vital element in the battery of tests used to monitor glaucoma and to decide upon any changes that may be required in an individual's management plan in order to protect their vision. How much better would it be if the same functional information could be obtained without the patient having to do anything: nothing to concentrate on, no buttons to click, no decisions to make? An instrument that can measure the visually evoked potential (or visually evoked response) automatically already exists; in fact, various instruments have been under development for several years. They work by measuring the tiny changes in the electrical activity of the brain when the eye 'sees' something, in this case a light from a perimeter-type setup. Researchers at the New York Eye and Ear Infirmary have just published a paper in the Journal of Glaucoma called 'Short Duration Transient Visual Evoked Potentials in Glaucomatous Eyes' showing that a visual evoked potential instrument can measure a patient's ability to see accurately and objectively, and that the measurements correlate well with the structural changes in the retina and the optic nerve head that can now be measured using optical coherence tomography (OCT). The test itself takes between four and six minutes per eye.
Dr Robert Ritch, one of the study authors, is quoted as saying, 'The SD-tVEP results correlated significantly with the severity of visual field damage, but the VEP results were obtained objectively, which helps give eye care specialists more confidence in the findings.' It remains to be seen whether and when this technology will become commonplace in the glaucoma management regimens used in different countries around the world, but it would seem to have the potential to make life a little easier for glaucoma patients and to provide their doctors with more robust information about precisely how well they can see, regardless of an individual's level of fatigue or ability to concentrate.
David J Wright FIAM
International Glaucoma Association
Volume 9, Number 12—December 2003 Generalized Vaccinia 2 Days after Smallpox Revaccination To the Editor: Hospital and public health personnel are currently receiving smallpox vaccination in a national effort to increase preparedness for a possible deliberate release of smallpox (1). Generalized vaccinia (GV) is a typically self-limited adverse event following vaccination (incidence 23.4–238.2 cases per million primary vaccinees and 1.2–10.8 cases per million revaccinees) (2,3). We report the clinical course and laboratory diagnosis of GV in a 37-year-old woman with a history of at least one uncomplicated childhood inoculation that left a vaccination scar. She was revaccinated on March 12, 2003. Before revaccination, the patient reported no contraindications to vaccination and denied any conditions that typically weaken the immune system (including HIV/AIDS, leukemia, lymphoma, other cancers, radiation, chemotherapy, organ transplant, posttransplant therapy, immunosuppressive medications, severe autoimmune disease, and primary immune deficiency). The patient also confirmed that she did not have a skin disease or a history of eczema or atopic dermatitis, nor was she pregnant or allergic to a vaccine component. On March 14, some 44 hours after vaccination, the patient reported headache, chills, pruritus, chest pain (described as chest “heaviness”), recurrent vomiting, and maculopapular lesions. The lesions, characterized by the patient as “mosquito bites,” first appeared on the face, then the legs, and then the trunk and upper extremities. Maximum oral temperature was 37.7°C. Over the next 4 days, approximately 30 pustules developed, several of which began to drain. Nausea persisted, and the patient had a stiff neck and recurring chest tightness, but physical examination, echocardiography, electrocardiography, and chest radiography were within normal limits. 
By March 25, the patient’s lesions had all scabbed, the scabs had fallen off, and she felt well enough to return to work. Pustular material obtained on March 18 from two unroofed lesions on the shoulder (Figure) and back tested positive at the Wadsworth Center-Axelrod Institute, New York State Department of Health, for vaccinia virus DNA by a TaqMan (Applied Biosystems, Foster City, CA) real-time polymerase chain reaction assay provided by the Laboratory Response Network, Centers for Disease Control and Prevention. The presence of orthopoxvirus was confirmed by electron microscopy of lesion fluid. This case is the first report of laboratory-confirmed GV among recent civilian vaccinees and is notable for the occurrence of GV in a revaccinee. GV was not reported among 132,656 military personnel recently revaccinated (4). A single case of GV in a revaccinee among 38,514 recent civilian vaccinations (5) yields a rate that exceeds the rate in revaccinees observed in earlier reports, and the difference would be even greater if civilians who received primary vaccinations were excluded. This laboratory confirmation of GV demonstrates the potential of laboratory testing to determine the cause of a post-vaccination rash. Possible cases of GV in earlier surveillance efforts represented a mixed group of rashes, some of uncertain etiology (6). This patient’s clinical course is notable for the onset of GV 2 days after vaccination, compared with a mean of 9 days (range 1–20+) after (generally primary) vaccination (2), and suggests that viremia can occur quickly after vaccination. We thank the patient, as well as our colleagues Peter Drabkin, Christina Egan, Cassandra Kelly, Debra Blog, Stephen Davis, William Samsonoff, Kimberly A Musser, and Jill Taylor.

- Wharton M, Strikas RA, Harpaz R, Rotz LD, Schwartz B, Casey CG, et al. Recommendations for using smallpox vaccine in a pre-event vaccination program.
Supplemental recommendations of the Advisory Committee on Immunization Practices (ACIP) and the Healthcare Infection Control Practices Advisory Committee (HICPAC). MMWR Recomm Rep. 2003;52:1–16.
- Lane JM, Ruben FL, Neff JM, Millar JD. Complications of smallpox vaccination, 1968. N Engl J Med. 1969;281:1201–8.
- Neff JM, Levine RH, Lane JM, Ager EA, Moore H, Rosenstein BJ, et al. Complications of smallpox vaccination, United States, 1963. II. Results obtained by four statewide surveys. Pediatrics. 1967;39:916–23.
- Grabenstein JD, Winkenwerder W. US military smallpox vaccination program: experience. JAMA. 2003;289:3278–82.
- Centers for Disease Control and Prevention. Smallpox Vaccination Program status by state [cited October 9, 2003]. Available from: URL: http://www.cdc.gov/od/oc/media/spvaccin.htm
- Neff JM, Lane JM, Pert JH, Moore R, Millar JD, Henderson DA. Complications of smallpox vaccination. I. National survey in the United States, 1963. N Engl J Med. 1967;276:125–32.

Suggested citation for this article: Miller JR, Cirino NM, Philbin EF. Generalized vaccinia 2 days after smallpox revaccination. Emerg Infect Dis [serial online] 2003 Dec [date cited]. Available from: URL: http://wwwnc.cdc.gov/eid/article/9/12/03-0592

- Page created: February 10, 2011
- Page last updated: February 10, 2011
- Page last reviewed: February 10, 2011
- Centers for Disease Control and Prevention, National Center for Emerging and Zoonotic Infectious Diseases (NCEZID), Office of the Director (OD)
Selective Internal Radiation Therapy [SIRT]: SIR-Spheres®
A/Prof Lourens Bester, Dr James Burnes
Date last modified: May 01, 2009

1. What is SIRT?

Selective Internal Radiation Therapy (SIRT) is a treatment for liver cancers or tumours that delivers millions of tiny radioactive microspheres, or beads, called SIR-Spheres® directly to the liver tumours. SIR-Spheres® are about one third the diameter of a strand of hair, and they release a type of radiation energy called ‘Beta’ radiation. Beta radiation is a common type of radiation used in other nuclear medicine therapy and diagnostic procedures. SIR-Spheres® are approved for the treatment of liver tumours that cannot be removed by surgery. These may be tumours that start in the liver (also known as primary liver cancer), or they may be tumours that have spread to the liver from another part of the body (also known as secondary liver cancer or metastases). To perform SIRT, a small puncture or incision is made in the groin and a small thin tube called a catheter is placed in the artery and guided into the liver using X-ray pictures or images. SIR-Spheres® are delivered through the catheter and are then carried by the bloodstream directly to the tumours in the liver, where they lodge only in the small vessels feeding the tumour. The majority of SIR-Spheres® lodge at the outside edge of the tumour/s, and the radiation has a direct destructive effect on the tumour itself and on the vessels feeding the tumour. Destroying the vessels feeding the tumour means that the tumour/s can no longer be supplied with the nutrients in the bloodstream. Most patients after SIRT will see a reduction or stabilisation of their liver tumours. For most patients, treatment will result in increased survival time, but not a permanent cure.

2. How do I prepare for SIRT?

Your treatment team will want to know about your previous cancer history and any other medical conditions you may have.
They will then conduct a number of initial tests to ensure that it is possible for you to receive SIRT safely. You will normally have two procedures, during which you will be conscious (awake), but you may have some sedation to make you drowsy so that you feel comfortable. The treatment requires an overnight stay in hospital. Your treating doctor will advise what arrangements need to be made for hospital admission. You may need to check with the hospital what you need to bring with you for admission.

3. What happens during SIRT?

SIRT normally comprises two procedures:

Preparation or “work-up”

The first procedure for SIRT is the preparation phase for the treatment, commonly known as the work-up, which includes a radiology procedure known as an angiogram (see Angiography). The purpose of the angiogram, or mapping, is to prepare your liver for SIRT. In preparation for the angiogram you will have blood tests to evaluate your kidney function and blood clotting. During the mapping procedure your interventional radiologist (a specialist doctor) may block (embolise) some of the liver blood vessels communicating with other blood vessels to minimise the potential for the SIR-Spheres® to travel to areas outside your liver (e.g. the stomach or intestine). You will also receive a small amount of radioactive particles (MAA), similar in size to SIR-Spheres®, to check the amount of blood that flows from the liver to the lungs. This is also a trial run to see how the SIR-Spheres® will behave when injected into your body. During the angiogram a small amount of dye (or contrast medium) is injected through a catheter (a thin plastic tube) inserted into an artery. The dye travels down the catheter into the liver and highlights the vessels. Images or pictures are taken throughout the procedure. The interventional radiologist now has a map of your liver vessels to follow so that the catheter can be advanced closer to the site of the tumours in the liver.
Most patients say they feel a little warm when the dye is injected. Throughout the whole procedure you should try to stay as still as possible to avoid moving or dislodging the catheter. This work-up angiogram is done while you are conscious (awake), and a local anaesthetic is given so that there is minimal discomfort around the puncture wound. The work-up procedure for SIRT is normally done on an outpatient basis. You will be observed after the work-up procedure and may then return home. While you are being observed, your doctor will review the X-ray images to determine your suitability for SIRT and to see if you are suitable to proceed with the SIR-Spheres® implant.

Implant of SIR-Spheres®

You will need to return to the hospital within 7–10 days of the work-up, when a second angiogram is performed to implant the SIR-Spheres® (SIRT). It is identical to the work-up angiogram except that SIR-Spheres® are inserted. You do need to fast before the angiogram. For the procedure, you are admitted to hospital and then taken on a bed to the angiography suite, where an interventional radiologist (a specialist doctor) will perform the second angiogram. The purpose of the angiogram this time is to implant the SIR-Spheres®. The catheter used during the angiogram is guided by the interventional radiologist through the artery and placed close to the tumours in the liver. While the implantation angiogram is taking place, the nuclear medicine department prepares an individually prescribed dose of radiation for you (SIR-Spheres®). The prescribed dose of SIR-Spheres® is put into a specialised perspex box, which is transported from the nuclear medicine department to the angiography suite where your catheter is being inserted. The perspex box is then brought to the side of the bed and the catheter inserted into your artery is connected to the box. Once connected, the system is ready for the infusion of SIR-Spheres®.
SIR-Spheres® are then infused from the perspex box through the catheter into your liver. During this infusion the radiologist may also inject contrast medium into the catheter to ensure that the catheter has not moved during the procedure. The whole procedure may take about 60 minutes. Once the infusion is complete, the catheter is removed from the liver and the box used to deliver the SIR-Spheres® microspheres is taken back to the nuclear medicine department. Once the catheter has been removed, the interventional radiologist will compress the puncture wound where the catheter was inserted for around 10 minutes. This compression is done to stop excess bleeding at the site of the puncture. You then stay near the angiography suite for about 3 hours for observation to ensure there are no problems following the procedure. After observation you are taken to a general ward for an overnight stay. In rare circumstances, on the advice of the treating doctor, you may be required to stay more than one night in hospital. Most patients are discharged the day following the procedure. You may experience pain and nausea during the implantation process; the interventional radiologist and the angiography team will make sure that you receive the necessary medications to make you comfortable. SIRT is usually done as a single treatment, but some patients may be re-treated with SIRT. Re-treatment may occur in rare circumstances and may be indicated where new tumours grow in the liver despite SIRT, or where previously treated tumours start to enlarge.

4. Are there any after effects of SIRT?

You should not have any serious after effects when SIR-Spheres® are correctly administered. You may experience some of the following side effects, or you may not experience any side effects at all:

- Pain in the abdomen that may last for a few hours: this can be well controlled with pain medication.
- Nausea: this may be caused by the angiography contrast medium that is injected into the vessels, or by the SIR-Spheres® infusion into the liver. It is a short-term effect (several days) which can be well controlled with anti-nausea medication.
- Reduced appetite: some patients may feel a loss of appetite for several days.
- Tiredness: this may be caused by the effect of the radiation on the liver tumours and may last several days.
- Fever: the destruction of the liver tumours and the by-products of this destruction may cause a short-term fever (up to a week). This can be well controlled with paracetamol or a similar over-the-counter analgesic.
- Radiation in the body: your treating doctor will advise you on the effects of radiation and will advise that contact with other people should be minimised for at least the first week after treatment. This means that prolonged, close physical contact should be avoided, such as sitting or sleeping next to children or pregnant women. Please feel free to discuss this with your radiologist.

5. How long does SIRT take?

As previously described, SIRT involves two procedures. The work-up procedure for SIRT may take about 90 minutes, and is normally done on an outpatient basis. You will be observed for three hours after the work-up procedure and may then return home. While you are being observed, your doctor will review your X-ray images to determine your suitability for SIRT and to see if you are suitable to proceed with SIRT. You need to return to the hospital within 7–10 days for the SIR-Spheres® implant angiogram. The second part of the procedure, the SIR-Spheres® implant, may take about 60 minutes. Your groin puncture wound will be compressed for approximately ten minutes to stop bleeding at the wound site. You will be observed for three hours and then taken to a general ward for an overnight stay at the hospital.
The time taken for the two SIRT procedures is significantly less than may be encountered with regular weekly or bi-weekly chemotherapy treatments.

6. What are the risks of SIRT?

There are possible risks with SIRT. Your treating interventional radiologist is a highly trained specialist doctor who is experienced in minimising the risks of this procedure. The risks of the procedure are:

- Inadvertent delivery of SIR-Spheres® to the stomach or pancreas may cause abdominal pain and nausea, acute pancreatitis or peptic ulceration (stomach ulcer).
- High levels of implanted radiation and/or excessive shunting of SIR-Spheres® to the lung may lead to radiation pneumonitis (too much radiation to the lungs). Shunting to the lungs may occur in rare circumstances in liver cancer patients. It is caused by an excessive pressure build-up in the blood in the liver, which results from the increased amount of blood flowing from your liver arteries to supply the liver tumours. Occasionally this pressure becomes so great that blood is ‘shunted’, or moved, from the liver to the lungs, which may result in a dry cough. All patients who have SIR-Spheres® will have their suitability for treatment assessed in the work-up phase; patients deemed to be at risk of radiation pneumonitis would not be recommended to undergo SIRT.
- Excessive radiation to the normal liver may result in radiation hepatitis (too much radiation to the liver). Patients considered at risk of this, because of poor liver function as a result of their liver tumours, would not be recommended to undergo SIRT.
- Inadvertent delivery of SIR-Spheres® to the gall bladder may result in inflammation of the gall bladder.

You should discuss these risks with your treating radiologist so that you can weigh the benefits of the treatment against the risks before you both make a decision.

7. What are the benefits of SIRT?
The benefits of SIRT have been demonstrated in the following areas:

- Combining SIRT with standard chemotherapies provides greater survival benefit than using chemotherapy alone.
- Survival benefit has been demonstrated in patients whose cancer has not responded to all forms of chemotherapy and who then received SIRT as a sole treatment.
- Reducing the size of tumours.
- Reducing tumour sizes to the point that liver surgeons can remove the tumour from the liver.
- Improving quality of life for the patient.
- Allowing some patients to have a liver transplant.

8. Who does SIRT?

SIRT is performed by a number of different doctors and hospital departments working closely together. A patient who is being considered for SIRT is first carefully assessed by the referring doctor for suitability. A referring doctor is usually a specialist and can be a medical oncologist, surgeon, gastroenterologist or other specialist doctor. If your referring doctor thinks you are suitable for SIRT, they will send a fax or letter to the interventional radiologist for you to be assessed for suitability to undertake SIRT. An interventional radiologist (a specialist X-ray doctor who uses X-ray equipment to perform operations on patients) performs the SIRT procedure. The interventional radiologist is responsible for performing the work-up procedure and for assessing patient suitability to undertake SIRT. It is important to be aware that the majority of patients will be suitable for SIRT. The interventional radiologist works closely with a nuclear medicine doctor to review scans following the work-up procedure to determine if you might be suitable to undertake SIRT. The nuclear medicine doctor is also closely involved with the procedure, and in conjunction with the interventional radiologist both doctors are responsible for two areas. These areas include the procedural (treatment-related) as well as the diagnostic and consulting work (i.e.
interpreting scans and advising other doctors and the patient on the treatment approach and results of treatment). The procedural aspect involves all work done in the angiography suite and the SIR-Spheres® dose preparation done in the nuclear medicine department. The angiography suite is a specially equipped room in the radiology department where all interventional radiology procedures are performed. The interventional radiologist will perform the patient catheterisation and SIR-Spheres® dose infusion, and will take the necessary CT scans.

9. Where is SIRT done?

SIRT is performed within the angiography suite in the radiology department of the hospital. The angiography suite is a special room where scans of the patient can be performed and specialised procedures called ‘interventional procedures’ are carried out by the interventional radiologist. Interventional procedures are a sub-specialised area of radiology in which a catheter is inserted into an artery or vein of the patient.

10. When can I expect the results of my SIRT?

The aim of SIRT is to reduce the size of tumours in the liver so as to prolong the life of the patient, while at the same time maintaining or improving quality of life. After SIRT your doctor may assess the results of your treatment in a number of ways, including the following:

Computed Tomography (CT) scan

A CT scan (sometimes known as a CAT scan) is a radiology image of the liver on which the tumours can be seen. Prior to your SIRT you will have a CT scan. This initial scan, sometimes known as a “baseline” scan, is used for comparison with scans taken after SIRT. A follow-up scan can then be used by your doctor to see if your tumours have reduced in size since the treatment. The follow-up CT scan is normally done from 4 weeks up to 3 months after your treatment.

Positron Emission Tomography (PET) scan

A PET scan is another type of scan that is sometimes used by your doctor to determine your response to SIRT.
If your doctor determines that you require this scan, you will have a scan before your treatment and another scan following your SIRT.

Your doctor may also wish to follow up your response to SIRT by looking at certain tumour breakdown products, also called “markers”, in your blood. A tumour marker is a substance found in the blood which may be elevated, or higher, in cancer. As SIRT acts to destroy the liver tumours, your doctor may take a blood sample before your SIRT and another blood sample after your treatment to evaluate whether there has been a reduction in the marker over time. Your doctor will monitor one or more of these markers. A common marker for colorectal cancer is CEA (carcinoembryonic antigen).

11. Useful websites about SIRT:

- Information about SIR-Spheres® microspheres can be found at the manufacturer’s website:
Later diagnosis, less first-course treatment and race are the main reasons for the difference in mortality between rich and poor breast cancer patients. A new study, published in the open access journal BMC Cancer, suggests that targeted interventions to increase breast cancer screening and treatment coverage in worse-off patients could reduce much of the socioeconomic disparity in survival. Xue Qin Yu, who conducted the analysis while employed with the American Cancer Society, studied the records of more than 112,500 women diagnosed with breast cancer in the United States between 1998 and 2002. These women were followed up until the end of 2005 and, as expected, socio-economic status (SES) was significantly associated with the likelihood of surviving. Yu said, "Women living in the lowest SES areas had the lowest percentage of early stage cancer, and the highest percentage of advanced stages, at the time of diagnosis. The proportion of black women living in the lowest SES areas was nearly four times higher than that of the highest SES areas. Furthermore, women in the lowest SES areas were significantly less likely to receive first course treatment". Yu suggests that the unfavourable stage distribution for women from the lowest SES areas was likely caused by lower mammography rates. Lack of health insurance and lower financial resources are known to be associated with lower mammography rates and lack of, or delayed, follow-up after an abnormal mammogram. Race may be associated with breast cancer survival independent of other factors, but this study has limited ability to separate out these multiple dimensions. Finally, the observed poorer survival in non-metropolitan areas may be due to factors related to access to and time waiting for chemotherapy and/or radiotherapy after breast cancer surgery. Notes to Editors: 1. 
Socioeconomic disparities in breast cancer survival: relation to stage at diagnosis, treatment and race. Xue Qin Yu. BMC Cancer (in press). During embargo, article available here: http://www.biomedcentral.com/imedia/1262456858271531_article.pdf?random=878021 After the embargo, article available at the journal website: http://www.biomedcentral.com/bmccancer/ Please name the journal in any story you write. If you are writing for the web, please link to the article. All articles are available free of charge, according to BioMed Central's open access policy. Article citation and URL available on request at [email protected] on the day of publication. 2. BMC Cancer is an open access journal publishing original peer-reviewed research articles in all aspects of cancer research, including the pathophysiology, prevention, diagnosis and treatment of cancers. The journal welcomes submissions concerning molecular and cellular biology, genetics, epidemiology, and clinical trials. BMC Cancer (ISSN 1471-2407) is indexed/tracked/covered by PubMed, MEDLINE, CAS, Scopus, EMBASE, Current Contents, Thomson Reuters (ISI) and Google Scholar. 3. BioMed Central (http://www.biomedcentral.com/) is an STM (Science, Technology and Medicine) publisher which has pioneered the open access publishing model. All peer-reviewed research articles published by BioMed Central are made immediately and freely accessible online, and are licensed to allow redistribution and reuse. BioMed Central is part of Springer Science+Business Media, a leading global publisher in the STM sector.
Age-Related Cognitive Decline About This Condition A decline in memory and cognitive (thinking) function is considered by many authorities to be a normal consequence of aging.1, 2 While age-related cognitive decline (ARCD) is therefore not considered a disease, authorities differ on whether ARCD is in part related to Alzheimer’s disease and other forms of dementia3 or whether it is a distinct entity.4, 5 People with ARCD experience deterioration in memory and learning, attention and concentration, thinking, use of language, and other mental functions.6, 7 ARCD usually occurs gradually. Sudden cognitive decline is not a part of normal aging. When people develop an illness such as Alzheimer’s disease, mental deterioration usually happens quickly. In contrast, cognitive performance in elderly adults normally remains stable over many years, with only slight declines in short-term memory and reaction times.8 People sometimes believe they are having memory problems when there are no actual decreases in memory performance.9 Therefore, assessment of cognitive function requires specialized professional evaluation. Psychologists and psychiatrists employ sophisticated cognitive testing methods to detect and accurately measure the severity of cognitive decline.10, 11, 12, 13 A qualified health professional should be consulted if memory impairment is suspected. Some older people have greater memory and cognitive difficulties than do those undergoing normal aging, but their symptoms are not so severe as to justify a diagnosis of Alzheimer’s disease. Some of these people go on to develop Alzheimer’s disease; others do not. Authorities have suggested several terms for this middle category, including “mild cognitive impairment”14 and “mild neurocognitive disorder.”15 Risk factors for ARCD include advancing age, female gender, prior heart attack, and heart failure. 
People with ARCD experience deterioration in memory and learning, attention and concentration, thinking, use of language, and other mental functions. Healthy Lifestyle Tips Cigarette smokers and people with high levels of education appear to have some protection against ARCD.16 The reason for each of these associations remains unknown. However, as cigarette smoking generally is not associated with other health benefits and results in serious health risks, doctors recommend abstinence from smoking, even by people at risk of ARCD. A large, preliminary study in 1998 found associations between hypertension and deterioration in mental function.17 Research is needed to determine if lowering blood pressure is effective for preventing ARCD. A randomized, controlled trial determined that group exercise has beneficial effects on physiological and cognitive functioning, and well-being in older people. At the end of the trial, the exercisers showed significant improvements in reaction time, memory span, and measures of well-being when compared with controls.18 Going for walks may be enough to modify the usual age-related decline in reaction time. Faster reaction times were associated with walking exercise in a British study.19 The results of these two studies suggest a possible role for exercise in preventing ARCD. However, controlled trials in people with ARCD are needed to confirm these observations. Psychological counseling and training to improve memory have produced improvements in cognitive function in persons with ARCD.20, 21, 22 Copyright © 2016 Healthnotes, Inc. All rights reserved. www.healthnotes.com The information presented by Healthnotes is for informational purposes only. It is based on scientific studies (human, animal, or in vitro), clinical experience, or traditional usage as cited in each article. The results reported may not necessarily occur in all individuals. 
Self-treatment is not recommended for life-threatening conditions that require medical treatment under a doctor's care. For many of the conditions discussed, treatment with prescription or over the counter medication is also available. Consult your doctor, practitioner, and/or pharmacist for any health problem and before using any supplements or before making any changes in prescribed medications. Information expires June 2016.
The epidemic form of Bovine Spongiform Encephalopathy (BSE) is generally considered to have been caused by a single prion strain but at least two strain variants of cattle prion disorders have recently been recognized. An additional neurodegenerative condition, idiopathic brainstem neuronal chromatolysis and hippocampal sclerosis (IBNC), a rare neurological disease of adult cattle, was also recognised in a sub-set of cattle submitted under the BSE Orders in which lesions of BSE were absent. Between the years of 1988 and 1991 IBNC occurred in Scotland with an incidence of 7 cases per 100,000 beef suckler cows over the age of 6 years. When the brains of 15 IBNC cases were each tested by immunohistochemistry, all showed abnormal labelling for prion protein (PrP). Immunohistological labelling for PrP was also present in the retina of a single case available for examination. The pattern of PrP labelling in brain is distinct from that seen in other ruminant prion diseases and is absent from brains with other inflammatory conditions and from normal control brains. Brains of IBNC cattle do not reveal abnormal PrP isoforms when tested by the commercial BioRad or Idexx test kits and do not reveal PrPres when tested by Western blotting using stringent proteinase digestion methods. However, some weakly protease resistant isoforms of PrP may be detected when tissues are examined using mild proteinase digestion techniques. The study shows that a distinctive neurological disorder of cattle, which has some clinical similarities to BSE, is associated with abnormal PrP labelling in brain but the pathology and biochemistry of IBNC are distinct from BSE. The study is important either because it raises the possibility of a significant increase in the scope of prion disease or because it demonstrates that widespread and consistent PrP alterations may not be confined to prion diseases. 
Further studies, including transmission experiments, are needed to establish whether IBNC is a condition in which prion protein is abnormally regulated or whether it is yet a further example of an infectious cattle prion disease. The transmissible spongiform encephalopathies, or prion diseases, are fatal neurodegenerative diseases characterized by the accumulation of a post-translationally modified variant of the host-coded prion protein (PrP). Until recently, only one form of naturally occurring cattle prion disease was recognized. However, extensive testing of sheep and cattle destined for the human food chain has recently revealed the presence of hitherto unsuspected variant forms of transmissible spongiform encephalopathy of cattle [1,2] and also of sheep. Idiopathic brainstem neuronal chromatolysis and hippocampal sclerosis (IBNC) is a disorder of adult cattle which has some clinical similarity to bovine spongiform encephalopathy [4,5]. It was initially recognised from histological examination of cattle brains submitted as part of the UK statutory reporting of BSE suspects. The disease is rare. In the period from 1988 to 1991 it occurred at a rate of 7 cases per 100,000 beef suckler cows over the age of 6 years and 2.68 cases per 100,000 dairy cows of the same age. The mean age of onset is 9 years, with a range of 4–16 years. Most cases have been reported in Scotland; cases have also been diagnosed in England and Wales, but not from outside the UK. Most cases of IBNC occur singly on farms, but two farms have been identified which have experienced two cases each (MJ, personal observations). The proportion of IBNC cases detected through the early 1990s was relatively consistent at 12–14% per year of the BSE-negative case subset. During the peak of the BSE epidemic, 27 IBNC cases were recognized in Scotland in one year (MJ, personal observations).
IBNC cases continued to be found in recent years but there has been a fall in absolute numbers within the BSE negative subset. At least some IBNC cases have clinical features that distinguish them from BSE, and the fall in histological diagnosis of IBNC cases may reflect an increasingly critical appraisal of clinical signs when suspect BSE cases are examined in the field. The pathological lesions of IBNC are distinctive and characterised by four types of histological change. Neuronal degeneration and axonal degeneration involving brainstem and cranial nerve nuclei and radices of cranial nerves, accompanied by a non-suppurative inflammation proportionate to the degenerative changes, are invariably present. In approximately half the cases examined there is a spongiform change involving grey matter of medial and lateral geniculate nuclei, thalamus, hippocampus, striatum and cerebral cortex, together with hippocampal degeneration and sclerosis involving extensive loss of neurons. The spongiform changes of IBNC involve neuroanatomical areas different from those vacuolated in BSE-affected brains. Testing for a number of different metabolic disorders, including vitamin B, vitamin E and selenium deficiency, and for antigens to Louping ill virus, Aujeszky's disease virus, Borna virus and Bovine Virus Diarrhoea virus, failed to show any significant abnormalities (MJ personal observations). Immunohistochemical studies for PrP were initially performed in the mid-1990s on IBNC brain tissues using antibodies raised to murine PrP. No abnormal PrP was detected. Five brains were tested for scrapie-associated fibrils by negative stain electron microscopy and were negative (unpublished data). PrP immunohistochemistry tests on brains from IBNC cases performed during the 1990s used antibodies of low affinity for bovine PrP and methodologies of lower sensitivity than those currently available. 
Recent re-examination of tissues from IBNC cases using more sensitive labelling methods and antibodies capable of detecting lower levels of bovine PrP consistently revealed PrP labelling in all cases tested. This report describes the results of immunohistochemical and biochemical methods for PrP detection in a series of IBNC cases. Sixteen cases of IBNC were retrieved from the pathology archives at the VLA Lasswade laboratory. Cases were from cattle that were between 5 and 15 years of age when killed between 1993 and 2005. To control for inflammatory and degenerative changes and for time of tissue preservation in paraffin wax, cases of malignant catarrhal fever (a herpes virus infection of cattle), encephalic listeriosis, non-suppurative encephalitis and BSE, as well as cattle brains with no significant morphological changes, were also retrieved from the same archive. These latter cases had also been preserved in paraffin wax since 1992–1994. Tissues of IBNC cases available for immunohistochemistry and/or biochemistry are listed in Table 1. For histology and immunohistochemical testing, whole brains were available from 9 IBNC cases: representative samples of medulla at the obex and cerebellar peduncles, midbrain, thalamus, striatum, cerebellum, hippocampus and cerebral cortices were examined. From a single cow, additional tissues of eye, spleen, adrenal gland and lymph node were also examined. From the remaining seven cases only brainstem was available for testing. Table 1. List of frozen and fixed IBNC tissues available for testing. The protocol for sampling suspect BSE cases has altered in the UK over time. Consequently, most IBNC cases from the early part of the epidemic lacked samples of frozen tissue for biochemical analyses. From the later part of the epidemic, only medulla was available for biochemical testing; from 2003 whole brains have been routinely frozen and retained. 
Samples of frozen brain were available from 7 cases in which a diagnosis of IBNC was established, based on the histology of medulla. For only one IBNC case was half a brain available for biochemistry and the other half for histology. The tissues available for examination are listed in Table 1. Histology and immunohistochemistry. All brain sections available were stained with haematoxylin and eosin. Additional blocks of tissue of medulla, midbrain and thalamus were impregnated with silver according to Glees and Marsland's modification of Davenport's method for degenerate axons. For immunohistochemistry, paraffin wax embedded tissues were sectioned at 5 μm, mounted on treated glass slides (Superfrost Plus; Menzel-Glaser, City, Germany) and dried overnight at 37°C. Initially, immunohistochemistry carried out in 1993–5 on IBNC cases used the 1B3 antibody, a polyclonal antibody raised in rabbits to scrapie-associated fibrils extracted from ME7-infected mouse brain. Methods used for epitope demasking employed only a formic-acid retrieval stage and did not use autoclaving. Subsequently, in 2007, immunohistochemistry was carried out as described by González et al. Briefly, antigen retrieval included immersion of tissue sections in 98% formic acid for 5 min and autoclaving in 0.2% citrate buffer for 5 min at 121°C. After two blocking steps (to quench endogenous peroxidase activity and to remove non-specific tissue antigens), incubation with the primary antibody was carried out overnight at 4°C. Subsequent steps were performed using a commercial immunoperoxidase technique (Vector-elite ABC kit; Vector Laboratories, Peterborough, UK), after which sections were immersed in 0.5% copper sulphate to enhance the immunoperoxidase colour reaction. Finally, sections were counterstained with Mayer's haematoxylin. Seven PrP antibodies were used, all of which were first titrated on BSE-infected sheep or cattle brains to determine the effective dilution range. 
The antibodies, their binding or eliciting sequences, and the dilutions used are shown in Table 2. All of these antibodies are considerably more sensitive for detecting bovine PrP than the antibodies used for IBNC labelling in 1993–5. Biotinylated antibodies were used for secondary enhancement: goat anti-rat was used to detect antibody R145 and a universal horse anti-mouse/rabbit was used to detect all other antibodies. As antibody controls, omission of the primary antibody and anti-isotype antibodies were also employed. Table 2. Antibodies and dilutions used for immunohistochemistry and for biochemistry and their eliciting or mapped sequences. TeSeE Western blotting. Sample extraction was carried out according to the manufacturer's instructions (Bio-Rad, California, USA; TeSeE Western Blot) with several modifications. In brief, brain tissue was ribolysed to give a 20% (w/v) homogenate. The homogenate was then incubated with DNAase (1/10 dilution at a concentration of 2.5 mg/ml in 0.19 M MgCl2) at room temperature for 5 minutes. The samples were then digested for 10 minutes with 0.3 units/ml proteinase K (Sigma-Aldrich, Dorset, UK); 0.3 units/ml is an in-house nomenclature equivalent in activity, when compared with the TAME test (Pierce), to 0.3 μl of Bio-Rad test proteinase K, or 4 μl/ml proteinase K (Bio-Rad). Digestion was stopped by adding 1/25 Pefabloc SC (Fluka-Sigma-Aldrich, Dorset, UK; 46.7 mM in distilled H2O). Following precipitation and centrifugation at 15,000 g for 7 minutes, the pellets were incubated at 100°C for 5 minutes in 100 μl Laemmli solution (with 5% (v/v) beta-mercaptoethanol and 2% (w/v) SDS). A second centrifugation was performed at 15,000 g for 15 min. The supernatants were stored frozen at -20°C overnight. 
For analysis, the supernatants were heated at 100°C for 5 minutes, loaded on a 12% Criterion XT Bis-Tris SDS gel (Bio-Rad) and subjected to electrophoresis in NuPAGE running buffer (Invitrogen, California, USA) at 200 V for 35 minutes. Proteins were transferred to a PVDF membrane (Bio-Rad) at 115 V for 60 min using NuPAGE transfer buffer. Blots to be exposed to the SHA31 antibody were blocked for one hour with the solution provided by the manufacturer. Where antibodies F99, SAF84 and P4 were used, a blocking buffer of 5% milk powder in PBS supplemented with Tween 20 (PBST) was used. The membranes were incubated for one hour with the primary antibody: either SHA31 (Bio-Rad) at a 1/10 dilution in PBST, SAF84 (Spi Bio, Paris, France) at 0.8 μg/ml, P4 (R-Biopharm, Darmstadt, Germany) at 0.4 μg/ml, or F99 (VMRD, Inc., Pullman, Washington State, USA) at 2 μg/ml. The membranes were then incubated with goat anti-mouse IgG antibody (Bio-Rad) conjugated to horseradish peroxidase, diluted 1/10 in PBST, and visualized by chemiluminescence (ECL; Amersham, UK). For the 4 μl/ml proteinase K concentration, the addition of DNAase and Pefabloc was omitted. ELISA for determination of PrPres. 20% homogenates were prepared as for the Western blot using the Bio-Rad protocol with the modifications described. The pellets obtained were solubilised by incubating at 100°C for 5 minutes in Reagent C. The ELISA was carried out as described by the manufacturer. The absorbance was measured at 450 nm and 620 nm. EIA for determination of aggregated PrP using a ligand-based diagnostic test. For the determination of aggregated PrP in the absence of proteinase K, the Idexx (Maine, USA) HerdChek BSE test was performed on samples according to the manufacturer's instructions with no modifications or deviations. Briefly, samples were mixed with the working plate diluent and then loaded on a BSE antigen-capture EIA plate and incubated for 2.5 hours at room temperature. 
Aggregated PrP was detected using the conjugated anti-PrP antibodies provided with the kit. Absorbance was read at 450 nm and 620 nm. Determination of positive values. The cut-off values for the Bio-Rad TeSeE ELISA and Idexx assays were calculated as the mean of the absorbance values of 90 confirmed BSE-negative brainstem samples + 3 standard deviations. This value was calculated as 0.166 absorbance units (AU) for the TeSeE ELISA and 0.137 AU for the Idexx assay. The mean absorbance value and standard deviation using 0.3 μl/ml proteinase K were 0.066 ± 0.033 for the Bio-Rad assay (n = 90; range 0.022–0.198) and 0.056 ± 0.027 for the HerdChek assay (n = 90).

The histology of each IBNC case was reviewed and the lesions seen were as previously described (Figure 1). Severe brainstem neuronal chromatolysis (Figure 1a), often accompanied by nuclear degeneration and occasional amphophilic intranuclear inclusions, was present in several brainstem nuclei, being consistently present in the red nucleus, vestibular complex, dorsal motor nucleus of the vagal nerve and raphe. In addition, there was severe axonal and myelin degeneration, prominently affecting the radices and roots of cranial nerves (Figure 1b). The degenerative changes of neurons were accompanied by marked gliosis of parallel severity, proceeding to gemistocytosis and non-suppurative inflammation of meninges and perivascular spaces. Marked spongiform change with neuronal degeneration was present in the midbrain, thalamus, striatum and cortex. Not fully described in previous papers are the different patterns of vacuolation found in IBNC. Three patterns of vacuolation were recognized: firstly, a large loculated (foamy) vacuole typically found in the Betz cell layer of the cerebral cortex (Figure 1c) and also in the thalamus. Secondly, single or multiple, round or ovoid, grey matter vacuoles of a form and character similar to those of scrapie or BSE were present in the midbrain, thalamus and striatum. 
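The cut-off rule used for both assays (mean of 90 confirmed-negative absorbances plus 3 standard deviations) can be sketched as a short script; the function names here are illustrative, not from the original study:

```python
# Sketch of the assay cut-off rule described in the text:
# cut-off = mean of confirmed-negative absorbances + 3 standard deviations.
# Only the rounded summary statistics reported in the text are used.

def cutoff(mean_neg, sd_neg, k=3.0):
    """Cut-off absorbance: mean of negative controls plus k standard deviations."""
    return mean_neg + k * sd_neg

def classify(absorbance, cut):
    """Label a sample relative to the assay cut-off."""
    return "above cut-off" if absorbance >= cut else "below cut-off"

tesee_cut = cutoff(0.066, 0.033)   # ~0.165, consistent with the reported 0.166
idexx_cut = cutoff(0.056, 0.027)   # ~0.137, matching the reported value

print(round(tesee_cut, 3), round(idexx_cut, 3))
print(classify(0.198, tesee_cut))  # the top of the reported negative range
```

Note that the reported negative range for the TeSeE assay (up to 0.198 AU) extends above the calculated cut-off, which is consistent with the text's observation that individual samples can cross the threshold on re-testing.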
These lesions were not present in medulla and were specifically absent from BSE target sites. Thalamic vacuoles were sometimes focally very numerous and sometimes associated with neuronal degeneration. These two forms of vacuolation were present in 6 of 9 cases where whole brain was available for examination. A third vacuolar change, in which the neuropil was pale and showed a diffuse lacy appearance or very small vacuoles, was also found (Figure 1d). Sub-total loss of CA1 and CA2 pyramidal neurons and neurons of the dentate gyrus was present in 3 of the 9 brains where hippocampus samples were available for examination (Figure 1e, f). No significant morphological lesions were recognised in eye or viscera. Figure 1. Histopathology of IBNC. 1a: Case 522/05. Chromatolytic and degenerative changes of neurons of the DMNV. HE, mag ×225. 1b: Case 3382/94. Silver staining (black) showing extensive degeneration of myelinated axons in descending fibres of the radix of the facial nerve. Glees and Marsland, mag ×250. 1c: Case 3335/94. Loculated and foamy vacuolation of neuropil adjacent to and impinging on pyramidal neurons of the occipital cortex. HE, mag ×250. 1d: Case 522/05. Fine, lacy vacuolation of neuropil in striatum. HE, mag ×500. 1e: Case 3382/94. Loss of neurons and reactive gliosis in the pyramidal neurons of the CA1 sector of the hippocampus. HE, mag ×210. 1f: Case 3382/94. Gemistocytic replacement gliosis in the CA4 sector (dentate gyrus) of the hippocampus. HE, mag ×240. The effective operating dilution for each of the antibodies was determined for BSE-affected sheep or cattle brains and is listed in Table 2. The efficiency of each antibody for detection of disease-specific PrP found in cattle BSE differed, with F99, 6C2, SAF84 and 12F10 all producing high-intensity labelling. PrP labelling of sections from IBNC cases was obtained with all antibodies, but the ranking of sensitivity for detection of PrP in IBNC cases and of disease-specific PrP in cattle BSE cases was not the same. 
In particular 6C2, which gives strong labelling of disease-specific PrP accumulations in BSE-affected cattle, gave weak labelling in IBNC cases. F99 and SAF84 labelled some 'dark' neurons in control brain tissues, and F99 produced some weak, diffuse neuropil labelling. All other antibodies were 'clean' with no background staining. IBNC cases showed a distinctive pattern of PrP labelling that was present in sections of all IBNC cases but was absent from brains of all infectious or inflammatory conditions, brains with no significant lesions and brains from BSE cases. The same staining patterns were found in material which had been preserved in wax or in fixative for widely differing time intervals; they were reproduced in replicate staining runs and identical patterns were found with different antibodies. Thus, we consider that this labelling represents an abnormal form of PrP labelling. However, immunohistochemical methods alone do not reveal whether such accumulations are due to increased expression or altered distribution of normal forms of the protein, or whether they are PrP accumulations that are abnormal in conformation or aggregation, as they are in BSE and other prion diseases. PrP labelling was detected in all IBNC cases, though not in every case at all sites examined (Table 3). F99, SAF84 and L42 antibodies gave the greatest amounts of labelling. In most cases labelling was widespread throughout the brain. In three cases labelling was more or less confined to the striatum, and in one of these three the labelling was restricted to the putamen. HE-stained sections of these three cases did not show any large loculated neuropil vacuoles. All other cases gave a wide distribution of labelling including medulla, cerebellum, midbrain, thalamus, striatum, hippocampus and cerebrum. In brainstem sections, labelling was present in the spinal tract nucleus of the trigeminal nerve; in the cerebellum, most labelling involved the cerebellar molecular layer. Table 3 lists the animals tested, their ages, the tissues available for biochemical testing, and the presence or absence of PrP by immunohistochemistry. Most PrP labelling was found in grey matter (Figure 2b) in the form of globular, ring-shaped or 'C'-shaped patterns (Figure 2c). Occasionally, several of these were arranged in a line. Very often this pattern of labelling was present at the rim of small vacuoles (Figure 2c). Some bundles of white matter in which there was strong vacuolation at the grey matter interface were also strongly PrP-labelled at this interface, with the above pattern. The intensity of PrP labelling was greatest in those cases in which lesions of degeneration and spongiform change were most marked. When the sites of labelling were compared with HE-stained sections, this pattern of labelling corresponded to the 'lacy' neuropil type of micro-vacuolation, both in specific location within individual sections and in overall brain distribution. The larger loculated or foamy forms of vacuoles and the more typical scrapie-like vacuoles were not specifically labelled. Figure 2. PrP immunohistochemistry in IBNC. 2a: Case 1165/03. A degenerate neuron in the DMNV shows intracytoplasmic labelling for PrP. F99 antibody, mag ×550. 2b: Case 522/05. PrP labelling is present in grey matter of the striatum; the white matter is unlabelled. F99 antibody, mag ×40. 2c: Case 3431/93. Detail of grey matter labelling of the caudate nucleus showing the association of labelling with the rims of vacuoles and with the neuropil adjacent to small vacuoles. F99 antibody, mag ×480. 2d: Case 2786/93. Diffuse labelling of the glomerular zones of neuropil in the cerebellar granule cell layer. F99 antibody, mag ×200. 2e: Case 486/96. Retina showing PrP labelling of the inner and outer plexiform layers of the sensory retina. F99 antibody, mag ×120. In the cerebellum of two cases, diffuse labelling was found in the granular layer and in the neuropil between granule cell neuronal nuclei (Figure 2d). 
Diffuse labelling of both inner and outer plexiform layers was found in the single eye examined (Figure 2e), but no labelling was found in the limited range of viscera. Rarely, degenerate chromatolytic neurons showed intracytoplasmic labelling (Figure 2a). From 2002, all cattle taken under the BSE Regulations were routinely tested by the BioRad ELISA. The records of 7 IBNC cases from within this period were located; all results were recorded as negative. Repeat testing of samples using standard commercial BioRad and Idexx test kits also gave negative results. The results of BioRad tests using reduced levels of proteinase K are shown in Table 4. Of the total of 24 IBNC samples of different brain sites tested, 15 (62.5%) gave values above those of the test kit negative control and also above the BSE-negative brain pool control. Seven of these samples (29%) were above the calculated cut-off value. Values above and up to 2 times greater than the calculated cut-off were found for each case, but not for each brain site. Individual samples from two brains (2691/02 and 1165/03) which did not initially show results above the calculated cut-off values did so on re-testing (data not shown). Table 4. BioRad ELISA tissue results under conditions of mild protease digestion. Western blots were carried out on the samples shown in Table 3. In each case no residual protease-resistant PrP (PrPres) was found when 20 or 4 μl/ml of proteinase K was used. When 0.12 μl/ml or 0.3 μl/ml of proteinase K was used, a signal was detected on the blots of all samples, including those of the negative controls. IBNC samples were indistinguishable from negative controls with digestions of 0.12 μl/ml proteinase K. However, when 0.3 μl/ml of proteinase K was used, more residual PrP was detected in IBNC cases than in the controls (Figure 3) and with each of the antibodies tested (only illustrations of F99 are shown). Figure 3. PrP immunoblots on IBNC and BSE cases. Western blot of 5 different IBNC midbrain samples digested with 4 μl/ml (lanes 2, 4, 6, 11, 13) or 0.3 μl/ml (lanes 3, 5, 7, 12, 14) of proteinase K. An individual control brain digested with 4 μl/ml (lane 9) or 0.3 μl/ml (lane 10) proteinase K, and a control brain (lane 16) and pooled BSE brains (lane 17) digested with 20 μl/ml of proteinase K, are also present. Molecular markers are at lanes 1, 8 and 15. In each case 4 μl/ml proteinase K results in complete digestion of PrP. Although residual PrPc is present in control brain, each of the IBNC brains gives a stronger signal and multiple labelled bands when developed with the F99 antibody at 15 minutes' exposure. This study shows that the novel condition of cattle previously identified as IBNC and recognized from within the BSE suspect submissions abnormally expresses or accumulates PrP in brain and retina. However, this abnormal PrP is not composed of isoforms that are strongly resistant to protease digestion, suggesting that it is not present in the form of large aggregates. Immunohistochemical demonstration of PrP labelling and increased levels of PrP mRNA have previously been described in adult humans affected with acute vascular disorders, in infants with perinatal hypoxia, and in experimental infarction of rodents. These findings are considered to represent upregulation of PrP expression as part of the oxidative stress response of neurons. We have also observed increased PrP in the cytoplasm of neurons undergoing ischaemic degeneration in a variety of sheep encephalopathies. Though ischaemic neuronal degeneration is not a feature of IBNC, the presence of PrP within the cytoplasm of some chromatolytic and degenerate neurons of IBNC-affected cattle is consistent with the idea that stressed neurons may respond by increasing PrP expression. 
Though present in only two cows, the pattern of PrP accumulation within the granule cell layer of the cerebellum is morphologically similar to that reported by several authors for Nor 98 types of transmissible spongiform encephalopathy, or prion disease, of sheep [3,10]. The PrP accumulation within the plexiform layers of the eye is similar to that of both natural scrapie and Nor 98 (MJ personal observations). However, the majority type of PrP labelling that occurred in all IBNC cases was found within the neuropil, mainly in the rostral neuraxis and cerebrum, a pattern not previously reported in cattle or in any other prion disorder. This novel pattern of labelling appears to correspond to a rarefaction or fine microvacuolation of neuropil as seen on standard HE-stained sections. Pathological, biochemical and bioassay data all suggest that the epidemic form of cattle BSE is a single strain. However, recent large-scale EU-wide surveillance for BSE has led to the unexpected discovery of rare and hitherto unknown prion diseases of cattle. Small numbers of atypical forms of cattle prion disease have now been recognized in several European countries, in the USA and in Japan, and can be distinguished by histological, molecular and transmission characteristics [12-14]. Bovine Amyloidotic Spongiform Encephalopathy (BASE) was the first of these novel cattle prion disorders to be recognized and was characterized by the presence of numerous small amyloid deposits of abnormal PrP. It was initially discovered in three aged Italian cattle and has subsequently been transmitted to transgenic mice [13,14]. A further variant of cattle prion disease, affecting cows between 8 and 15 years of age, was initially recognized in France and has also been transmitted to mice. BSE and these novel cattle prion diseases can be distinguished using biochemical and molecular methods and are now classified as C, H and L type isolates. 
H type and L type (BASE) isolates are defined according to the higher and lower positions, respectively, of the unglycosylated PrPres bands in Western blots when compared to the position of the corresponding band in classical BSE (C type) isolates. L type cases, formerly classified as BASE, have a distinctive glycopattern in which monoglycosylated PrPres predominates compared to BSE [12,14]. While IBNC cases are on average older than BSE cases, they occupy a similar age class of cattle to that of H and L type cattle prion diseases; however, IBNC can be readily distinguished from H, L and C type cattle prion disease by morphological pathology and by the absence of PrPres under stringent conditions of protease digestion. Not all abnormal PrPres isoforms detected in brains of animals affected with prion disease are resistant to stringent protease digestion. The PrPres of two sheep of the ARR/ARR PrP genotype affected with a classical scrapie-like disease accumulated unusually protease-sensitive isoforms of PrP. Similarly, the transmissible prion disease of sheep known as Nor 98 and related conditions (often referred to as atypical scrapie) also have weakly protease-resistant PrPres [3,16]. Nor 98 does not appear to transmit readily to other sheep under field conditions; it also does not transmit to conventional mice, although it does readily transmit disease to one strain of transgenic mouse which substantially over-expresses the VRQ allele of sheep PrP [16,17]. The transgenic PG14 mouse also has PrPres which is even more readily digested than that found in Nor 98, but this prion protein disorder has not so far been successfully transmitted. The biochemical analyses of limited numbers of IBNC cases clearly show that highly aggregated forms of protease-resistant PrP are not present in brain tissue. 
However, when the data from the ELISA and immunoblot tests using mild protease digestion are compared with those of normal control material, it is possible that smaller aggregates of PrP molecules may be present. The present results indicate that there are changes in PrP expression or accumulation in the neurodegenerative cattle disorder known as IBNC. The pathology and biochemistry of IBNC are quite distinct from those of other prion diseases of cattle and other species, but the pathology does include grey matter spongiform changes. The transmissibility of this disorder is undetermined. These results are interesting as they show either that the range of prion diseases and associated pathology is still wider than previously thought, or that substantial abnormalities of prion protein expression may be associated with brain lesions unconnected with classical prion diseases. Further biochemical and transmission studies are needed to determine which of these possibilities is correct. MJ, LG and SM performed the histology and immunohistochemistry. BBP performed the biochemical analyses, and both BBP and LT analyzed and interpreted the biochemical studies. MJ drafted the manuscript with contributions from all other authors. The authors are grateful to Robert Higgins and Sandra Scholes for tracing IBNC cases in VLA archive files, to Leigh Thorne and Sally Everest for assistance with biochemical testing, and to Jim Hope for critical comment on the manuscript. Jan Langeveld and Katherine O'Rourke provided the 12B2 and F99 antibodies. We are grateful to Yvonne Spencer and Marion Simmons for confirming the IHC labelling pattern of an IBNC case in their laboratory. Casalone C, Zanusso G, Acutis P, Ferrari S, Capucci L, Tagliavini F, Monaco S, Caramelli M: Identification of a second bovine amyloidotic spongiform encephalopathy: molecular similarities with sporadic Creutzfeldt-Jakob disease. Vet Rec 2003, 153:202-208. Vet Rec 1992, 131:332-337. 
Vet Rec 1992, 131:359-362. Vet Rec 1997, 140:260-261. Gonzalez L, Martin S, Begara-McGorum I, Hunter N, Houston F, Simmons M, Jeffrey M: Effects of agent strain and host genotype on PrP accumulation in the brain of sheep naturally and experimentally affected with scrapie. McLennan NF, Brennan PM, McNeill A, Davies I, Fotheringham A, Rennison KA, Ritchie D, Brannan F, Head MW, Ironside JW, Williams A, Bell JE: Prion protein accumulation and neuroprotection in hypoxic brain damage. Am J Pathol 2004, 165:227-235. Jacobs JG, Langeveld JP, Biacabe AG, Acutis PL, Polak MP, Gavier-Widen D, Buschmann A, Caramelli M, Casalone C, Mazza M, Groschup M, Erkens JH, Davidse A, Van Zijderveld FG, Baron T: Molecular discrimination of atypical bovine spongiform encephalopathy strains from a geographical region spanning a wide area in Europe. Capobianco R, Casalone C, Suardi S, Mangieri M, Miccolo C, Limido L, Catania M, Rossi G, Di Fede G, Giaccone G, Bruzzone MG, Minati L, Corona C, Acutis P, Gelmetti D, Lombardi G, Groschup MH, Buschmann A, Zanusso G, Monaco S, Caramelli M, Tagliavini F: Conversion of the BASE prion strain into the BSE strain: the origin of BSE? Buschmann A, Gretzschel A, Biacabe AG, Schiebel K, Corona C, Hoffmann C, Eiden M, Baron T, Casalone C, Groschup MH: Atypical BSE in Germany – proof of transmissibility and biochemical characterization. Groschup MH, Lacroux C, Buschmann A, Luhken G, Mathey J, Eiden M, Lugan S, Hoffmann C, Espinosa JC, Baron T, Torres JM, Erhardt G, Andreoletti O: Classic scrapie in sheep with the ARR/ARR prion genotype in Germany and France. Jeffrey M, Gonzalez L, Chong A, Foster J, Goldmann W, Hunter N, Martin S: Ovine infection with the agents of scrapie (CH1641 isolate) and bovine spongiform encephalopathy: immunochemical similarities can be resolved by immunohistochemistry. 
Thuring CM, Erkens JH, Jacobs JG, Bossers A, van Keulen LJ, Garssen GJ, Van Zijderveld FG, Ryder SJ, Groschup MH, Sweeney T, Langeveld JP: Discrimination between scrapie and bovine spongiform encephalopathy in sheep by molecular size, immunoreactivity, and glycoprofile of prion protein. Korth C, Stierli B, Streit P, Moser M, Schaller O, Fischer R, Schulz-Schaeffer W, Kretzschmar H, Raeber A, Braun U, Ehrensperger F, Hornemann S, Glockshuber R, Riek R, Billeter M, Wuthrich K, Oesch B: Prion (PrPSc)-specific epitope defined by a monoclonal antibody. Feraudet C, Morel N, Simon S, Volland H, Frobert Y, Creminon C, Vilette D, Lehmann S, Grassi J: Screening of 145 anti-PrP monoclonal antibodies for their capacity to inhibit PrPSc replication in infected cells. Biochem Biophys Res Commun 1999, 265:652-657. O'Rourke KI, Baszler TV, Besser TE, Miller JM, Cutlip RC, Wells GAH, Ryder SJ, Parish SM, Hamir AN, Cockett NE, Jenny A, Knowles DP: Preclinical diagnosis of scrapie by immunohistochemistry of third eyelid lymphoid tissue.
Time to Educate Your Dermatologist about Diagnosing Skin Cancer

Suspicious pigmented lesions, or 'moles', are a common presenting problem in a GP's medical practice. Most lesions are benign; however, a small minority are malignant melanomas. When a GP recognizes a suspicious mole, caution is taken and the patient is referred to a dermatologist. Over the last twenty-five years, the incidence of melanoma has increased at an alarming rate, more than that of any other major cancer in the US. Worldwide, the incidence of melanoma is increasing faster than that of any other cancer, with an approximate doubling of rates every 10-20 years in countries with white populations. Although the incidence of melanoma increases with age, a third of all cases occur in people aged less than fifty years, and it is the second most common cancer in the 20-39 age group. The increasing incidence has been attributed to increases in UV exposure, both natural and artificial, and to associated advances in early diagnosis. Other risk factors include genetic predisposition, fair complexion, sunburn-susceptible skin types, and family history. In 2009, up to nineteen out of every twenty lesions referred to a skin specialist (dermatologist) by a GP under the two-week cancer standard were benign. Alternative approaches are therefore required to increase the precision of assessment of pigmented skin lesions in primary care and thereby reduce the number of unnecessary referrals of benign lesions. This will be beneficial not only in secondary care, but also in reducing unnecessary patient anxiety. The MoleMate system is a small hand-held scanner linked to a computer program, designed for use in general practices to help the doctor or nurse assess a mole. The system uses SIAscopy (a non-invasive scanning technique) to produce images of the mole. The images are then assessed and a decision made as to whether or not the mole should be referred and looked at in more detail by a dermatologist. 
The FDA-approved SIMSYS-MoleMate Skin Imaging System, a non-invasive skin cancer screening tool, is a significant advance in the early detection of potentially life-threatening moles and lesions. Physicians have found the SIMSYS-MoleMate Siascope hand-held device easy to learn and use, and report that it rapidly provides accurate images of the pigment, blood, and collagen below the mole or lesion. Now, for the first time, physicians can more accurately evaluate suspicious moles and lesions in a non-invasive, pain-free way. Experts also believe it may reduce the need for time-consuming and expensive biopsies. For more information about SIMSYS-MoleMate, contact MedX Health Corp.
Scoliosis and the Child's Spine
By Peter Fysh, DC

Scoliosis is defined as an abnormal curvature of the spine greater than 10 degrees in the sideways, or coronal, plane. Since scoliosis is a physical finding and does not represent a diagnosis, its cause should be investigated in all cases and its classification established prior to the commencement of any treatment program. Scoliosis can be readily detected during a thorough physical examination, and many cases are found during routine spinal screenings. Scoliosis screening is such an effective process for locating previously unidentified cases that screenings are becoming a common occurrence in schools. Many school screenings are now carried out by local chiropractors.

Examination of a patient for scoliosis requires undressing and careful examination of the entire spine. A scoliosis which is evident with the patient in the standing position, but which disappears when the patient sits, is most commonly classified as a functional scoliosis. A scoliosis which is evident in the standing position and which persists with the patient in the forward bending position is most likely a structural scoliosis. The forward bending test is performed by having the patient flex forward at the waist to 90 degrees with the hands clasped together in front. With the patient in this forward bent position, alignment of the ribs and vertebral spinous processes should be evaluated. If a distortion is detected, such as a unilateral rib hump, prominent scapula or obvious deviation of the spine to one side, then x-ray films should be obtained. Any patient with signs of apparent scoliosis should have their spine x-rayed to determine the extent of the scoliosis. Scoliosis is evaluated using the following criteria: the angle of the scoliosis, the side to which the curve deviates, the upper and lower vertebrae which form part of the curve, and the apex vertebra, i.e., the vertebra which is furthest from the spinal midline.
Evaluation of any spinal curvature detected on an x-ray film should be made using either the Cobb method or the Risser-Ferguson method to determine the extent of the curvature. The patient with suspected functional scoliosis should be evaluated for leg-length inequality or pelvic distortion. Frequently, these scolioses can be corrected by spinal and pelvic adjusting.

Congenital scoliosis is associated with failure of appropriate formation of the spine during embryological development. It may be due to specific vertebral anomalies, such as hemivertebrae, or to failure of proper segmentation of the vertebral structures. Congenital scoliosis frequently presents concurrently with other developmental anomalies, such as genitourinary anomalies, cardiac anomalies and spinal cord tethering.

The goal of any management program is to prevent the progression of the scoliosis. Classically, bracing has been the method of choice to prevent further progression of the curve. Initially, watching and evaluating the curve, especially a small one, may be appropriate. Some curves may be nonprogressive, but this can only be determined by evaluation over a period of 6-12 months.

The most common form of scoliosis is idiopathic scoliosis, which means scoliosis of unknown origin. Idiopathic scoliosis has no associated back pain; therefore any young patient who presents with scoliosis accompanied by back pain should be evaluated carefully for an alternative cause for their complaint. Idiopathic scoliosis is the most common classification of scoliosis and is reserved for scoliosis which cannot be classified into any other category. Idiopathic scoliosis may therefore be considered a diagnosis of exclusion. It is more common in females and tends to progress more rapidly during an adolescent growth spurt. Scoliotic curvatures which are less than 25 degrees can be safely treated in the chiropractor's office, without referral for orthopedic opinion.
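The Cobb method mentioned above measures the angle between lines drawn along the superior endplate of the upper end vertebra and the inferior endplate of the lower end vertebra. A minimal sketch of that geometry, with hypothetical landmark coordinates (illustrative only, not a clinical measurement tool):

```python
import math

def line_angle_deg(p1, p2):
    """Tilt of a line through two (x, y) points, in degrees from horizontal."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def cobb_angle(upper_endplate, lower_endplate):
    """Cobb angle: the angle between the upper end vertebra's superior
    endplate line and the lower end vertebra's inferior endplate line."""
    a = line_angle_deg(*upper_endplate)
    b = line_angle_deg(*lower_endplate)
    diff = abs(a - b)
    return min(diff, 180 - diff)  # report the acute angle

# Hypothetical endplate landmarks digitized from a standing film
upper = ((100, 40), (160, 52))   # tilts one way
lower = ((95, 300), (158, 282))  # tilts the other way
print(round(cobb_angle(upper, lower), 1))  # about 27.3 degrees
```

With these illustrative points the curve exceeds the 25-degree threshold that, per the text, should prompt referral for possible bracing.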
Once the curvature reaches or exceeds 25 degrees, the patient should be referred for possible bracing. Some scoliotic curvatures have more of a tendency to progress than others. Such curvatures are seen in females whose scoliosis developed before the onset of menses, who have not yet reached skeletal maturity, and whose curvature measures 20 degrees or greater. Scoliosis associated with neuromuscular disorders, e.g., cerebral palsy, tends to be progressive and usually requires bracing to minimize deterioration.

Other Scoliosis Classifications

Identifiable scoliosis should be classified according to the following table, as such classification helps to determine not only the cause, but also the likely progression and prognosis.

CLASSIFICATION OF SCOLIOSIS
Infantile (0-3 years)
Juvenile (4 years to puberty)
Adolescent (puberty to epiphyseal closure)
Mycobacterium tuberculosis DNA Fingerprinting from Epidemiologically Linked Case Pairs Authors: Bennett, Diane E.; Onorato, Ida M.; Ellis, Barbara A.; Crawford, Jack T.; Schable, Barbara; Byers, Robert; Kammerer, J. Steve; Braden, Christopher R.; Publisher: CDC Open Access DNA fingerprinting was used to evaluate epidemiologically linked case pairs found during routine tuberculosis (TB) contact investigations in seven sentinel sites from 1996 to 2000. Transmission was confirmed when the DNA fingerprints of source and secondary cases matched. Of 538 case pairs identified, 156 (29%) did not have matching fingerprints. Case pairs from the same household were no more likely to have confirmed transmission than those linked elsewhere. Case pairs with unconfirmed transmission were more likely to include a smear-negative source case (odds ratio [OR] 2.0) or a foreign-born secondary case (OR 3.4) and less likely to include a secondary case less than 15 years old (OR 0.3). Our study suggests that contact investigations should focus not only on the household but also on all settings frequented by an index case. Foreign-born persons with TB may have been infected previously in high-prevalence countries; screening and preventive measures recommended by the Institute of Medicine could prevent TB reactivation in these cases. Investigating persons who have had close contact with tuberculosis (TB) cases is an essential element of public health programs to control and eliminate TB (1,2). These contact investigations are done primarily to discover persons who may require treatment for latent TB infection and also to find and treat additional persons with TB. While not usually highly contagious, TB is generally transmitted to persons who have shared indoor air space frequently or for a long period of time with a person who is infectious (3). 
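The odds ratios quoted above (OR 2.0, 3.4, 0.3) come from standard 2×2-table arithmetic. A minimal sketch with hypothetical cell counts chosen to reproduce an OR of 2.0; the study's actual counts are not given in this excerpt:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
        a = exposed pairs with the outcome,    b = exposed pairs without,
        c = unexposed pairs with the outcome,  d = unexposed pairs without.
    OR = (a/b) / (c/d) = (a * d) / (b * c)."""
    return (a * d) / (b * c)

# Hypothetical counts: unconfirmed-transmission pairs with a
# smear-negative source case vs. pairs with a smear-positive source.
print(odds_ratio(40, 100, 20, 100))  # prints 2.0
```

An OR of 2.0 means the odds of unconfirmed transmission are twice as high when the source case is smear-negative, matching the paper's reported association.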
Factors that may influence transmission include prolonged hours of contact during the infectious period, close proximity to the person with TB, and lack of ventilation and ultraviolet light in a shared environment. Generally, close contacts who live with a person identified with active TB or who habitually spend time indoors in close proximity to this person are investigated first. If no evidence of TB transmission is found in these close contacts, the investigation ceases. If transmission has occurred, the investigation may be extended. The “stone-in-the-pond” principle, a technique in which concentric circles of contact persons around the case are sequentially investigated, is practiced in many countries
Hypotrichosis simplex (HS) or hereditary hypotrichosis simplex (HHS) is characterized by reduced pilosity over the scalp and body (with sparse, thin, and short hair) in the absence of other anomalies. Prevalence is unknown, but numerous large pedigrees with several affected members have been described. Both men and women are equally affected. Hair loss is diffuse and progressive and usually begins during early childhood. Body hair may also be sparse, with variable involvement of the eyebrows, eyelashes, and pubic and axillary hair. There are no anomalies of the skin, nails or teeth. A scalp-limited form, hypotrichosis simplex of the scalp (see this term), has also been reported with mutations in the corneodesmosin (CDSN) gene.

Both autosomal dominant and recessive modes of transmission have been reported for HHS. Autosomal dominant HHS affecting both scalp and body hair has been reported in one Italian and two Pakistani families as due to mutations in the APCDD1 gene mapped to 18p11.22. Three clinically similar forms of localized autosomal recessive hypotrichosis (LAH1, LAH2 and LAH3) have been identified in recent years. The locus for LAH1 (involving mainly the hair of the scalp, chest, arms and legs) has been mapped to 18q12.1, and mutations in the desmoglein-4 (DSG4) gene have been identified. The locus for LAH2 (characterized by sparse or absent scalp, axillary and body hair, and sparse eyebrows and eyelashes) has been mapped to 3q27.3, and mutations in the lipase-H (LIPH) gene have been identified. The locus for LAH3 (characterized by progressive loss of scalp hair, sparse body hair, and normal eyebrows, eyelashes, and pubic and axillary hair) has been mapped to 13q14.11-q21.32, and mutations have been identified in a G protein-coupled receptor gene (P2RY5). These receptors belong to the group of lysophosphatidic acid (LPA) receptors and are therefore designated as LPAR6.

There is no treatment for hypotrichosis simplex available to date.

Last update: June 2010
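The locus and gene assignments listed above can be summarised as a small lookup table. This is only a restatement of the text, not an authoritative genetics reference:

```python
# Form -> (chromosomal locus, gene), as given in the text above.
# The scalp-limited CDSN form's locus is not stated in this excerpt.
HYPOTRICHOSIS_GENETICS = {
    "HHS (autosomal dominant)": ("18p11.22", "APCDD1"),
    "HS of the scalp": (None, "CDSN"),
    "LAH1": ("18q12.1", "DSG4"),
    "LAH2": ("3q27.3", "LIPH"),
    "LAH3": ("13q14.11-q21.32", "P2RY5/LPAR6"),
}

def gene_for(form):
    """Return the gene reported for a given hypotrichosis form."""
    locus, gene = HYPOTRICHOSIS_GENETICS[form]
    return gene

print(gene_for("LAH2"))  # prints "LIPH"
```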
A potential treatment for HIV may one day help people who are not responding to Anti-Retroviral Therapy, suggests new research published April 1 in The Journal of Immunology. Scientists looking at monkeys with the simian form of HIV were able to reduce the virus levels in the blood to undetectable levels, by treating the monkeys with a molecule called D-1mT alongside Anti-Retroviral Therapy (ART). Simian Immunodeficiency Virus (SIV) is very similar to Human Immunodeficiency Virus (HIV) and it is used to study the condition in animal models. In both HIV and SIV, the level of virus in the blood, or 'viral load', is important because when the viral load is high, the disease progresses and it depletes the patient's immune system. This eventually leads to the onset of Acquired Immune Deficiency Syndrome (AIDS), where the patient cannot fight infections which would be innocuous in healthy individuals. Currently, the 'gold standard' treatment for HIV is Highly Active Anti-Retroviral Therapy (HAART), a cocktail of drugs that reduces the viral load by stopping the virus from replicating. HAART can increase the life expectancy of an HIV-positive patient substantially if it works well. However, the treatment is not effective for around one in ten patients, partly because some develop resistance to the drugs used in HAART. The researchers, from Imperial College London, the National Cancer Institute, Bethesda, and Innsbruck Medical University, hope their study could ultimately lead to a new treatment that will help HAART to work more effectively in these people. In the new study, researchers gave daily doses of a modified amino acid called D-1mT to 11 rhesus macaques infected with SIV. All of the macaques had been treated with ART for at least four months. Eight of the macaques had higher viral loads (reaching up to 100,000 copies of the virus per millilitre of blood), because they were not responding completely to the treatment. 
However, three had undetectable viral loads (fewer than 50 copies of the virus per millilitre of blood), because ART was working well. The researchers took blood samples at six and 13 days. After six days, only three of the macaques had detectable SIV levels and after 13 days the virus could only be found in two of them, at very low levels (below 1,000 copies of the virus per millilitre of blood). The researchers repeated the research in eight macaques that were not being treated with ART but this time they found no change in viral load over 13 days. Dr Adriano Boasso from Imperial College London said: "HIV can have a devastating effect on people's lives but with advances in Anti-Retroviral Therapy it is becoming a more chronic, manageable disease. Unfortunately, treatment does not work for everyone - some people develop resistance to the drugs and when that happens, we start to run out of options for treating them and delaying the onset of AIDS. "Our early findings suggest that D-1mT could be used alongside antiretroviral therapy to stop the virus from replicating. The disease can only progress if the virus is replicating, so if we can slow replication down we can reduce the impact of the disease on the patient's life. We still need to figure out how D-1mT is working, then we can think about developing this as a potential treatment for HIV," added Dr Boasso. The results of the new study surprised the researchers because D-1mT did not appear to work in the way they had expected. They had believed it might reactivate the immune system, because D-1mT is able to block an enzyme called IDO, which HIV and SIV use to hold the immune system back. In healthy people, IDO prevents the immune system from attacking the body. HIV and SIV hijack the machinery that makes IDO and use it to stop the immune system from attacking them. 
In the new study, the researchers could find no evidence that D-1mT reactivated the immune response against SIV, although they do not exclude this possibility. They are now keen to carry out further research to explore how D-1mT is working. "The effect D-1mT seemed to have on viral load was really encouraging but it was a surprise to us - we didn't expect D-1mT to work only in macaques that were already being treated with ART. It seems that D-1mT synergises with ART and we would really like to find out how this works," said Dr Boasso. In healthy people, the IDO enzyme controls allergic reactions and autoimmune diseases and it also stops the foetus from being rejected in pregnancy. As D-1mT blocks IDO, the researchers say that its effects may need to be tested in SIV-infected macaques over a longer time, to determine if taking the drug could increase the risk of these conditions. D-1mT is currently in Phase I clinical trials to test its safety and potential efficacy as a treatment for cancer, which should indicate whether the drug is suitable for treating human patients. The researchers hope that if D-1mT proves safe in the initial trials for cancer and shows further promise for treating HIV, trials for using D-1mT as a treatment for HIV could begin as early as 5 years from now. The research was funded by the Intramural Research Program of the Center for Cancer Research, National Cancer Institute, National Institutes of Health (Bethesda, MD, USA). Cite This Page:
About 1,200 children and adolescents become daily smokers each day. In 2006, an estimated 3.3 million U.S. adolescents had used tobacco within the previous month. Compared to older populations in the United States, adolescents and young adults have the highest smoking prevalence. Although quitting benefits are greatest for those who quit at younger ages, numerous studies conducted in the 1990s found that counseling approaches and medications effective for adult quitters are not as effective for youth. A 2006 national survey funded by the Robert Wood Johnson Foundation (RWJF) and the Centers for Disease Control and Prevention's Office on Smoking and Health found that more than 60 percent of adolescent and young adult smokers had tried to quit within the previous year. Since RWJF first entered the tobacco-cessation field in the early 1990s, reducing tobacco use among youth has been one of its major goals. The cessation effort has focused on identifying how and why youth begin smoking and how they progress from occasional smokers to daily smokers, and on developing the best treatment methods for helping youth quit. What Is Known About Smoking Among Youth and Young Adults - Influences on youth quitting. Peers, family, individual attributes and external environment all influence the likelihood that kids will quit. (This factsheet from the Youth Tobacco Cessation Collaborative details the research on youth tobacco influences.) - Young smokers try to quit as often or more often than adult smokers. The RWJF-funded National Youth Smoking Cessation Survey found that more than 80 percent of young smokers want to quit. About three-quarters have tried to quit at least once and failed. (See this factsheet from the collaborative for more information on young smokers' quit attempts.) - Most young smokers don't use effective treatments when they try to quit. 
(See news brief on a summer 2007 article in the American Journal of Public Health about a study of young adult smokers' quit attempts.) - Treatments that are effective for adult smokers are not as appealing and effective for teen and young adult smokers. The 1996 and 2001 USPHS clinical practice guidelines panels did not recommend any counseling or pharmacotherapeutic treatment methods for youth. Key RWJF-Sponsored Initiatives: Research - Substance Abuse Policy Research Program (SAPRP) (1994–2010) and its predecessor, the Tobacco Policy Research and Evaluation Program (TPREP) (1992–96) have supported policy relevant, peer-reviewed research that increases understanding of policies for reducing harm caused by substance abuse, including tobacco use. These programs provided seminal findings showing the beneficial effects of tobacco tax and price increases, especially on young smokers, as well as on anti-smoking media campaigns and smoke-free air laws on smoking prevention and cessation. They also documented the synergistic effects of comprehensive and combined public health tobacco-control policies on population and treatment use. Results for youth smoking initiation and cessation can be found in SAPRP knowledge assets and reports ("Increasing the Use of Smoking Cessation Treatments," "Cigarette Taxes and Pricing" and "Research Agenda for Achieving a Smoke-Free Society"). (See Program Results on SAPRP, Program Results on TPREP and SAPRP Knowledge Assets.) - Tobacco Etiology Research Network (TERN) (1996–2006) was a transdisciplinary research network that focused on the causes and progression of tobacco use and dependence and on processes of tobacco use initiation and cessation in youth and young adults. TERN brought researchers together from a variety of fields to study the origins of tobacco dependence. 
- Bridging the Gap/ImpacTeen (1997–2012), co-directed by Frank Chaloupka, Ph.D., and Lloyd Johnston, Ph.D., is an interdisciplinary research program that has examined the links between youth behavior (including smoking) and national, state and local policy, economic and social factors. - Helping Young Smokers Quit (2001–09) was a two-phase program that surveyed the growing number of existing adolescent tobacco-cessation programs to identify major program offerings, both promising and potentially harmful treatment practices, and the resources/resource constraints in the real-world settings in which they are offered. While a growing number of teen cessation programs are available, little has been known about: - How many programs exist - Where they are located - What services they offer - What populations they serve - How they provide treatment Moreover, only a handful of such programs have been evaluated. In its second phase, the program disseminated effective, developmentally appropriate cessation programs for adolescent smokers. (See Advocacy & Communications, below.) - The National Youth Smoking Cessation Survey (started in July 2003, findings released in July 2006), co-led by Gary Giovino, Ph.D., and Dianne Barker, MPA, and co-funded by RWJF, the Centers for Disease Control and Prevention, and the National Cancer Institute, was a two-year longitudinal telephone survey that asked smokers aged 16 to 24 about their cessation activity. Findings have provided national estimates of quitting activity, have clarified factors associated with quitting among adolescents and young adults, and clarified youth preferences for different types of treatment. (See the report on rwjf.org.) Key RWJF-Sponsored Initiatives: Action to Put Research Into Practice - The Campaign for Tobacco-Free Kids (started by RWJF in 1996 and ongoing) advocates for policies and programs that prevent tobacco use initiation and promote cessation among youth and young adults. 
(See Program Results and the campaign's website.) - The Youth Tobacco Cessation Collaborative (YTCC) (1998–2008). In 1998, the major U.S. funders of tobacco-control research, program and policy initiatives (the U.S. Centers for Disease Control and Prevention, the Legacy Foundation, National Cancer Institute, National Institute on Drug Abuse and RWJF) joined forces to establish and fund the YTCC to accelerate progress in helping young people quit tobacco use. The ambitious goal of the YTCC was to ensure that every young tobacco user (aged 12 to 24) had access to appropriate and effective cessation interventions by the year 2010. The YTCC was formed to eliminate unplanned duplication of effort and to ensure, through their collective efforts, that the full range of key gaps would be addressed. (See Program Results and an RWJF abstract about the YTCC and its National Blueprint for Action.) Key RWJF-Sponsored Initiatives: Advocacy & Communications Around What Works - After conducting research on effective, developmentally appropriate cessation programs for adolescent smokers, Helping Young Smokers Quit (2001–09) (see above), in its second phase, addressed the critical need to disseminate information about them. In addition, researchers developed evaluation tools for use by any youth-oriented quit-smoking program. - A meeting of experts explored how regulations affect youth smoking-cessation research. (See Program Results.) - A meeting of tobacco-control donors to explore new ways to involve philanthropies in tobacco-control initiatives targeted at youth. (See Program Results.) Other Related Resources Funded by RWJF - A study comparing the use of smoking-cessation treatments among young adult smokers and older adults smokers. (See RWJF abstract; journal article in the American Journal of Public Health available from the abstract.) 
- Research that found that young adults do not take advantage of proven smoking-cessation treatments that can double their chances of quitting. (See RWJF news brief.) - A pilot study examined the efficacy of a motivational interviewing intervention for adolescent smokers. (See Program Results.) - A study that found that bupropion, an FDA-approved cessation medication is effective in helping teen smokers quit. (See Program Results.) - A study that indicates that evidence-based tobacco-cessation treatments are underused by young adult smokers. (See RWJF Research Highlight.) - A conference whose participants considered methodological issues in studying adolescent use of tobacco-cessation treatment. (See Program Results.)
The thyroid gland produces two related hormones, thyroxine (T4) and triiodothyronine (T3) (Fig. 341-1). Acting through thyroid hormone receptors α and β, these hormones play a critical role in cell differentiation during development and help maintain thermogenic and metabolic homeostasis in the adult. Autoimmune disorders of the thyroid gland can stimulate overproduction of thyroid hormones (thyrotoxicosis) or cause glandular destruction and hormone deficiency (hypothyroidism). In addition, benign nodules and various forms of thyroid cancer are relatively common and amenable to detection by physical examination. Structures of thyroid hormones. Thyroxine (T4) contains four iodine atoms. Deiodination leads to production of the potent hormone triiodothyronine (T3), or the inactive hormone reverse T3. The thyroid (Greek thyreos, shield, plus eidos, form) consists of two lobes connected by an isthmus. It is located anterior to the trachea between the cricoid cartilage and the suprasternal notch. The normal thyroid is 12–20 g in size, highly vascular, and soft in consistency. Four parathyroid glands, which produce parathyroid hormone (Chap. 353), are located posterior to each pole of the thyroid. The recurrent laryngeal nerves traverse the lateral borders of the thyroid gland and must be identified during thyroid surgery to avoid injury and vocal cord paralysis. The thyroid gland develops from the floor of the primitive pharynx during the third week of gestation. The developing gland migrates along the thyroglossal duct to reach its final location in the neck. This feature accounts for the rare ectopic location of thyroid tissue at the base of the tongue (lingual thyroid) as well as the occurrence of thyroglossal duct cysts along this developmental tract. Thyroid hormone synthesis normally begins at about 11 weeks' gestation. Neural crest derivatives from the ultimobranchial body give rise to thyroid medullary C cells that produce calcitonin, a calcium-lowering hormone. 
The C cells are interspersed throughout the thyroid gland, although their density is greatest at the junction of the upper one-third and lower two-thirds of the gland. Calcitonin plays a minimal role in calcium homeostasis in humans, but the C cells are important because of their involvement in medullary thyroid cancer. Thyroid gland development is orchestrated by the coordinated expression of several developmental transcription factors. Thyroid transcription factor (TTF)-1, TTF-2, and paired homeobox-8 (PAX-8) are expressed selectively, but not exclusively, in the thyroid gland. In combination, they dictate thyroid cell development and the induction of thyroid-specific genes such as thyroglobulin (Tg), thyroid peroxidase (TPO), the sodium iodide symporter (Na+/I−, NIS), and the thyroid-stimulating hormone receptor (TSH-R). Mutations in these developmental transcription factors or their downstream target genes are rare causes of thyroid agenesis or dyshormonogenesis, though the causes of most forms of congenital hypothyroidism remain unknown (Table 341-1)...
Dry socket, also termed alveolar osteitis, is a well recognised complication of tooth extraction. It is characterised by increasingly severe pain in and around the extraction site, usually starting on the second or third post-operative day, which may last for between ten and forty days. The pain may radiate, and pain in the ear is typically one of the symptoms of a dry socket in the mandible. The normal post-extraction blood clot is absent from the tooth socket(s), the bony walls of which are denuded and exquisitely sensitive to even gentle probing. Halitosis is invariably present. The condition probably arises as a result of a complex interaction between surgical trauma, local bacterial infection and various systemic factors.

There is great variation in reported incidence rates (1%-65%) between series, usually due to inconsistency in diagnostic criteria, variation in microbial prophylaxis and study sample heterogeneity. The true incidence rate probably lies somewhere between 3% and 20% of all extractions, with lower pre-molar and molar sockets most commonly involved. These guidelines are intended to assist in the prevention and management of the condition.

1. RISK FACTORS
1.1 Extraction of mandibular rather than maxillary teeth.
1.2 Extraction of third molars, especially impacted lower third molars.
1.3 Singleton extractions.
1.4 Traumatic and difficult extractions.
1.5 Female sex, especially if using oral contraception.
1.6 Poor oral hygiene and plaque control.
1.7 Active or recent history of acute ulcerative gingivitis or pericoronitis associated with the index tooth (teeth).
1.8 Smoking, especially if > 20 cigarettes per day.
1.9 Increased bone density, either locally or generally (eg Paget's disease and osteopetrosis).
1.10 Previous history of dry sockets following extractions.

2. PREVENTIVE MEASURES
2.1 A comprehensive history with identification of risk factors.
2.2 Wherever possible, pre-operative oral hygiene measures to reduce plaque levels to a minimum should be instituted.
2.3 Where the clinical history and/or radiographic examination suggests a particularly difficult extraction, consideration should be given to an elective trans-alveolar approach.
2.4 All extractions should be completed with the minimum amount of trauma, the maximum amount of care, and as rapidly as possible commensurate with the degree of difficulty and the experience of the operator. If the extraction is beyond the capability of the clinician, the patient should be referred to an appropriately capable clinician.
2.5 Avoid extracting lower third molars in the presence of active infection or ulcerative gingivitis.
2.6 For difficult lower third molar bony impactions, for immunocompromised patients, and for patients with a history of previous pericoronitis or ulcerative gingivitis, appropriate antibiotic prophylaxis should be administered.
2.7 Patients who smoke should be enjoined to cease the habit pre-operatively and for at least two weeks post-operatively whilst the socket(s) heals.
2.8 Wherever possible, for female patients using the oral contraceptive, extractions should be performed during days 23 through 28 of the tablet cycle.
2.9 Patients should be advised to avoid vigorous mouth rinsing for the first 24 hours post-extraction and to use gentle tooth brushing and mouth rinses for 7 days post-extraction.
2.10 Patients should be advised to return to the surgery/hospital immediately if they develop increasing pain or halitosis.
2.11 Pre- and post-operative verbal instructions should be supplemented with written advice to ensure maximum compliance.

3. DIAGNOSTIC CRITERIA
3.1 Severe and persistent pain arising 24-48 hours following tooth extraction, localised to the extraction socket(s), which is (are) sensitive to even gentle probing.
Typically the pain radiates to the ear with mandibular sockets.
3.2 Absence of a normal healthy post-extraction blood clot in the socket(s), which may be empty or contain fragments of disintegrating blood clot.

4. MANAGEMENT
4.1 All patients with signs and symptoms suggestive of a possible dry socket should be reviewed immediately by the operating clinician.
4.2 If appropriate, patients should be x-rayed to exclude the possibility of retained fragments of tooth or foreign body.
4.3 The affected socket(s) should be gently irrigated with warmed 0.12% chlorhexidine and all debris dislodged and aspirated. In extremely painful cases local anaesthesia may be required, and in this instance regional nerve blocks should be employed wherever possible.
4.4 The socket should be lightly packed with a dressing that contains an obtundant for pain relief and a non-irritant antiseptic to inhibit bacterial and fungal growth. The dressing should prevent the accumulation of food debris and protect the exposed bone from local irritation. Ideally the dressing should resorb and should not excite a host inflammatory or foreign body response.
4.5 Appropriate analgesics should be prescribed. Members of the non-steroidal anti-inflammatory group of drugs are recommended provided there are no individual medical contraindications to their use.
4.6 Patients' progress should be reviewed the following day, but they should be informed to return sooner if problems worsen in the intervening period. Admission to hospital is rarely required.
4.7 Steps 4.3 and 4.4 should be repeated as frequently as necessary to keep the patient comfortable and pain free. Analgesic efficacy should be reviewed and analgesic regimes altered appropriately. When it is considered that socket dressings are no longer required, the patient can be instructed in home socket irrigation techniques using an appropriate appliance and 0.12% chlorhexidine.
4.8 Patients should be kept under review until they are pain free and socket healing is ensured.
AIDS wasn't discovered until the early 1980s, when doctors in the United States noticed clusters of patients suffering from highly unusual diseases. First seen in gay men in New York and California, these illnesses included Kaposi's sarcoma, a rare skin cancer, and a type of lung infection carried by birds. Soon cases were also detected in intravenous drug users and recipients of blood transfusions. By 1982 the illness had a name: acquired immune deficiency syndrome. AIDS has since killed around 25 million people worldwide, orphaning 12 million children in Africa alone.

AIDS is triggered by a virus acquired through direct contact with infected body fluids. The virus causes an immune deficiency by attacking a type of white blood cell that helps to fight infections. Because this leads to various diseases, not a single illness, AIDS is referred to as a syndrome. The virus is called HIV (human immunodeficiency virus). Unprotected sex is HIV's main route into humans, where it targets the white blood cells known as CD4 cells. The virus replicates inside, eventually bursting out and flooding the body in the billions. The immune system then kicks in, and the body and the virus wage all-out war. During the height of battle, billions of CD4 cells can be destroyed in a single day. As the cell count drops, the immune system begins to fail and opportunistic infections such as tuberculosis take hold.

AIDS is thought to have originated in Africa, where monkeys and apes harbor a virus similar to HIV called SIV (simian immunodeficiency virus). Scientists believe the illness first jumped to humans from wild chimpanzees in central Africa. How the disease crossed the species barrier remains a puzzle. The leading theory is that it was picked up by people who hunted or ate infected chimpanzees. Researchers have dated the virus in humans to about 1930 using scientific estimates of the time it has taken for different strains of HIV to evolve.
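The "about 1930" estimate comes from molecular-clock dating: given a measured rate at which the virus accumulates mutations, the age of the common ancestor of sampled strains can be back-calculated from how far apart the strains have drifted. A minimal sketch of that back-calculation, with purely illustrative numbers (the rate and divergence below are hypothetical, not the published estimates):

```python
def tmrca_year(sample_year, divergence_subs_per_site, rate_subs_per_site_per_year):
    """Back-calculate the year of the most recent common ancestor (TMRCA).

    Two lineages descending from a common ancestor accumulate differences
    on both branches, so divergence is roughly 2 * rate * time.
    """
    time_since_ancestor = divergence_subs_per_site / (2 * rate_subs_per_site_per_year)
    return sample_year - time_since_ancestor

# Hypothetical inputs: strains sampled in 2000 that differ by 0.14
# substitutions per site, evolving at 0.001 subs/site/year.
print(round(tmrca_year(2000, 0.14, 0.001)))  # 1930
```

The same logic, with real sequence alignments and rates estimated jointly with the tree, is what phylogenetic dating software carries out.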
AIDS today is a global pandemic affecting every country. In 2006, an estimated 39.5 million people had HIV/AIDS. Almost three million of them died. The region most devastated by the disease is sub-Saharan Africa. It accounts for two-thirds of the world's HIV cases and nearly 75 percent of deaths due to AIDS. Infection rates vary, with southern African countries worst affected. In South Africa, an estimated 29 percent of pregnant women have HIV. Infection rates in Zimbabwe's adult population exceed 20 percent, while in Swaziland a third of adults are HIV-positive. Poverty, inadequate health care and education, and promiscuity have all been highlighted to explain Africa's AIDS nightmare.

Treatments But No Cure

Efforts to prevent the spread of AIDS focus on sex education and the use of condoms. Other measures, such as male circumcision, may also help to cut the risk of sexually transmitted infection. There is no cure for AIDS, but treatments are available that combat its onset. Antiviral drugs work by slowing the replication of HIV in the body. These drugs need to be used in combination because the virus readily mutates, creating new and often drug-resistant strains. Such treatments are expensive, however, and are still denied to millions of people in the developing world. In the future, the hope is for an AIDS vaccine that would prevent HIV infection. Researchers are currently working on more than 30 potential candidates.
Salivary glands are located beneath your tongue and over your mandible near your ear. Their purpose is to secrete saliva into your mouth to begin the digestive process and to protect your teeth from decay. The major salivary glands are the parotid glands, located over your main chewing muscle (the masseter); the sublingual glands, beneath your tongue; and the submandibular glands, on the floor of your mouth.

A salivary gland biopsy involves the removal of cells or small pieces of tissue from one or more salivary glands in order to be examined in the laboratory. If a mass is discovered in the salivary gland, your doctor may decide that a biopsy is necessary in order to determine whether you have a disease requiring treatment. Your doctor may recommend the biopsy in order to:
- examine abnormal lumps or swellings in the salivary glands — these may be caused by an obstruction or tumor
- determine if a tumor is present — this condition will require further tests to distinguish the type of tumor and how it will be removed and treated
- determine if the gland needs to be removed — this may be necessary if a duct in the salivary gland has become blocked or if a malignant tumor is present
- diagnose diseases such as Sjögren syndrome, a chronic autoimmune disorder in which the body attacks healthy tissue, including the salivary glands

There is little or no special preparation required before a salivary gland biopsy. Your doctor may ask that you refrain from eating or drinking anything for a few hours prior to the test. You may also be asked to stop taking blood-thinning medications such as aspirin or warfarin (Coumadin) a few days before your biopsy.

This test is usually administered in the doctor's office. It will take the form of a needle aspiration biopsy. This enables the doctor to remove a small number of cells while barely affecting your body. First, the skin over the selected salivary gland is sterilized with rubbing alcohol.
A local anesthetic is then injected to numb the area. Once the site is numb, a fine needle is inserted into the salivary gland and a small piece of tissue is carefully removed. The tissue is placed on microscopic slides, which are then sent to the laboratory to be examined. If your doctor is testing for Sjögren syndrome, a biopsy will be taken from several salivary glands, and stitches may be placed at the site of the biopsy.

If results are normal, the salivary gland tissue is healthy, with no diseased tissue or abnormal growths.

Swelling of the salivary glands: There are a number of conditions that can cause swelling of the salivary glands:
- salivary gland infections
- some forms of cancer
- salivary duct stones

Your doctor will be able to determine which condition is causing the swelling from the results of the biopsy, as well as the presence of other symptoms. Your doctor may also recommend an X-ray or CT scan, which will detect any obstruction or tumor growth.

Salivary gland tumors: Salivary gland tumors are rare. The most common form is a slow-growing, benign (noncancerous) tumor that causes the size of the gland to increase. Some tumors, however, may be malignant (cancerous). In this case, the tumor is usually a carcinoma.

Sjögren syndrome: This is an autoimmune disorder, the origin of which is unknown. It causes the body to attack healthy tissue. It is most common among women ages 40 to 50.

Needle biopsies carry a minimal risk of bleeding and infection at the point of insertion. You may experience mild pain for a short while after the biopsy, though this can be alleviated with over-the-counter pain medication.
If you experience any of the following symptoms, you should call your doctor:
- pain at the site of the biopsy that cannot be managed by medication
- swelling at the site of the biopsy
- drainage of fluid from the biopsy site
- bleeding that you cannot stop with mild pressure

You should seek immediate medical attention if you experience any of the following symptoms:

Salivary Gland Tumors

If you have been diagnosed with Sjögren syndrome, depending on your symptoms, your doctor will prescribe medication to help you manage the disorder.
For release: Tuesday, October 16, 1990

Scientists at the National Institute of Neurological Disorders and Stroke (NINDS) today presented evidence that multiple sclerosis (MS) is a progressive disease even in its earliest stages. Until recently, MS was thought to be active only during attacks. Now, using magnetic resonance imaging (MRI), NINDS scientists have shown that, even when patients' symptoms are stable, their MS can be active: clinically silent lesions occur in their brains frequently and continuously throughout the course of the disease. The findings were presented today at the 115th Annual Meeting of the American Neurological Association in Atlanta, Georgia.

Six patients with early MS were followed over 12 months in a study conducted in the NINDS Neuroimmunology Branch by Jonathan O. Harris, M.D. Dr. Harris used gadolinium to "enhance" monthly MRI scans of the patients' brains. Gadolinium, a chemical compound used to increase the contrast between tissues, doesn't ordinarily enhance MRI brain images because it is unable to cross the blood/brain barrier, a protective shield that controls the passage of substances from the blood into the central nervous system. But when the scientists scanned gadolinium-injected patients, white spots appeared on the scans, showing that the gadolinium had gained access to the brain.

"This phenomenon is probably due to a breakdown of the blood/brain barrier early in the development of MS lesions," said Branch Chief Dale E. McFarlin, M.D., "and it makes MRI a highly sensitive method for both detecting MS lesions and monitoring their evolution. Furthermore, because the spots come and go over time, the barrier probably closes at some point; only new and active lesions enhance."

According to Dr. McFarlin, these findings have significant implications for treating the disease more effectively.
"It looks like we've been treating the wrong patients at the wrong time, which may be why the treatments haven't been very successful," said Dr. McFarlin. "We hope to begin therapeutic trials of patients with very early MS in the near future. We also plan to examine the relationship between disease activity and the onset of symptoms through the use of MRI."

In the past, physicians have been reluctant to treat patients with early MS because the side effects of most MS drugs can be worse than the symptoms of the disease, which often takes a relatively benign early course. MRI may solve this problem by allowing scientists to monitor the effect of treatment: they will be able to estimate the degree of disease activity by viewing the number, size, and location of lesions. In the first study to compare an MS patient's brain tissue with MRI scans taken 2 and 4 weeks prior to death, the scientists found evidence correlating the intense disease activity shown on the scans with damage to the tissue.

The National Institute of Neurological Disorders and Stroke, one of the 13 National Institutes of Health in Bethesda, Maryland, is the primary supporter of brain and nervous system research in the United States.

Last Modified July 17, 2015
Osler-Weber-Rendu syndrome is a disorder of the blood vessels that can cause excessive bleeding. It is also known as hereditary hemorrhagic telangiectasia (HHT).

Osler-Weber-Rendu syndrome is inherited, which means it is passed down through families. Scientists have identified 4 genes involved in this condition. All of these genes appear to be important for blood vessels to develop properly. People with Osler-Weber-Rendu syndrome can develop abnormal blood vessels in several areas of the body. These vessels are called arteriovenous malformations (AVMs). If they are on the skin, they are called telangiectasias. The abnormal blood vessels can also develop in the brain, lungs, liver, intestines, or other areas.

Symptoms of this syndrome include:

Exams and Tests

An experienced health care provider can detect telangiectases during a physical examination. There is often a family history of this condition. Genetic testing is available to look for changes in genes associated with this syndrome.

Treatments may include:
- Surgery to treat bleeding in some areas
- Electrocautery (heating tissue with electricity) or laser surgery to treat frequent or heavy nosebleeds
- Endovascular embolization (injecting a substance through a thin tube) to treat abnormal blood vessels in the brain and other parts of the body

Some people respond to estrogen therapy, which can reduce bleeding episodes. Iron may also be given if there is a lot of blood loss leading to anemia. Avoid taking blood-thinning medicines. Some drugs that affect blood vessel development are being studied as possible future treatments. Some people may need to take antibiotics before having dental work or surgery. Ask your provider what precautions you should take.

HHT Foundation International -- www.hht.org

People with this syndrome can live a completely normal lifespan, depending on where in the body the AVMs are located.
These complications can occur:

When to Contact a Medical Professional

Call your provider if you or your child has frequent nosebleeds or other signs of this disease. Genetic counseling is recommended for couples who want to have children and who have a family history of hereditary hemorrhagic telangiectasia. If you have this condition, medical treatments can prevent certain types of strokes and heart failure.

Last reviewed on 4/20/2015 by Chad Haldeman-Englert, MD, FACMG, Wake Forest School of Medicine, Department of Pediatrics, Section on Medical Genetics, Winston-Salem, NC. Review provided by VeriMed Healthcare Network. Also reviewed by David Zieve, MD, MHA, Isla Ogilvie, PhD, and the A.D.A.M. Editorial team.
Introduction/Causative Agents: Coccidiosis is a worldwide contagious disease of sheep and goats, especially young lambs or kids. The disease is caused by one or more of approximately 12 different species of protozoa called Eimeria. These organisms parasitize and destroy cells lining the intestinal tract of the animal. Because each of the 12 or so coccidia species is completely independent from the others, with no cross immunity, an animal that is living with one type of coccidia may develop diarrhea when exposed to a different type. Good nutrition (including vitamin E-selenium supplementation in selenium-deficient areas) also helps the animal defend itself against coccidiosis.

Whether or not an animal gets sick with coccidiosis depends on several factors. One is the number of oocysts (eggs) swallowed at one time. Small exposures, frequently repeated, lead to immunity, while large exposures destroy all the intestinal cells at one time and may kill the lamb or kid. The age of the animal is also important, partly because the older animal has usually had time to develop some immunity, while the younger animal can be very vulnerable to disease. Immunity to coccidiosis in healthy adult animals is rarely complete, yet most of the intestinal cells in the adult are safe from invading coccidia. This means that a healthy-looking adult sheep or goat can continue to pass oocysts in the fecal pellets.

Disease Transmission and Life Cycle: An infected animal sheds thousands of microscopic coccidial oocysts (eggs) in its feces every day. When first passed, the oocysts are harmless to another animal. However, under favorable conditions of warmth and moisture, each oocyst matures (sporulates) in 1-3 days to form infective sporozoites. If a young lamb or kid swallows the sporulated oocyst, the sporozoites are released and rapidly penetrate the intestinal cells. From here on the life cycle gets very complicated.
The coccidia pass through several periods of multiplication during which large schizonts are formed. The intestinal cells of the animal are destroyed, and thousands of merozoites break out and invade other intestinal cells. Eventually, sexual stages are reached and new oocysts are produced. The entire life cycle of the protozoa, from oocyst to new oocyst, takes 2-3 weeks.

Clinical Signs: If a young lamb or kid is suddenly exposed to many sporulated oocysts, it may become severely ill 1-2 weeks later. It will be off feed, listless, and weak. It may show abdominal pain by crying or by getting up again as soon as it lies down. At first, the young animal might have a fever, but later the body temperature is normal or even below normal. Diarrhea begins pasty, then becomes watery, and the lamb or kid may dehydrate rapidly. In contrast to calves, lambs and kids only rarely have bloody diarrhea. Because the lactic acid produced by the digestion of milk helps to inhibit coccidia in the nursing young, signs often appear 2-3 weeks after the animals are weaned. Many of these signs are brought on by the stress of weaning or overcrowding.

Young lambs or kids may be killed quickly by a severe attack of coccidiosis. The stronger or less heavily infected animals will develop a chronic disease characterized by intermittent diarrhea and poor growth. Tails and hocks are often dirty. Because the intestines have been severely damaged, the animal with chronic coccidiosis cannot digest its feed properly. As a consequence, such an animal will be a pot-bellied poor-doer for months afterwards. Frequently, a stunted lamb or kid will be too small to breed its first winter. Even though coccidiosis is typically a disease of the young, growing lamb or kid, many adults are mildly infected and continuously shed oocysts which serve to infect the young lambs and kids. Occasionally, an adult sheep or goat shows temporary diarrhea when stressed or exposed to a new species of coccidia.
This is especially common after the ewe or doe has been boarded on another farm for breeding.

Diagnosis: Diagnosis of coccidiosis can be based on clinical signs and microscopic fecal exams. Coccidiosis is so common that it should be suspected whenever lambs or kids older than about 2 weeks of age are scouring. Sudden dietary changes or excessive food consumption can also cause diarrhea and make the animal more susceptible to coccidiosis. Diarrhea that begins with the consumption of too much milk, grain, or lush grass may drag on for days because of coccidiosis. Older lambs/kids and adults with diarrhea may have worms rather than coccidiosis, or they may have both problems together.

Coccidia oocysts can be identified if fecal material is mixed with a concentrated sugar solution. The oocysts float to the top along with larger worm eggs. They are collected and examined with a microscope (see page D228). Oocysts may be shed in the feces as early as 10 days after an animal is infected, but often the first attack of diarrhea occurs before oocysts are available to be identified. In these cases, the trained technician can do a direct fecal smear to look for smaller merozoites that do not float in the sugar solution. If a lamb or a kid dies of coccidiosis, necropsy examination will quickly give the diagnosis. The small intestine will have many irregular raised white areas, often about 1/8 to 1/4 inch in diameter. A smear taken from these white spots will show many coccidial forms if examined using a microscope.

Treatment: A variety of drugs may be given orally to treat the lamb or kid sick with coccidiosis. These include medications such as sulfamethazine, sulfadimethoxine, amprolium, and lasalocid. Usually, treatment is continued for about 5 days. Label and veterinary instructions should be followed because of the dangers associated with overdosing.
If the diagnosis is not certain, and the young animal may have bacterial enteritis or pneumonia rather than coccidiosis, sulfamethazine or sulfadimethoxine is usually given instead of the others. All of these drugs are coccidiostats, which means that they slow down rather than kill the coccidia. Thus, if a lamb or kid is very heavily infected when treatment is begun, the medication may not help that much. These drugs will, however, greatly reduce contamination of the environment and thereby give other young animals time to develop immunity. Medicating older animals or adults will temporarily reduce the passage of oocysts, but will not improve growth rates. Within 2 or 3 weeks after medication is stopped, coccidia levels will return to pretreatment values. Thus, except for protection of younger lambs and kids, it is not justifiable to treat older, apparently healthy animals that do not have diarrhea. It is far better to separate the young animals from these older carriers. It is neither possible nor desirable to completely eradicate coccidia from the adult sheep and goats.

Medication of apparently healthy lambs and kids is necessary on large farms with previous problems with coccidiosis. The aim is to prevent damage to the intestines rather than waiting for diarrhea to occur. For instance, it may help to treat the young animals with coccidiostats on a daily basis for a week or more before stressing them by weaning or by moving them onto pasture. In some herds/flocks, a drug such as amprolium may have to be given daily beginning at 2 weeks of age and continuing until the young animals are several months old. Amprolium levels of 25-50 mg/kg daily should be used. This is approximately 10-20 mg per pound and is 2.5-5 times the treatment level recommended for calves. It can be given to each lamb or kid individually or can be mixed with the food or water.
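The dosing arithmetic above can be sketched as a small helper (a hypothetical function, assuming only the 10-20 mg per pound range quoted in the text; it is not a substitute for label directions or veterinary advice):

```python
def amprolium_daily_dose_mg(total_weight_lb, mg_per_lb=10):
    """Daily amprolium dose for a pen of lambs/kids.

    The text gives 25-50 mg/kg/day, i.e. roughly 10-20 mg per pound,
    so mg_per_lb should be chosen from the 10-20 range.
    """
    if not 10 <= mg_per_lb <= 20:
        raise ValueError("mg_per_lb outside the quoted 10-20 mg/lb range")
    return total_weight_lb * mg_per_lb

# Low end of the range for 50 lb of animals -> 500 mg mixed into one
# day's water, milk, or feed, matching the worked example in the text.
print(amprolium_daily_dose_mg(50))  # 500
```

Because the drug is dosed per pound of total body weight in the pen, larger animals that eat and drink more automatically receive a proportionally larger share.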
For example, if there are 50 pounds of small lambs or kids in a pen, 500 mg of amprolium is mixed with the water, milk, or feed that they will consume in one day. The larger animals, by eating more, get more of the drug than do the smaller ones. Rumensin (monensin) at 15-30 grams per ton of feed in the starter grain has been found to eliminate the coccidiosis problems on some farms. This drug is very toxic to horses, so the medicated feed should not be left where a horse can eat it. Another potentially useful coccidiostat is lasalocid. This drug has protected lambs when used at 0.5-1.0 mg/kg/day. The poultry industry has found that the coccidia often become resistant to a drug after 1 or 2 years. Sheep and goat owners may also need to change drugs if the one in use ceases to be effective in controlling coccidiosis. Other coccidiostats may be mixed with the feed, but some of them have not yet been adequately tested on sheep and goats. No matter what medication is chosen, it is important to consult a local veterinarian before attempting to use any of these products. Many of the above products are used in extra-label amounts and are not approved for use in sheep and goats in the United States outside of a proper veterinarian/client/patient relationship (VCPR).

Prevention: Prevention of coccidiosis is very important in larger flocks and herds if young lambs and kids are to thrive. Once diarrhea has developed, most of the damage to the intestine that leads to stunted growth has already occurred. Sick, young animals are treated to save their lives and to limit contamination of the pens, but at this stage, the owner has already lost control of this contagious disease. Several key facts are important to consider when developing a prevention program. The first is that adult animals are the original source of infection for young lambs and kids. The adults continually shed oocysts that contaminate the environment and younger animals.
Because of this, all old bedding and feces should be removed from the lambing and kidding pens before the new lambs and kids are born. Sporulated oocysts are commonly present on the skin of the udder, and the young, suckling animal may become infected at the same time it takes its first drink of colostrum. To prevent this from happening, the female's udder should be washed and dried before the young nurse, or the lamb or kid should be removed from its dam at once and bottle-fed the colostrum. If very few ewes and does are present on a farm and the pens are dry and spacious, coccidiosis is not apt to be a problem. The young may be safely left with their mothers. In larger flocks or herds, it is best to raise offspring completely separate from the adults until they are ready to breed.

Fecal contamination of feed and water must be prevented. This means that feeders and waterers should be outside the pen whenever possible and arranged so that fecal pellets cannot fall into them. Grain should be put in creep feeders rather than in open troughs where lambs and kids love to play and even sleep. Hay racks also should be covered to keep the young animals out. Because oocysts have to sporulate to become infective, exposure can be reduced by cleaning the pens daily. Slotted floors are helpful in this process. Because moisture is necessary for sporulation, concentrate on keeping the pens very dry and fixing leaking waterers at once. Small, grassy "exercise lots" are also very dangerous and should be used with caution. It is very important to avoid overcrowding; spreading the lambs or kids out decreases the number of oocysts on any given square inch of pen floor or pasture. If many lambs or kids are present on the same farm, they should be grouped by age. Putting a 2-week-old kid into a pen with kids 2 months old, where coccidial numbers and immunity have been building up for some time, is to invite disaster for the newcomer.
Oocysts are killed by very cold temperatures (far below zero) or by hot, dry conditions above 104° F. This means that at the end of the kidding/lambing season, pens and feeders could be moved out into the hot sunshine for natural sterilization. Ordinary disinfectants do not destroy oocysts and should not be relied on for control of coccidia.
Using a discovery platform whose components range from yeast cells to human stem cells, Whitehead Institute scientists have identified a novel Parkinson's disease drug target and a compound capable of repairing neurons derived from Parkinson's patients. The platform -- whose effectiveness is described in dual papers published online this week in the journal Science -- could accelerate the discovery of drug candidates that address the underlying pathology of Parkinson's and other neurodegenerative diseases. Today, no such drugs exist. Parkinson's disease (PD) and such neurodegenerative diseases as Huntington's and Alzheimer's are characterized by protein misfolding, resulting in toxic accumulations of proteins in the cells of the central nervous system. Cellular buildup of the protein alpha-synuclein, for example, has long been associated with PD, making this protein a seemingly appropriate target for therapeutic intervention. In the search for compounds that might alter a protein's behavior or function -- such as that of alpha-synuclein -- drug companies often rely on so-called target-based screens that test the effect large numbers of compounds have on the protein in question in rapid, automated fashion. Though efficient, such an approach is limited by the fact that it essentially occurs in a test tube. Seemingly promising compounds emerging from a target-based screen may act quite differently when they're moved from the in vitro environment into a living setting. To overcome this limitation, the lab of Whitehead Member Susan Lindquist has turned to phenotypic screens in which candidate compounds are studied within a living system. In Lindquist's lab, yeast cells -- which share the core cell biology of human cells -- serve as living test tubes in which to study the problem of protein misfolding and to identify possible solutions. 
Yeast cells genetically modified to overproduce alpha-synuclein serve as robust models for the toxicity of this protein that underlies PD. "Phenotypic screens are probably underutilized for identifying drug targets and potential compounds," says Daniel Tardiff, a scientist in the Lindquist lab and lead author of one of the Science papers. "Here, we let the yeast tell us what is a good target. We let a living cell tell us what's critical for reversing alpha-synuclein toxicity." In a screen of nearly 200,000 compounds, Tardiff and collaborators identified one chemical entity that not only reversed alpha-synuclein toxicity in yeast cells, but also partially rescued neurons in the model nematode C. elegans and in rat neurons. Significantly, cellular pathologies including impaired cellular trafficking and an increase in oxidative stress, were reduced by treatment with the identified compound. Enabled by the chemistry provided by Nate Jui in the Buchwald lab at MIT, Tardiff found that the compound was working by restoring functions mediated by a cellular protein critical for trafficking that was previously thought to be "undruggable." But would these findings apply in human cells? To answer that question, husband-and-wife team Chee-Yeun Chung and Vikram Khurana led the second study published in Science to examine neurons derived from induced pluripotent stem (iPS) cells generated from Parkinson's patients. The cells and differentiated neurons (of a type damaged by the disease) were derived from patients that carried alpha-synuclein mutations and develop aggressive forms of the disease. To ensure that any pathology developed in the cultured neurons could be attributed solely to the genetic defect, the researchers also derived control neurons from iPS cells in which the mutation had been corrected. 
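At its core, the phenotypic screen described above amounts to ranking compounds by how well they restore growth of the alpha-synuclein-expressing yeast relative to healthy controls. A minimal sketch of that hit-selection logic (all compound names, scores, and thresholds here are hypothetical, not the actual screening pipeline):

```python
# Hypothetical readouts: growth of alpha-synuclein yeast treated with each
# compound, normalized so 0.0 = untreated toxic strain, 1.0 = healthy control.
readouts = {
    "cmpd_A": 0.05,
    "cmpd_B": 0.62,
    "cmpd_C": 0.11,
    "cmpd_D": 0.88,
}

def select_hits(readouts, min_rescue=0.5):
    """Keep compounds whose normalized growth rescue meets a threshold,
    ranked best-first. A real screen would score ~200,000 compounds."""
    hits = [(name, score) for name, score in readouts.items() if score >= min_rescue]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

print(select_hits(readouts))  # [('cmpd_D', 0.88), ('cmpd_B', 0.62)]
```

Hits surviving such a filter would then be validated in follow-up models (here, C. elegans, rat neurons, and patient-derived neurons) before any mechanistic work.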
Chung and Khurana used the wealth of data from the yeast alpha-synuclein toxicity model to clue them in on key cellular processes that became perturbed as patient neurons aged in the dish. Strikingly, exposure to the compound identified via yeast screens in Tardiff's study reversed the damage in these neurons. "It was remarkable that the compound rescued yeast cells and patient neurons in similar ways and through the same target -- a target we would not have identified without yeast genetics to guide us," says Khurana, a postdoctoral scientist in the Lindquist lab and a neurologist at Massachusetts General Hospital who recruited patients for participation in this research. Khurana believes that the abnormalities discovered occur in the early stages of disease. If so, successful manipulation of the targets identified here might help slow or even prevent disease progression. For the researchers involved, these findings are a bit of a surprise. Because neurodegenerative disorders like PD are largely diseases of aging, modeling them in a culture dish using neurons grown from iPS cells has been thought to be exceedingly difficult, if not impossible. "Many, ourselves included, were skeptical that we could find any important pathologies for a neurodegenerative disorder by reprogramming patient cells," says Chung, a Senior Research Scientist in the Lindquist lab. "Critically, we also validated these pathologies in post-mortem brains, so we're quite confident these are relevant for the disease." Next steps for these scientists include chemically optimizing the compound identified and testing it in animal models. Moreover, they are convinced that this yeast-human stem cell discovery platform could be applied to other neurodegenerative diseases for which yeast models have been developed.
"Using yeast genetics to identify a compound and its mechanism of action against the fundamental pathology of a disease illustrates the power of the system we've built," says Lindquist, who is also professor of biology at MIT and a Howard Hughes Medical Institute investigator. "It's critical that we continue to leverage this power because as we reduce the rate at which people are dying from cancer and heart disease, the burden of these dreaded neurodegenerative diseases is going to rise. It's inevitable." This work was supported by the National Institutes of Health (grants 5R01GM069530, GM58160, K01 AG038546, 5 R01CA084198), Howard Hughes Medical Institute, the JPB Foundation, the Eleanor Schwartz Charitable Foundation, the Bachmann-Strauss Dystonia & Parkinson Foundation, the American Brain Foundation, and the Parkinson's Disease Foundation. - Daniel F. Tardiff, Nathan T. Jui, Vikram Khurana, Mitali A. Tambe, Michelle L. Thompson, Chee Yeun Chung, Hari B. Kamadurai, Hyoung Tae Kim, Alex K. Lancaster, Kim A. Caldwell, Guy A. Caldwell, Jean-Christophe Rochet, Stephen L. Buchwald, Susan Lindquist. Yeast Reveal a "Druggable" Rsp5/Nedd4 Network that Ameliorates α-Synuclein Toxicity in Neurons. Science, 2013 DOI: 10.1126/science.1245321
MADISON - A new technology that maps an organism's entire genome from single DNA molecules could ratchet up the race to decipher complex genomes, from food crops to human beings. Researchers report in the Friday, Sept. 3, issue of the journal Science their completion of the first whole genome assembled by a process called shotgun optical mapping. Scientists developed a physical map of Deinococcus radiodurans, a bacterium with the unusual ability to resist high levels of radiation. These new types of maps "may become an indispensable resource for large-scale genome sequencing projects," says David Schwartz, a professor of genetics and chemistry at the University of Wisconsin-Madison. Schwartz joined UW-Madison this summer from New York University in Manhattan, where he spent the past decade as part of a team of scientists developing the system. Schwartz says his laboratory is currently using optical mapping technology to map at high resolution the human genome, and predicts his process will reduce the amount of time required to achieve that monumental scientific goal. Optical mapping can be done in a fraction of the time it takes conventional DNA mapping or sequencing, Schwartz says. The usual approach is to decode the chemical base pairs of individual genes and gradually put them all together, one by one. Optical mapping provides an automated process to create a single, complete snapshot of a genome with very small amounts of material. Its advantages include the ability to analyze differences between individual genomes. By comparing maps of hundreds of individual human genomes, for example, scientists could pinpoint the origin of genetic diseases, understand the complexities of trait inheritance, or examine the dynamic process behind DNA repair. "The goal is to develop the ultimate database of genetic information, and a source of analysis that will help us make sense out of the whole thing," Schwartz says.
"What's nice about optical mapping is you can look at the whole genome, not just little snippets." One can think of optical mapping as an entire map of the United States, whereas conventional sequencing would be thousands of detailed maps of every city in the nation, he says. Optical mapping data works in concert with high-resolution DNA sequence data, linking both together in a complete and seamless description of a genome. Optical mapping begins by preparing DNA molecules on a glass surface. Normally rolled like a ball of yarn, Schwartz uses a flow between two surfaces to straighten the DNA. He then applies an enzyme to the prepared molecules that literally clips the molecular strands into tiny segments, producing landmarks that reveal important features of genome organization. Next, each segment of a DNA molecule can be measured and defined by an automated scanning technology that uses fluorescence microscopy. The process is repeated roughly 100 times in order to weed out errors and get overlapping results. Those measurements provide the raw material for the optical map. The laboratory already has completed maps of two other organisms and has another project to map the rice genome, an important milestone since rice is the most relied-upon food crop in the world. Schwartz says Jie-Yi Lin, his former NYU graduate student, was instrumental in the success of this project. Bud Mishra and Thomas Anantharaman, professors of computer science and mathematics at NYU, developed unique statistical and computational programs that helped overcome errors in the chemical outputs. Their contributions helped automate the process and make it more universally applicable to other genomes, Schwartz says. Owen White and Craig Venter of the Institute for Genomic Research recognized the value of the optical map and leveraged this data for their own sequencing efforts. 
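The error-reduction step described above — scanning each molecule roughly 100 times and combining the overlapping measurements — can be illustrated with a small sketch. This is only a toy consensus-by-median illustration, not the actual statistical algorithms that Mishra and Anantharaman developed; the fragment sizes are invented for the example.

```python
from statistics import median

def consensus_map(scans):
    """Combine repeated noisy scans of one molecule into a consensus map.

    Each scan is an ordered list of fragment lengths (kilobases) read
    from the same DNA molecule; taking the per-fragment median damps
    random measurement error, in the spirit of the roughly 100 repeated
    passes described in the article.
    """
    n_fragments = len(scans[0])
    if any(len(s) != n_fragments for s in scans):
        raise ValueError("every scan must report the same number of fragments")
    return [median(scan[i] for scan in scans) for i in range(n_fragments)]

# Three noisy scans of a molecule whose true fragments are 12.0, 30.0, 7.5 kb
scans = [
    [12.1, 29.8, 7.6],
    [11.9, 30.3, 7.4],
    [12.0, 30.0, 7.5],
]
print(consensus_map(scans))  # -> [12.0, 30.0, 7.5]
```

With more repeated scans, the median becomes increasingly robust to occasional badly mismeasured fragments, which is why repetition rather than a single high-precision pass is used.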
Ken Minton and Michael Daly at Uniformed Services University of the Health Sciences used optical mapping data in their studies of how DNA repairs itself after damage. The D. radiodurans bacterium in Schwartz's study has long interested scientists. It was originally discovered in the 1950s thriving in canned meat that had been irradiated, supposedly to kill bacteria. Because of its high resistance to radiation, the Department of Energy is interested in exploring its potential for naturally removing toxins from the environment. Federal sponsors include the National Institutes of Health, the National Science Foundation and the U.S. Department of Energy.
Psychosis occurs when a person loses contact with reality. The person may: - Have false beliefs about what is taking place, or who one is (delusions). - See or hear things that are not there (hallucinations). Medical problems that can cause psychosis include: - Alcohol and certain illegal drugs, both during use and during withdrawal - Brain diseases, such as Parkinson disease, Huntington disease - Brain tumors or cysts - Dementia (including Alzheimer disease) - HIV and other infections that affect the brain - Some prescription drugs, such as steroids and stimulants - Some types of epilepsy Psychosis may also be found in: - Most people with schizophrenia - Some people with bipolar disorder (manic-depressive) or severe depression - Some personality disorders A person with psychosis may have any of the following: - Disorganized thought and speech - False beliefs that are not based in reality (delusions), especially unfounded fear or suspicion - Hearing, seeing, or feeling things that are not there (hallucinations) - Thoughts that "jump" between unrelated topics (disordered thinking) Exams and Tests Psychiatric evaluation and testing are used to diagnose the cause of the psychosis. Laboratory testing and brain scans may not be needed, but sometimes can help pinpoint the diagnosis. Tests may include: - Blood tests for abnormal electrolyte and hormone levels - Blood tests for syphilis and other infections - Drug screens - MRI of the brain Treatment depends on the cause of the psychosis. Care in a hospital is often needed to ensure the patient's safety. Antipsychotic drugs, which reduce hallucinations and delusions and improve thinking and behavior, are helpful. How well a person does depends on the cause of the psychosis. If the cause can be corrected, the outlook is often good. In this case, treatment with antipsychotic medication may be brief. 
Some chronic conditions, such as schizophrenia, may need life-long treatment with antipsychotic medications to control symptoms. Psychosis can prevent people from functioning normally and caring for themselves. Left untreated, people can sometimes harm themselves or others. When to Contact a Medical Professional Call your health care provider or mental health professional if you or a member of your family is losing contact with reality. If there is any concern about safety, take the person to the emergency room to be seen by a doctor. Prevention depends on the cause. For example, avoiding alcohol abuse prevents alcohol-induced psychosis. American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 5th ed. Arlington, Va: American Psychiatric Publishing. 2013. Freudenreich O, Weiss AP, Goff DC. Psychosis and schizophrenia. In: Stern TA, Rosenbaum JF, Fava M, et al., eds. Massachusetts General Hospital Comprehensive Clinical Psychiatry. 1st ed. Philadelphia, Pa: Elsevier Mosby; 2008:chap 28. Last reviewed 2/24/2014 by Fred K. Berger, MD, Addiction and Forensic Psychiatrist, Scripps Memorial Hospital, La Jolla, California. Also reviewed by David Zieve, MD, MHA, Isla Ogilvie, PhD, and the A.D.A.M. Editorial team. - The information provided herein should not be used during any medical emergency or for the diagnosis or treatment of any medical condition. - A licensed medical professional should be consulted for diagnosis and treatment of any and all medical conditions. - Call 911 for all medical emergencies. - Links to other sites are provided for information only -- they do not constitute endorsements of those other sites.
What condition is present? Who is at risk? This is glaucoma. Older persons, black race, history of diabetes mellitus, and history of corticosteroid use increase the risk for glaucoma. How is this condition detected? Methods include tonometry (measuring intraocular pressure), funduscopy (viewing the retina; also called ophthalmoscopy), and perimetry (measuring visual fields). Tonometry is often performed with a device that is "non-contact" with the eye, using a puff of air to determine intraocular pressure. A more sensitive and exact measurement can be taken with a Goldmann applanation tonometer mounted on a slit lamp. Gonioscopy, utilizing a goniolens, can be used to view the angle of the anterior chamber to determine if "open angle" or "closed angle" glaucoma is present. However, not all persons with increased intraocular pressure will develop glaucoma, and not all persons with glaucoma have a measurable increase in intraocular pressure. Thus, it is recommended to screen patients by funduscopy and ophthalmoscopy (slit-lamp exam) as well as for pressure. Testing of visual fields (perimetry) may be indicated. How does this condition occur? There are several ways that intraocular pressure can be increased. In most cases the process is generally slow and painless, so the affected person is not often aware of the condition until substantial visual problems have occurred. Some cases present more acutely. The drainage of the aqueous humor in the anterior and posterior chambers of the eye occurs at the angle of the ciliary body with the cornea, where a trabecular meshwork of veins leads to the canal of Schlemm. Eyes that are small and hyperopic (farsighted) have a narrower angle that reduces resorption of fluid, leading to the so-called "primary angle-closure glaucoma." There is a propensity for an acute exacerbation marked by intense pain and redness of the eye, and the perception of halos around lights, as if looking through a steamy shower door.
This is an emergency that must be immediately treated. For closed angle glaucoma, iridotomy introduces a small hole through the iris to allow drainage of aqueous. However, most cases of glaucoma (about 1 in 100 persons under age 65 and up to 1 in 25 persons over age 75) are of the "primary open-angle glaucoma" type in which there is no perceptible anatomic abnormality, but decreased resorption of aqueous occurs for an unknown reason. Persons with open angle glaucoma tend to lose peripheral vision first and have a sensation that everything appears darker. In the U.S. persons of African-American ancestry develop this disorder more often than Caucasians. Much rarer are congenital glaucoma and glaucoma due to increased production of aqueous. Signs of congenital glaucoma in an infant may include increased rubbing of eyes and a hazy corneal appearance. If a study were performed comparing drug treatments for this condition, how could one avoid type II error and what would determine the "power" of the study? In comparing two sets of values from population groups, one can make the assumption that they will be the same. This is called the "null hypothesis". For most statistical studies the goal is to show that the null hypothesis is unlikely, so a difference which is greater than the limits set, and which we therefore regard as "significant", will make the null hypothesis unlikely. To reject the null hypothesis when it is true is to make what is known as a type I error, or "alpha" error (a false positive). The level at which a result is declared significant is known as the type I error rate, often denoted by alpha. This is defined in relation to a threshold "P" value. The typical P value for alpha is usually <0.05, or less than 5% of the time when performing the experiment one would expect results for the measured groups to be similar.
If the null hypothesis is not rejected when there is a real difference between the groups, then this is known as a type II error, or "beta" error (a false negative). The "power" of a study is its ability to find a significant difference in the populations studied. All studies have low power to find small differences and high power to find large differences among persons in a study. Thus, if the differences in intraocular pressure measured with and without therapy are not great, then the study will not have as much power as when the measured differences are great. This can be offset by increasing the number of subjects in the study: the more subjects, the smaller the measured differences need to be to detect statistical significance. For a patient with significant visual impairment who continues to operate a motor vehicle, what should you do? The best thing to do is tell the patient to be tested by the department of motor vehicles (DMV) and that this is important for their safety as well as that of others. State that their safety is part of your responsibility, so if they don't contact the DMV, you will. Typical visual requirements for operating a motor vehicle are: 20/70 vision in either eye, or both eyes together, with or without corrective lenses. If one eye is 20/200 or worse, the other eye must be 20/40 or better, with or without a corrective lens. 130 degrees is the minimum acceptable field of vision (normal is 180 degrees). The types of problems that can impair ability to operate a motor vehicle include: seizure disorders or blackouts, dizzy spells, severe cardiovascular problems, memory or judgment impairment, drug or alcohol problems, progressive neurological disorders, severe psychiatric disorders, visual impairments, sleep disorders, poorly controlled diabetes mellitus, and severe head injuries.
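The earlier point about power and sample size — small true differences need many subjects, large ones need few — can be made concrete with a simple sketch. This approximates power for a two-sided, two-sample z-test; the 2 mmHg difference and 4 mmHg standard deviation are hypothetical numbers chosen for illustration.

```python
import math
from statistics import NormalDist

def power_two_sample(delta, sd, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test.

    delta: true difference in group means (e.g., mmHg of intraocular pressure)
    sd:    common standard deviation of the measurement
    n_per_group: number of subjects in each treatment arm
    """
    nd = NormalDist()
    se = sd * math.sqrt(2.0 / n_per_group)   # standard error of the difference
    z_crit = nd.inv_cdf(1.0 - alpha / 2.0)   # about 1.96 for alpha = 0.05
    # Probability the observed difference lands beyond the critical value
    return 1.0 - nd.cdf(z_crit - delta / se)

# Hypothetical: a true 2 mmHg pressure difference with sd of 4 mmHg
for n in (20, 60, 120):
    print(n, round(power_two_sample(2.0, 4.0, n), 2))
```

Running this shows power climbing steadily as subjects are added, mirroring the text: the same true difference that a small study would likely miss (a type II error) becomes nearly certain to be detected in a larger one.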
July 23, 2003 MEDIA CONTACT: Trent Stockton Common Treatment for Depression is Safe and Effective for Alzheimer's Patients Researchers at Johns Hopkins have shown that a drug, Zoloft, commonly used for depression, also improves quality of life and alleviates disruption in daily activities for the one-quarter of Alzheimer's patients who also suffer from major depression. However, the drug did not improve patients' cognitive abilities, such as thinking, remembering and learning, which are often impaired in Alzheimer's disease patients. "Depression in Alzheimer's patients, and even Alzheimer's disease itself, often goes undiagnosed, in part because doctors feel they have little to offer in the form of treatment. This study shows that a simple treatment for depression improves the quality of life and seems to slow the functional decline of Alzheimer's disease," says Constantine Lyketsos, M.D., professor of psychiatry and behavioral sciences at Johns Hopkins and lead author of the report appearing in the July 2003 issue of the Archives of General Psychiatry. The drug, called sertraline hydrochloride or Zoloft, is a common treatment for psychiatric diseases such as major depression, obsessive compulsive disorder and panic disorder. Major depression affects 25 percent of patients with Alzheimer's disease, and when combined with the cognitive impairment of Alzheimer's, is extremely disabling and can lead to death or suicide, says Lyketsos. "This simple and safe treatment for depression has tremendous potential for improving the quality of life for both Alzheimer's patients and their caregivers," said Lyketsos. The study included adult participants with both Alzheimer's disease and major depression. All patients and their caregivers were educated about the illnesses and received encouragement and emotional support throughout the study. 
All patients were rated on a standardized depression scale and given a single placebo pill daily for one week in order to identify those with transient or temporary depression. Those patients with a drop of 30 percent or more in their depression scores were excluded from the study. The remaining 44 patients were assigned randomly to receive placebo or sertraline once a day for 12 weeks. Patients were seen in the clinic every three weeks for 12 weeks following the study. The results show that 84 percent of those receiving the drug were positively influenced, versus 35 percent in the placebo group. The researchers found that treating depression was accompanied by lessened behavioral disturbance and improved activities of daily living as well. Based on the results, Lyketsos and his team are leading a multicenter clinical trial to investigate the long-term benefits of sertraline for patients with Alzheimer's and to determine how well the treatment eases the burden of caregivers. According to the National Institute on Aging, up to 4 million Americans suffer from Alzheimer's disease. The disease usually begins after age 60, and risk goes up with age. While younger people also may get AD, it is much less common. About 3 percent of men and women ages 65 to 74 have AD, and nearly half of those age 85 and older may have the disease. The study was funded by the National Institute of Mental Health. Other authors are Lourdes DelCampo, Martin Steinberg, Quincy Miles, Cynthia Steele, Cynthia Munro, Alva Baker, Jeannie-Marie Sheppard, Constantine Frangakis, Jason Brandt and Peter Rabins, all from Johns Hopkins. On the Web: Dr. Lyketsos has been or is a consultant and advisor for the following companies: Astra-Zeneca Pharmaceuticals LP, E.I. duPont de Nemours and Company, Eli Lilly and Company, Janssen Pharmaceutica, NeuroLogic Inc, and Pfizer Inc. He has been or is a speaker for the following: Abbott Laboratories, Bayer Corporation, Bristol-Meyers-Squibb, E.I. 
duPont de Nemours and Company, Eisai Ltd, Forest Laboratories, Janssen Pharmaceutica, Novartis Pharmaceuticals USA, Parke-Davis (Warner-Lambert), and Pfizer Inc. He has received or receives research support from the following: Abbott Laboratories, Bayer Corporation, Bristol-Meyers-Squibb, Eisai Ltd, Eli Lilly and Company, Janssen Pharmaceutica, NeuroLogic Inc, Parke-Davis Company, and Pfizer Inc.
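The headline comparison above — 84 percent response on sertraline versus 35 percent on placebo among 44 randomized patients — can be checked informally with a two-proportion z statistic. The even 22/22 split (18 versus 8 responders) is an assumption made purely for illustration; the article does not state the arm sizes.

```python
import math

def two_proportion_z(successes1, n1, successes2, n2):
    """z statistic for comparing two response proportions, using the
    pooled standard error under the null hypothesis of equal rates."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 18/22 responders on drug vs 8/22 on placebo (assumed split, see above)
z = two_proportion_z(18, 22, 8, 22)
print(round(z, 2))  # well above 1.96, the two-sided 5% cutoff
```

Even with such small groups, a gap this wide comfortably clears conventional significance thresholds, which is consistent with the study's positive conclusion.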
You are reading a NORD Rare Disease Report Abstract. NORD's full collection of reports on over 1200 rare diseases is available to subscribers. We are now also offering two full rare disease reports per day to visitors on our Web site. Synonyms of Diastrophic Dysplasia - Diastrophic Dwarfism - Diastrophic Nanism Syndrome - No subdivisions found. Diastrophic dysplasia, which is also known as diastrophic dwarfism, is a rare disorder that is present at birth (congenital). The range and severity of associated symptoms and physical findings may vary greatly from case to case. However, the disorder is often characterized by short stature and unusually short arms and legs (short-limbed dwarfism); abnormal development of bones (skeletal dysplasia) and joints (joint dysplasia) in many areas of the body; progressive abnormal curvature of the spine (scoliosis and/or kyphosis); abnormal tissue changes of the outer, visible portions of the ears (pinnae); and/or, in some cases, malformations of the head and facial (craniofacial) area. In most infants with diastrophic dysplasia, the first bone within the body of each hand (first metacarpals) may be unusually small and "oval shaped," causing the thumbs to deviate away (abduction) from the body ("hitchhiker thumbs"). Other fingers may also be abnormally short (brachydactyly) and joints between certain bones of the fingers (proximal interphalangeal joints) may become fused (symphalangism), causing limited flexion and restricted movement of the finger joints. Affected infants also typically have severe foot deformities (talipes or "clubfeet") due to abnormal deviation and fusion of certain bones within the body of each foot (metatarsals). In addition, many children with the disorder experience limited extension, partial (subluxation) or complete dislocation, and/or permanent flexion and immobilization (contractures) of certain joints.
In most infants with diastrophic dysplasia, there is also incomplete closure of bones of the spinal column (spina bifida occulta) within the neck area and the upper portion of the back (lower cervical and upper thoracic vertebrae). In addition, during the first year of life, some affected children may begin to develop progressive abnormal sideways curvature of the spine (scoliosis). During adolescence, individuals with the disorder may also develop abnormal front-to-back curvature of the spine (kyphosis), particularly affecting vertebrae within the neck area (cervical vertebrae). In severe cases, progressive kyphosis may lead to difficulties breathing (respiratory distress). Some individuals may also be prone to experiencing partial dislocation (subluxation) of joints between the central areas (bodies) of cervical vertebrae, potentially resulting in spinal cord injury. Such injury may cause muscle weakness (paresis) or paralysis and/or life-threatening complications. In addition, most newborns with diastrophic dysplasia have or develop abnormal fluid-filled sacs (cysts) within the outer, visible portions of the ears (pinnae). Within the first weeks of life, the pinnae become swollen and inflamed and unusually firm, thick, and abnormal in shape. Over time, the abnormal areas of tissue (lesions) may accumulate deposits of calcium salts (calcification) and eventually develop into bone (ossification). Some affected infants may also have abnormalities of the head and facial (craniofacial) area including incomplete closure of the roof of the mouth (cleft palate) and/or abnormal smallness of the jaws (micrognathia). In addition, in some affected infants, abnormalities of supportive connective tissue (cartilage) within the windpipe (trachea), voice box (larynx), and certain air passages in the lungs (bronchi) may result in collapse of these airways, causing life-threatening complications such as respiratory obstruction and difficulties breathing. 
In some individuals with the disorder, additional symptoms and physical findings may also be present. Diastrophic dysplasia is inherited as an autosomal recessive trait. Diastrophic Dysplasia Resources NORD Member Organizations: (To become a member of NORD, an organization must meet established criteria and be approved by the NORD Board of Directors. If you're interested in becoming a member, please contact Susan Olivo, Membership Manager, at [email protected].) The information in NORD’s Rare Disease Database is for educational purposes only. It should never be used for diagnostic or treatment purposes. If you have questions regarding a medical condition, always seek the advice of your physician or other qualified health professional. NORD’s reports provide a brief overview of rare diseases. For more specific information, we encourage you to contact your personal physician or the agencies listed as “Resources” on this report. The National Organization for Rare Disorders (NORD) web site, its databases, and the contents thereof are copyrighted by NORD. No part of the NORD web site, databases, or the contents may be copied in any way, including but not limited to the following: electronically downloading, storing in a retrieval system, or redistributing for any commercial purposes without the express written permission of NORD. Permission is hereby granted to print one hard copy of the information on an individual disease for your personal use, provided that such content is in no way modified, and the credit for the source (NORD) and NORD’s copyright notice are included on the printed copy. Any other electronic reproduction or other printed versions is strictly prohibited. Copyright 1987, 1989, 1997, 1998, 1999, 2001, 2003, 2007 NORD's Rare Disease Information Database is copyrighted and may not be published without the written consent of NORD.
Cholesterol and Triglycerides Tests Cholesterol and triglyceride tests are blood tests that measure the total amount of fatty substances (cholesterol and triglycerides) in the blood. Results are usually available within 24 hours. Your cholesterol levels can help your doctor find out your risk for having a heart attack or stroke. But it's not just about your cholesterol. Your doctor uses your cholesterol levels plus other things to calculate your risk. These include: You and your doctor can talk about whether you need to lower your risk and what treatment is best for you. If your LDL cholesterol is 190 milligrams per deciliter (mg/dL) or more, it might mean that you have a familial lipid disorder. For children and teens, test results are slightly different than for adults. For more information, see Cholesterol in Children and Teens. What Affects the Test Many conditions can affect cholesterol and triglyceride levels. Your doctor will talk with you about any abnormal results that may be related to your other health problems. Reasons you may not be able to have the test or why the results may not be helpful include: - Medicines, such as diuretics, corticosteroids, male sex hormones (androgens), tranquilizers, estrogen, birth control pills, antibiotics, and niacin (vitamin B3). - Physical stress, such as infection, heart attack, surgery. - Eating within the 9 to 12 hours before the test (the test is usually done after fasting). - Other conditions, such as hypothyroidism, diabetes, or kidney or liver disease. - Alcohol or drug abuse or withdrawal. - Liver disease (such as cirrhosis or hepatitis), malnutrition, or hyperthyroidism. Pregnancy. Values are the highest during the third trimester and usually return to the prepregnancy levels after delivery of the baby. What To Think About - Chylomicrons are another type of lipoprotein that are measured in a different test. Chylomicrons are in the blood and carry fat from your intestine to your liver. They carry triglycerides to your muscles for immediate use.
Or they carry triglycerides to fat tissue for storage. - Cholesterol screening is often available in supermarkets, pharmacies, shopping malls, and other public places. Home cholesterol testing kits also are available. The results of tests done outside a doctor's office or lab may not be accurate. If you have cholesterol screening done outside your doctor's office, talk with your doctor about the accuracy of the results.
SEER is an authoritative source of information on cancer incidence and survival in the United States. SEER currently collects and publishes cancer incidence and survival data from population-based cancer registries covering approximately 28 percent of the U.S. population. Sezary syndrome is a generalized mature T-cell lymphoma characterized by the presence of erythroderma, lymphadenopathy and neoplastic T-lymphocytes in the blood. The neoplastic T-cells have cerebriform nuclei, and the disease is by tradition regarded as a variant of Mycosis Fungoides. However, the behavior is usually much more aggressive. Sezary syndrome (SS) is defined by the triad of erythroderma, generalized lymphadenopathy, and the presence of clonally related neoplastic T-cells with cerebriform nuclei (Sezary cells) in skin, lymph nodes, and peripheral blood. The histologic features in SS may be similar to those in Mycosis Fungoides; however, the cellular infiltrates in SS are more often monotonous, and epidermotropism may sometimes be absent. SS is a leukemia and thus, by definition, a generalized disease. All visceral organs may be involved in the terminal stages, however, there is often a remarkable sparing of the bone marrow. This is an aggressive disease with an overall survival rate at 5 years of 10-20%. Most patients die of opportunistic infections. Prognostic factors include the degree of lymph node and peripheral blood involvement.
Chapter 9 - Sensory system evaluation The evaluation of somatic sensation, or any sensory modality for that matter, is highly dependent on the ability and desire of the patient to cooperate. Sensation belongs to the patient (i.e., is subjective) and the examiner must therefore depend almost entirely on their reliability. For example, a patient with dementia or a psychotic patient is likely to give only the crudest, if any, picture of their perception of sensory stimuli. An intelligent, stable patient may refine asymmetries of stimulus intensity to such a degree that insignificant differences in sensation are reported, only confusing the picture. Suggestion can also modify a subject's response to a marked degree (e.g., to ask a patient where a stimulus changes suggests that it must change and may therefore create false lines of demarcation in an all too cooperative patient). Obviously the examiner must not waste time and efficiency on detailed sensory testing of the psychotic or demented patient, and must warn even the most cooperative patient that minute differences requiring more than a moment to decipher are probably of no significance. Additionally, the examiner must avoid any hint of predisposition or suggestion. Nonetheless, even after all precautions are taken, problems with the sensory exam still arise. Uniformity in testing is almost impossible and there is considerable variability of response in the same patient. Fatigue can be an additional confounding variable and is particularly likely to be induced by a long, detailed and tedious sensory examination. A rapid, efficient exam is the most practical means of diminishing fatigue. Use of a pressure transducer, such as von Frey monofilaments, allows more consistent stimulus intensities and therefore more objectivity in the examination; however, this is impractical at the bedside and does not eliminate patient variability.
Sensory changes that are unassociated with any other abnormalities (i.e., motor, reflex, cranial, hemispheric dysfunctions) must be considered weak evidence of disease unless a pattern of loss in a classical sensory pattern is elicited (for example, in a typical pattern of peripheral nerve or nerve root distribution). Therefore, one of the principal goals of the sensory exam is to identify meaningful patterns of sensory loss (see below). Bizarre patterns of abnormality, loss, or irritation usually indicate hysteria or simulation of disease. However, the examiner must be aware of their own personal limitations. Peripheral nerve distributions vary considerably from individual to individual, and even the classic distributions are hard to keep in mind unless one deals with neurologic problems frequently. Therefore, it is advisable for the examiner to carry a booklet on peripheral nerve distribution, sensory and motor (such as: Aids to the Examination of the Peripheral Nervous System, published by the Medical Council of the U.K.). As in all components of the examination, an efficient screening exam must be developed for sensory testing. This should be more detailed when abnormalities are suspected or detected or when sensory complaints predominate. Basic testing should sample the major functional subdivisions of the sensory systems. The patient's eyes should be closed throughout the sensory examination. The stimuli should routinely be applied lightly and as close to threshold as possible so that minor abnormalities can be detected. Spinothalamic (pain, temperature, and light touch), dorsal column (vibration, proprioception, and touch localization), and hemispheric (stereognosis, graphesthesia) sensory functions should be screened. Pain (using a pin or toothpick), vibration (using a C128 tuning fork), and light touch should be compared at distal and proximal sites on the extremities, and the right side should be compared with the left.
Proprioception should be tested in the fingers and toes and then at larger joints if losses are detected. Stereognosis, the ability to distinguish objects by feel alone, and graphesthesia, the ability to decipher letters and numbers written on skin by feel alone, should be tested in the hands if deficits in the simpler modalities are minor or absent. Significant defects in graphesthesia and stereognosis occur with contralateral hemispheric disease, particularly in the parietal lobe (since this is the somatosensory association area that interprets sensation). However, any significant deficits in the basic sensory modalities cause dysgraphesthesia and stereognostic difficulties whether the lesion or lesions are peripheral or central. Therefore, it is difficult or impossible to test cortical sensory function when there are deficits of the primary sensory functions. It may be surprising that the more basic modalities are usually not greatly affected by cortical lesions. With acute hemispheric insults (e.g., cerebral infarction or hemorrhage), an almost complete contralateral loss of sensation may occur. It is relatively short-lived, however; perception of pin-prick and light touch, as routinely tested, returns to almost normal levels, whereas proprioception and vibration may remain deficient (though considerably improved) in most cases. This lack of a significant long-term deficiency in basic sensation following hemispheric lesions has no completely satisfactory explanation, although some basic sensations probably have considerable bilateral projection to the hemispheres. Double simultaneous stimulation (DSS) is the presentation of paired sensory stimuli to the two sides simultaneously. This can be visual, aural or tactile. Light touch stimuli presented rapidly, simultaneously, and at minimal intensity to homologous areas on the body (distal and proximal samplings on extremities) may pick up very minor threshold differences in sensation. 
Additionally, this testing can also detect neglect phenomena due to damage of the association cortex. Neglect may be hard to distinguish from involvement of the primary sensory systems. However, neglect usually can be demonstrated in multiple sensory systems (i.e., visual, auditory, and somesthetic), confirming that this is not simply damage to one sensory system. Association cortex lesions, particularly involvement of the right posterior parietal cortex, may become apparent only on double simultaneous stimulation. The face-hand test is a further modification of DSS. This test takes advantage of the fact that stimuli delivered to the face dominate over stimulation elsewhere in the body. This dominance is best illustrated in children and in demented and therefore regressed patients. Before the age of ten, most strikingly earlier than age five, stimuli presented simultaneously to the face and ipsilateral or contralateral hand are frequently (more than three in ten stimulations) perceived at the face alone. Perception of the hand, and, if tested, other parts of the body is extinguished. In an older child or adult, several initial extinctions of the hand may occur, but very quickly both stimuli are correctly perceived. In the patient with diffuse hemispheric dysfunction (dementia), a regression to consistent bilateral extinction of the hand stimuli is frequently seen. This test can therefore be doubly useful: first, as an indication of diffuse hemispheric function, and second, by stimulating the face and opposite hand, as a means of detecting minor hemisensory defects (e.g., if the patient consistently extinguishes only the right hand and not the left, a sensory threshold elevation due to primary sensory system or association cortex involvement on the left is suspect).
Since the main goal of the sensory exam is to determine which, if any, components of the sensory system are damaged, it is important to consider the principal patterns of sensory loss resulting from disease of the various levels of the sensory system. These patterns of loss are based on the functional anatomy and we will also briefly review some of this anatomy. Peripheral neuropathy, that is, symmetrical damage to peripheral nerves, is a relatively common disorder that has many causes. Most of these can broadly be classified as toxic, metabolic, inflammatory or infectious. In this country, the most common causes are diabetes mellitus and the malnutrition of alcoholism, although other nutritional deficiencies or toxic exposures (either environmental toxins or certain medicines) are occasionally seen. Infections, such as Lyme disease, syphilis, or HIV can cause this pattern and there are inflammatory and autoimmune conditions that can also produce this pattern of damage. A more complete discussion can be found in Chapter 21. Because this is a systemic attack on peripheral nerves, the condition produces symmetrical symptoms. The initial symptoms are most often sensory and the longest nerves are affected first (the ones that are most exposed to the toxic or metabolic insult). The receptors of the feet are considerably farther removed from their cell bodies in the dorsal root ganglia than are the receptors of the hands. The metabolic demands on these neurons are substantial, which accounts for their being the first affected and for the early appearance of sensory loss in the feet in a "stocking" distribution. Later on, as the symptoms reach the mid-calf, the fingers are involved and a full "stocking-glove" loss of sensation develops. Even later, when the trunk begins to be involved, sensory loss is noted first along the anterior midline (Figure 9-1).
Vibration perception is often the earliest affected modality since these are the largest, most heavily myelinated and most metabolically demanding fibers. Usually the loss of pin-prick, temperature, and light-touch perception follow, with conscious proprioception (joint position sense) being variably affected. Despite the fact that proprioception follows many of the same pathways as vibration it is usually not as noticeably affected because the testing procedure (i.e., moving the toes or fingers up or down) is quite crude and is not likely to pick up early loss. The peripheral deep-tendon reflexes are depressed early in most cases of peripheral neuropathy, particularly the Achilles reflex. This is because the sensory limb of this reflex depends on large myelinated fibers. As a rule, symptomatic motor involvement is late and, when it occurs, it affects the intrinsic muscles of the feet first. Radiculopathy (nerve root damage) is the relatively common result of intervertebral disc herniation or pressure from narrowing of the intervertebral foramina due to spondylosis (arthritis of the spine). The most common presentation of this is sharp, shooting pain along the course of the nerve root (Figure 9-2). Damage to a single nerve root, even when severe, usually does not have any sensory loss because of the striking overlap of dermatomal sensory distribution (Figure 9-3). There may be slight loss, often accompanied with paresthesias (tingling or pins and needles) in small areas of the distal limbs where the sensory overlap is not great. Table 21-3 lists some of the common areas of paresthesia or decreased sensation with common nerve root injuries. Herpes zoster, which affects individual dorsal roots, nicely demonstrates dermatomal distribution because, despite the lack of sensory loss (attributable to overlap), vesicles ("shingles") appear at the nerve endings in the skin (see Figure 9-3). 
Nerve root damage in the cauda equina often produces a "saddle" distribution of sensory loss by affecting the lower sacral nerve roots. This saddle distribution of sensory loss can also be seen in anterior spinal cord damage (see the next section) and, in either case, must be taken quite seriously due to the potentially serious sequelae of spinal cord and cauda equina damage. Nerve root pain is often quite characteristic. It is typically sharp and well localized to the dermatomal distribution and may be brought on by stretching of the nerve root (Figure 1-5) or by maneuvers that load the intervertebral discs and compress the intervertebral foramina (Figure 1-4). However, pain can also "refer" (see Chapter 19). This referred pain is less localized and is often felt in the muscles (myotomal) or skeletal structures (sclerotomal) that are innervated by the nerve root. The person usually complains of a deep aching sensation. Myotomes should not be memorized but can be looked up easily by referring to the motor root innervations of muscles, which are essentially the same as their sensory innervations. Sclerotomal overlap is so great that localization on their basis is impractical. Spinal cord damage is characterized by both sensory and motor symptoms, both at the level of involvement, as well as below, by affecting the tracts running through the cord. Symptoms referable to the level of injury appear in the pattern of dermatomes and myotomes and, when present, are very useful for localizing the level of spinal cord damage. The symptoms of damage to the long sensory tracts (the dorsal columns and the spinothalamic tract) are less helpful in localizing the lesion because it is often impossible to determine the precise level of the sensory loss and also because, particularly in the case of the spinothalamic tract, there is considerable dissemination of the signal in the spinal cord before it is relayed up the cord.
Similar considerations make it difficult to localize the level of spinal cord damage by examining for damage to the descending (corticospinal) motor tracts. Therefore, when long tract damage is identified, one can only be certain that the lesion is above the highest level that is demonstrably affected. Compression of the spinal cord from the anterior side first involves the spinothalamic paths from the sacral region, and a "saddle" loss of pain and temperature perception is usually the first symptom even with lesions high in the spinal cord (Figure 9-4). In this case, as symptoms progress with greater degrees of compression, symptoms progressively ascend the body up toward the level of the actual cord damage (see Figure 9-4). Intramedullary lesions of the spinal cord (such as syrinx, ependymoma, or central glioma) may present with a very unusual pattern of "suspended sensory loss". This consists of an isolated loss of pain and temperature perception in the region of the expanding lesion because of damage to the crossing spinothalamic tract fibers (Figure 9-5). In this pattern of sensory loss due to expanding intramedullary lesions, there is "sacral sparing" of pain and temperature because the more peripheral spinothalamic fibers (the ones from the sacrum) are the last to be involved (see Figure 9-4). With intramedullary lesions, the dorsal columns are also usually spared until extremely late in the course of expansion, leaving touch, vibration, and proprioception intact. The loss of one or two sensory modalities (such as pain and temperature sense, in this case) with preservation of others (such as touch, vibration and joint position sense) is termed a "dissociated sensory loss" and is in contrast to the loss of all sensory modalities associated with major nerve or nerve root lesions or with complete spinal cord damage. Complete hemisection of the cord is seen occasionally in clinical practice and is quite illustrative of the course of spinal cord sensory pathways.
This lesion results in a characteristic picture of sensorimotor loss (Brown-Sequard syndrome), which is easily recognized due to the loss of dorsal column sensations (vibration, localized touch, joint position sense) on the ipsilateral side of the body and of spinothalamic sensations (pain and temperature) on the contralateral side (Figure 9-6). Brain stem involvement, like involvement of the spinal cord, is characterized by long-tract and segmental (cranial nerve) motor and sensory abnormalities and is localized by the segmental signs. The picture of ipsilateral cranial nerve abnormality and contralateral long-tract dysfunction is quite consistent (Figure 9-7). Both the dorsal columns and pyramids decussate at the spinomedullary junction (the spinothalamic system has already decussated in the spinal cord). This accounts for the typical crossed presentation of symptoms in the body. Below the level of the midbrain, the spinothalamic and dorsal column (medial lemniscus) systems remain separate and therefore lesions may involve the pathways separately (i.e., there may be a dissociated sensory loss). For example, an infarction caused by occlusion of the posterior inferior cerebellar artery typically involves only the lateral portion of the medulla. The ipsilateral trigeminal tract and nucleus and the spinothalamic tract are frequently included in the lesion, leaving a loss of pain and temperature perception over the ipsilateral face (see Chapter 5) and the contralateral side of the body from the neck down. The medial lemniscus and its modalities (i.e., vibration, joint position, and well-localized touch) are spared. Thalamic lesions are associated with contralateral hemihypesthesia. Initially, if the lesion is acute, there is considerable loss bordering on anesthesia, but some recovery is expected over time, especially of touch, temperature, and pain perception. Vibration and proprioception remain more severely affected.
Unfortunately, episodic paroxysms of contralateral pain may be a striking and not infrequent residual of thalamic destruction (this is one of the "central pain syndromes"). The pain can be controlled occasionally with anticonvulsants. An additional residual that may develop over time is marked contralateral hyperpathia in spite of the presence of diminished overall sensitivity of the skin. Stimulation of a site with a pin causes a very unpleasant, poorly localized and spreading sensation, which is frequently described as burning. This is presumably an irritative phenomenon of the nervous system, although it may also result from loss of normal pain-suppression mechanisms. It is seen most often after thalamic lesions, although it can occur as a residual of lesions in any portion of the central sensory systems. A hypersensitivity to cold sensation frequently accompanies the hyperpathia. As discussed earlier, cortical lesions tend to leave minimal deficits in basic sensation. However, especially if the parietal lobe is damaged, there may be striking contralateral deficits in the higher perceptual functions (see Chapter 2). Stereognosis and graphesthesia are abnormal in spite of minor difficulties with vibration and proprioception and even less, if any, difficulty with pain, temperature, and light-touch perception. Of course, if there is significant deficit of primary sensations, it may be impossible to test for deficits of higher perceptual functions. - Brodal, A.: Neurological Anatomy in Relation to Clinical Medicine, ed.2, New York, Oxford University Press, 1969. - Medical Council of the U.K.: Aids to the Examination of the Peripheral Nervous System. Palo Alto, Calif., Pendragon House, 1978. - Monrad-Krohn, GH, Refsum, S.: The Clinical Examination of the Nervous System, ed. 12, London, H.K. Lewis & Co., 1964. - Wolf, J.: Segmental Neurology, Baltimore, University Park Press, 1981. 
Define the following terms: conscious proprioception, agnosia (stereoagnosia), graphesthesia, dermatome, sclerotome, myotome, radiculopathy, myelopathy, anesthesia/hypoesthesia, hyperpathia, allodynia, hyperesthesia, dysesthesia, paresthesia, polyneuropathy, subjective.
9-1. What are the steps involved in the sensory exam?
9-2. How is it possible to lose some types of sensations and not others?
9-3. What sensations are conveyed by the small-diameter sensory nerve fibers in a peripheral nerve?
9-4. What sensations are conveyed by large-diameter sensory nerve fibers in a peripheral nerve?
9-5. What sensations are conveyed by the dorsal columns?
9-6. What sensations are conveyed by the spinothalamic tract?
9-7. What is tested by double simultaneous stimulation?
9-8. Where would the lesion be if the patient was able to detect all modalities of sensation but could not recognize an object placed in the right hand?
9-9. What is the common sensory loss from damage to the spinal cord?
9-10. What would be the expected sensory loss from damage restricted to the left side of the spinal cord?
9-11. What is the characteristic of sensory loss due to damage of peripheral nerves in a limb?
9-12. What is the pattern of sensory loss seen in diffuse damage to peripheral nerves (polyneuropathy)?
Research from Japan has produced a NIRS probe small enough for implanted use in brain functional imaging of humans. Based on flexible printed circuit technology, the probe is also cheaper and simpler to produce than conventional micro fibre optic devices. Near infrared spectroscopy (NIRS) is a spectroscopic method for measuring blood volume and oxygen saturation in both brain and muscle tissue by the light absorption of oxyhaemoglobin (O2Hb) and deoxyhaemoglobin (HHb). Haemoglobin concentration and oxygenation are useful as indicators of activity within tissue and its metabolism; this is useful in a wide range of medical applications including the examination of muscles in sports medicine and examining brain function by using NIRS to detect and map active areas of the brain. There are several types of NIRS systems currently in use. Single-point systems can measure changes in concentration of O2Hb and HHb. Spatially-resolved NIRS, time-resolved NIRS and phase-modulated NIRS can measure absolute values of concentration of haemoglobin, oxygen saturation and blood volume. Brain functional imaging by NIRS is made possible by multichannel NIRS systems. NIRS probes need to incorporate both a light source and sensors to read the way the light interacts with the tissue. The optical probes of existing NIRS systems are designed for non-invasive use, attaching to the outside of the body. Current commercially available probes are typically around 10 x 30 mm in area and 5 mm thick. With probe volumes in the region of 1500 mm3, they are not suited to implanted use or for measuring activity in the brains of small animals or use in evaluating the epileptogenic focus in the cerebral cortex.
Small volume, high volume
The probe design used in the work from Shizuoka University is 100 times smaller than these conventional NIRS probes with a volume of only 15 mm3.
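The "100 times smaller" figure follows directly from the dimensions quoted above; a trivial arithmetic check (all numbers taken from the article, none assumed):

```python
# Conventional NIRS probe: ~10 mm x 30 mm footprint, ~5 mm thick (per the article)
conventional_mm3 = 10 * 30 * 5   # ~1500 mm^3, matching the quoted volume
# Reported volume of the FPC-based implantable probe from Shizuoka University
implantable_mm3 = 15

print(conventional_mm3)                     # 1500
print(conventional_mm3 // implantable_mm3)  # 100, i.e. 100x smaller
```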
The new design is also easier to manufacture than alternatives as it is fabricated using flexible printed circuit (FPC) technology rather than using micro fibre optics, as author Dr Masatsugu Niwayama explained: "When using micro fibre optics to manufacture implantation probes, expert techniques are required to connect two fibre cores. Fragility of the fibre also increases the difficulty of probe assembly. Recently, FPC substrate with wire bonding can be ordered from many printed circuit board companies without much difficulty. The easy manufacturing would enable us to mass produce the probes and even make them disposable items. Disposable probes will be effective to help prevent infection in hospitals." The team have previously developed a similar size of probe that was only able to measure changes in haemoglobin concentration and had the problem that the measurement sensitivity of each tissue type was unclear quantitatively. In the work presented in their Letter they have addressed both of these issues, presenting absolute value measurements of haemoglobin concentration based on theoretical sensitivity analysis and a spatially resolved method. They also present analysis of the difference in measurement sensitivities for the brain and scalp between implanted sensors and body surface sensors. Since completing the reported work the Shizuoka team have already conducted some in vivo tests with animals and are confident that use in humans should follow within a year for acute (short term) NIRS-ECoG (electrocorticography) simultaneous measurement in patients during neurosurgery. The key hurdle remaining to clear the probe for chronic (long term) implantation in human subjects is completion of toxicity/biocompatibility tests under chronic implantation in large animals to prove the safety and reliability of the device for extended use. 
Meanwhile, the team are working on increasing the NIRS recording channels to observe the localised/distributed hemodynamic activity related to brain diseases. "Our interests are now moving into revealing the connectivity between neuronal electrophysiological activity and the focal hemodynamics, and its difference between the normal and abnormal (diseased) brain. We also hope that our method will solve other remaining problems of clinical NIRS application, such as artefacts due to scalp blood flow," said Niwayama. "Recently a small number of researchers have been trying to observe the cortical hemodynamics directly from the brain surface for high accuracy recording, and our device can facilitate the spread of this experimental method since the measurement system can be manufactured less expensively than optical-fibre-based products." The team's hope is that in the next ten years work like theirs will enable new findings in cortical hemodynamics that will then see the use of non-invasive NIRS equipment become widespread in the diagnosis of brain functional diseases. More information: "Implantable thin NIRS probe design and sensitivity distribution analysis." M. Niwayama, T. Yamakawa. Electronics Letters, Volume 50, Issue 5, 27 February 2014, pp. 346-348. DOI: 10.1049/el.2013.3921, Print ISSN 0013-5194, Online ISSN 1350-911X
Pneumonia still causes around two million deaths among children annually (20% of all child deaths). Any intervention that would affect pneumonia mortality is of great public health importance. This meta-analysis provides estimates of the mortality impact of the case-management approach proposed by WHO. We were able to get data from nine of ten eligible community-based studies that assessed the effects of pneumonia case-management intervention on mortality; seven studies had a concurrent control group. Standardised forms were completed by individual investigators to provide information on study description, quality scoring, follow-up, and outcome (mortality) data with three age groups (<1 month, <1 year, 0-4 years) and two mortality categories (total and pneumonia-specific). Meta-analysis found a reduction in total mortality of 27% (95% CI 18-35%), 20% (11-28%), and 24% (14-33%) among neonates, infants, and children 0-4 years of age, respectively. In the same three groups pneumonia mortality was reduced by 42% (22-57%), 36% (20-48%), and 36% (20-49%). There was no evidence of publication bias and results were unaltered by exclusion of any study. A limitation of the included studies is that they were not randomised and, because of the nature of the intervention, could not be blinded. Community-based interventions to identify and treat pneumonia have a substantial effect on neonatal, infant, and child mortality and should be incorporated into primary health care. Department of International Health, Johns Hopkins University Bloomberg School of Public Health, Baltimore, MD, USA. Correspondence: Professor Robert E Black, Department of International Health, Johns Hopkins Bloomberg School of Public Health, 615 North Wolfe Street, Baltimore, MD 21205, USA. Tel +1 410 955 3934; fax +1 410 955 7159
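The percentages reported in the abstract are relative risk reductions; converting them to relative risks is simple arithmetic (RR = 1 - RRR, with the confidence limits swapping ends). A quick illustrative sketch using the neonatal total-mortality estimate above; the helper function is ours, not part of the paper:

```python
def rrr_to_rr(rrr_pct, ci_low_pct, ci_high_pct):
    """Convert a relative risk reduction (%) and its 95% CI to a relative risk.

    A 27% reduction means RR = 1 - 0.27 = 0.73; the upper limit of the
    reduction gives the lower RR limit, and vice versa.
    """
    rr = 1 - rrr_pct / 100
    return rr, 1 - ci_high_pct / 100, 1 - ci_low_pct / 100

# Total mortality among neonates: 27% reduction (95% CI 18-35%)
rr, lo, hi = rrr_to_rr(27, 18, 35)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 0.73 0.65 0.82
```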
DEAR DOCTOR K: Should I request “advanced” cholesterol testing at my next checkup? DEAR READER: A standard cholesterol test, or lipid profile, measures levels of HDL, LDL, total cholesterol and triglycerides in the blood. So-called “advanced” cholesterol testing is a more detailed version of this test. Cholesterol is a waxy, yellowish fat. It travels through your bloodstream in tiny, protein-covered particles called lipoproteins. These particles contain cholesterol and triglycerides, a type of fat. The smallest and densest particles are high-density lipoproteins (HDL). Also known as “good” cholesterol, HDL removes cholesterol from artery walls. Low-density lipoprotein (LDL) particles are known as “bad” cholesterol. They add cholesterol to the artery walls. There, it creates artery-clogging plaque that can trigger a heart attack or stroke. Most doctors use LDL levels to predict heart attack risk. But many people with heart disease have LDL levels that aren’t especially high. It turns out that not all LDL particles are created equal. Larger, fluffier LDL particles may have a harder time getting into arteries. Smaller, more tightly packed LDL may have an easier time getting into arteries, making them more dangerous. And a particular protein on LDL — apoB — further increases heart disease risk. Advanced cholesterol tests measure LDL sub-particles as well as apoB. My colleague Dr. Jorge Plutzky is director of the Lipid/Prevention Clinic and co-director of Preventive Cardiology at Harvard-affiliated Brigham and Women’s Hospital. He says that for the average person, the additional detail from advanced lipid testing isn’t critical. It wouldn’t change the treatment a doctor would have otherwise recommended. Why does he say that? Because there are plenty of new tests that have theoretical value in determining a person’s risk of heart disease and stroke. 
But they should be considered standard, generally recommended tests only if they actually make the prediction of risk more accurate. For a new test to become standard, it needs to be used along with standard tests in thousands of people. Those people then are followed for many years. Some will develop heart disease or have a stroke, and some will not. The standard tests, since they’ve already been proven, will do a good job. While they won’t be perfect, they will identify people who are more likely to develop disease. But will the new test, when added to the standard tests, make that prediction even more accurate? That’s what has to be shown for doctors to use the new test. Dr. Plutzky thinks that advanced cholesterol testing might help to better understand heart disease risk in certain groups of people. These include people with: • A history of cardiovascular disease (CVD) without obvious risk factors (such as high blood pressure or diabetes); • A history of CVD before age 55 in men or before age 65 in women; • A parent or sibling with early heart disease. Whatever you decide about the test, don’t forget tried-and-true strategies for lowering heart disease risk. These include eating a healthy diet, exercising regularly and maintaining a healthy weight. Dr. Komaroff is a physician and professor at Harvard Medical School. To send questions, go to AskDoctorK.com, or write: Ask Doctor K, 10 Shattuck St., Second Floor, Boston, MA 02115.
What is a central venous line or Central Venous Catheter? In medicine, a central venous line ("central venous catheter", "CVC", "central line" or "central venous access catheter") is a catheter with multiple openings (lumens) at the end tip, placed into a large vein in the neck (internal jugular vein), chest (subclavian vein or axillary vein) or groin (femoral vein). It is used to administer medication or fluids, obtain blood tests (Blood & Pathology tests in Intensive Care), and directly obtain cardiovascular measurements such as the central venous pressure (also known as 'filling' pressure). The line that is inserted at the elbow is called a PICC (Peripherally Inserted Central Catheter), and the lines that enter the shoulder, neck or groin are called Central Venous Lines. The term central line will be used from this point on. When the line is not in use, the access lines are 'capped' with a plastic bung and a clamp, in order to avoid blood leaking out of the line and in order to avoid infections. The central line is usually sutured (stitched) in at the entry point to the blood vessels and is also secured with a transparent dressing to keep the line clean and visible. The central venous line can be kept in for up to 10 days, but this can vary from ICU to ICU, as different protocols apply in different units. However, the longer the central venous line is kept in place, the higher the risk of an infection, caused by bacteria moving into the blood stream. The infection risk can be diminished by changing the central line regularly. Why does your loved one need a central line? How is it inserted? What are the risks of central lines? Keeping the line clean Why does your loved one need a central line?
Indications for the use of central lines include: - Monitoring of the central venous pressure (CVP) in acutely ill patients to quantify fluid balance - Long-term intravenous antibiotics - Long-term parenteral nutrition (TPN), especially in chronically ill patients - Long-term pain medications - Drugs that are prone to cause phlebitis (inflammation of the veins) in peripheral veins, such as: - Calcium chloride - Hypertonic saline - Potassium chloride - vasopressors, inotropes or vasodilators (e.g. epinephrine, noradrenaline, dopamine) - Plasmapheresis (removal, treatment, and return of components of blood plasma from blood circulation) - Peripheral blood stem cell collections - Haemodialysis or Haemofiltration (Dialysis Catheter, Dialysis Machines) - Frequent blood draws - Frequent or persistent requirement for intravenous access - Need for intravenous therapy when peripheral venous access is impossible - Blood and blood product transfusions Central venous catheters usually remain in place for a longer period of time than other venous access devices, especially when the reason for their use is longstanding (such as total parenteral nutrition/TPN in a chronically ill Patient). For such indications, a Hickman line, a PICC line or a portacath may be considered because of their smaller infection risk. Sterile technique is highly important here, as a line may serve as a place of entry for pathogenic organisms, and the line itself may become infected with organisms such as Staphylococcus aureus. How is it inserted? The doctor or a nurse specialist performs the procedure, cleans the skin and applies local anesthetic if required. The location of the vein is then identified by landmarks or with the use of a small ultrasound device.
A hollow needle is advanced through the skin until blood is aspirated; the colour of the blood and the rate of its flow help distinguish it from arterial blood: light red blood that is 'pumping out' suggests that an artery has been accidentally punctured. Ultrasound guidance now represents the gold standard for central venous access. Prior to the procedure, your loved one may be given sedation so that your loved one is drowsy and relaxed during the procedure. The line is then inserted using the Seldinger technique: a blunt guidewire is passed through the needle, then the needle is removed. A dilating device may be passed over the guidewire to slightly enlarge the tract. Finally, the central line itself is passed over the guidewire, which is then removed. All the lumens of the line are aspirated (to ensure that they are all positioned inside the vein) and flushed. A chest X-ray is typically performed afterwards to confirm that the line is positioned inside the superior vena cava and, in the case of insertion through the subclavian vein, that no pneumothorax was caused as a side effect.

A PICC line is also inserted in a sterile fashion:
- A tourniquet is applied to the arm and the area is cleaned and draped;
- Local anaesthetic is injected into the skin near the vein;
- A cannula is then inserted into the vein, the needle is removed, and the tourniquet is released;
- A wire is inserted through the cannula and further into the vein;
- A larger catheter is then inserted over the wire to enlarge the skin opening and to hold the vein open;
- The wire is removed and all that remains in the arm is the catheter;
- The PICC is then threaded up the arm vein through the catheter to a previously measured level; and
- The PICC is then secured with a StatLock or dressing.

The injection of local anaesthetic is usually the part of the procedure that causes the most discomfort.
Once the needle or cannula is sitting in the vein, the remainder of the procedure is not painful. Some minor bleeding may be seen at the insertion site on the first day.

What are the risks of central lines?

As with most procedures, there is a small risk of complications:
- Pneumothorax (accidental puncture of the lung) for central lines placed in the chest. The incidence is thought to be higher with subclavian vein catheterization. In catheterization of the internal jugular vein, the risk of pneumothorax can be minimized by the use of ultrasound guidance.
- Infection is possible with any line, central or otherwise, and the risk increases with the age of the line. About one line in 20 will become infected. The signs of infection include redness, swelling and tenderness around the line where it enters the skin, and fever or chills. If a line infection has occurred, the line usually has to be removed.
- Venous thrombosis (blood clot in the veins).

Keeping the line clean

Prevention of infection is an important consideration. For this reason, touching the central line is discouraged, and anyone who does must wash their hands first. Cleanliness of the skin around the central line is paramount, and the line should not be touched unless hands have been washed prior to touching it. The CVC dressing needs to be changed every 2-3 days or as needed. Of course, if you have any questions or concerns, please discuss them with the ICU nurses and doctors.

All Intensive Care interventions and procedures carry a degree of potential risk, even when performed by skilled and experienced staff. Please discuss these issues with the medical and nursing staff who are caring for your loved one. The information contained on this page is general in nature and therefore cannot reflect individual Patient variation. It is meant as a back-up to specific information which will be discussed with you by the doctors and nurses caring for your loved one.
INTENSIVE CARE HOTLINE attests to the accuracy of the information contained here BUT takes no responsibility for how it may apply to an individual Patient. Please refer to the full disclaimer.
- What is it?
- Facts to Know
- Questions to Ask
- Key Q&A
- Lifestyle Tips
- Organizations and Support

What is it?

Genital herpes is a contagious infection caused by a virus known as herpes simplex virus (HSV). According to the U.S. Centers for Disease Control and Prevention, genital herpes affects about 16.2 percent of the population, or approximately one in six people aged 14 to 49. Although the infection can be serious for newborn babies and people who are chronically ill, rarely is it fatal. While there is still no known cure, genital herpes does respond well to treatment.

There are two types of herpes simplex virus, herpes simplex virus type 1 (HSV-1) and herpes simplex virus type 2 (HSV-2). Both types are related to the family of viruses that cause chicken pox and shingles. Both HSV-1 and HSV-2 can cause genital herpes. "Oral herpes" causes sores and blisters on the lips and gums and in the mouth, typically referred to as cold sores. Oral herpes is very common and can be spread by kissing or oral sex. It is usually caused by HSV-1. "Genital herpes" causes sores in the genital area. The sores it causes often are painful and sometimes itchy. Genital herpes can cause serious health problems in infants who become infected by their mothers during delivery and in people whose immune systems are weakened. Genital herpes can be caused by HSV-1 or HSV-2; it is most often caused by HSV-2.

For reasons not entirely clear, many people with genital herpes either have no visible symptoms or don't recognize the symptoms. The virus can be transmitted with or without symptoms being present. But the major concern with both oral and genital herpes is that you remain infected for life and there is no cure. When it does cause symptoms, genital herpes can produce sores in and around the vaginal area, on the penis, around the anal opening and on the buttocks or thighs. Occasionally, sores also appear on other parts of the body where broken skin has come into contact with the virus.
HSV remains in certain nerve cells of the body for life, causing periodic symptoms in some people while remaining dormant in others. Like other genital ulcer diseases, genital herpes increases both the risk of acquiring and transmitting HIV, the virus that causes AIDS, by providing a point of entry or exit for HIV.

One of the most bewildering and frustrating aspects of genital herpes is the periodic outbreak of sores that infected people often experience. Recurrences of genital herpes can be upsetting and sometimes painful. Moreover, the emotional stress over transmitting the disease to others and disrupting sexual relations during outbreaks, as well as informing your sexual partner of your infection status, can take a toll on personal relationships. With proper counseling, improved treatments and prevention measures, however, couples can cope with and manage the disease effectively.

Genital herpes is acquired by sexual contact with someone who is infected. A decade ago, it was believed that the virus could be transmitted only when the virus was active and causing symptoms, such as sores and blisters. Now, it is known that the virus can spread even when there are no symptoms (called asymptomatic transmission). In addition, recent research suggests that a large proportion of people who appear to have no symptoms do have symptoms that they just don't recognize. If you have oral herpes, you also can transmit the infection to the genital area of a partner during oral-genital sex. Some genital herpes infections in the United States are due to HSV-1; presumably, many of these were transmitted during oral sex.

There is no documented case of herpes being spread by contact with objects such as toilet seats or hot tubs. While rare, transmission is possible from skin-to-skin contact through open sores. Prudent hand-washing and personal hygiene decrease or nearly eliminate that risk.
Genital herpes is not always easy to diagnose because signs and symptoms vary greatly. Some studies show that as many as two-thirds of all people infected with genital herpes will experience either no symptoms or will have symptoms so mild or atypical that they will not notice them or will mistake them for something else, like a yeast infection. Recent research has shown that after receiving health education about symptoms of genital herpes, many people who were thought to have asymptomatic infection (infection with no symptoms) were able to recognize symptoms. The first episode of genital herpes is referred to as the primary outbreak, an episode occurring within a week or two after exposure. When it produces symptoms, the primary outbreak is characterized by lesions at the infection site and can be accompanied by flu-like symptoms, including headache, fever, painful urination and swollen glands in the groin. Usually, small red bumps appear first, develop into blisters and then become painful open sores. Lesions can occur on the pubic hair area, vulva and perineum, inside the vagina and on the cervix in women, on the penis in men, at the rectum or the urethral opening of women and men or on the buttocks or thighs. These lesions usually heal within two to four weeks. Scabs may form on skin surfaces, such as the penis, but not on mucosal surfaces such as the vagina. Not all individuals who are exposed to the virus will experience a primary episode directly following exposure, or the symptoms may be so mild that they go unrecognized. Almost immediately after HSV infects your body and before symptoms appear, the virus travels to a sensory nerve root at the base of the spinal column called the sacral ganglion. It remains there in a latent or dormant stage indefinitely. In some people the virus reactivates and travels back to the skin, where it multiplies until it erupts at the surface in the form of a sore. 
An itching, tingling or burning sensation in the genital area or buttocks often signals an upcoming episode. These warning symptoms are called the prodrome. Most people who have primary infection will experience periodic outbreaks, or recurrences. For many, symptoms will reappear an average of four or five times a year lasting about five to 10 days. Between outbreaks, the virus retreats to the sacral ganglion in the spine where it is protected from the body's immune system. Infected people develop antibodies in response to genital herpes infection but, unfortunately, HSV antibodies cannot completely protect a person against different HSV types or against reactivation of the dormant virus. Periodic outbreaks tend to become less frequent and less severe over time. Eventually outbreaks may disappear altogether. Not all outbreaks have symptoms, and the virus may continue to be transmitted from a variety of sites in the genital area or in genital secretions or from lesions that are hidden or too small to notice. The trigger for these recurrences is not known. Stress, menstruation, infections and emotional distress may contribute. However, research has shown that episodes can recur when these factors are absent. Although sores may be visible to the naked eye, laboratory tests may be needed to distinguish herpes sores from other infections. For several years, the most common method of diagnosis has been the viral culture. A new sore is swabbed or scraped, and the sample is added to a laboratory culture containing healthy cells. When examined under a microscope after several days, the cells show changes that indicate growth of the herpes virus. A major disadvantage of viral culture is that the specimen must be collected from a lesion or sore; when the lesion begins to heal, the test becomes unreliable. 
A test called the polymerase chain reaction (PCR) test is more sensitive than standard culture tests at identifying the herpes virus in the urinary and genital tracts; however, it is expensive and therefore not used very often.

Blood tests have become more popular because they can detect evidence of infection even when sores are not present. These tests can be done on a small amount of blood taken from the arm or finger and, in some settings, results may be available immediately. Because they detect antibody (made by the body in response to the infection), they may not be positive until several weeks after exposure. Because most HSV-2 is genital, a positive blood test for HSV-2 usually indicates genital herpes. Because so many people in the United States have cold sores due to HSV-1, testing for HSV-1 antibodies is not routinely done. However, because genital herpes may be caused by HSV-1, a negative test for HSV-2 does not rule out genital herpes infection due to HSV-1. Interpretation of test results should be done by a clinician. A major advantage of the HSV-2-specific test is that it can be done when no sore is present. It may, therefore, detect infection in people who have not had recognized symptoms.

Counseling at the time of diagnosis and ongoing support is important for everyone with genital herpes. Such support may be especially important for those who are diagnosed but have no symptoms. Although treatment and counseling are similar for genital herpes, whether caused by HSV-1 or HSV-2, knowing the type of HSV may be helpful for the health care professional. For example, genital herpes caused by HSV-1 usually presents with milder symptoms and less frequent outbreaks.

Although herpes cannot be cured, there are several drugs that can reduce the intensity of symptoms, as well as the number of recurrences. Acyclovir (Zovirax), valacyclovir (Valtrex) and famciclovir (Famvir) are all prescription antiviral drugs that are effective in treating genital herpes.
Dosage, frequency and duration of treatment vary depending upon the individual and the type of treatment. They are taken by mouth. Topical creams are ineffective. Intravenous treatment may be used in the hospital, specifically for individuals who have a suppressed immune system, such as those who have HIV/AIDS. Since all three drugs are good, effective antivirals, decisions about which to use usually take into account convenience and cost.

Valacyclovir has been approved by the U.S. Food and Drug Administration for prevention of genital herpes transmission. However, while valacyclovir significantly decreases the risk of sexual transmission of herpes, transmission can still occur. Also, it isn't known whether or not valacyclovir prevents transmission of genital herpes in same-sex couples.

Treatment can be taken in different ways. "Episodic therapy" is taken at the first appearance of symptoms. This therapy involves taking daily dosages of a drug until symptoms subside, usually for a course of one to five days. The antiviral drugs are safe, have few side effects, shorten the length of first episodes and reduce the severity of recurring outbreaks, especially if taken within 24 hours of the onset of prodromal symptoms. Episodic therapy will not prevent transmission between episodes.

For those who have frequent recurrences, "suppressive therapy" can keep the virus in check indefinitely. This treatment involves daily medication, even when you have no symptoms. It can reduce the number of recurrences significantly. Suppressive therapy also reduces the chances that an infected person will transmit the virus to a sexual partner, primarily because it reduces asymptomatic shedding of the virus.

As for other treatments, there is some indication that some natural remedies such as zinc, vitamins C and A, lysine, Siberian ginseng and echinacea may enhance the immune system's response to herpes.
Aloe vera extract and other topical ointments may speed healing time of lesions, but experts caution that topical treatment of sores appears to have no added benefit when used in conjunction with antiviral drugs. No natural therapy has been proven to benefit people with herpes. Treating women who develop genital herpes during pregnancy is critical to protecting newborns from acquiring the virus. Nearly half of the babies infected with herpes either die or suffer neurologic damage. Babies born with herpes can also develop encephalitis (inflammation of the brain), severe rashes and eye problems. Fortunately, only a small percentage of women with HSV pass the infection onto their babies. The risk of transmission to an infant varies greatly depending on when a woman is infected. A pregnant woman who develops a first episode of genital herpes during her pregnancy is at highest risk of passing the virus to her fetus and may be at higher risk for premature delivery. If a mother has her first outbreak near or at the time of a vaginal birth, the baby's risk of infection is high. If the outbreak is a recurrence—meaning the mother was infected before she was pregnant—the baby's risk is much lower. Overall, studies show that less than 2 percent of pregnant women with HSV acquired the virus during pregnancy. Before much was known about how HSV is transmitted from mother to baby during birth, many pregnant women with the virus were given cesarean sections, regardless of when they became infected. Today, cesarean sections are limited to women who have detected sores in or near the birth canal at the time of labor. Women whose virus is active late in pregnancy may be put on suppressive therapy to help prevent transmission to the infant. If an infant is infected, the antiviral medication acyclovir can greatly improve the outcome, particularly if treatment starts immediately. With early detection and treatment, most of the serious complications of neonatal herpes can be lessened. 
The drug acyclovir appears to be safe in pregnancy, but it should only be used when the benefits of taking the drug outweigh the risks. At this time, there isn't as much information on the safety of valacyclovir and famciclovir, but both are classified as class B agents by the FDA (no evidence of risk in humans), similar to the risk of acyclovir. Comprehensive treatment should include education and counseling. Referrals to support groups as well as online resources may help people with genital herpes adjust to this recurring condition. Any type of unprotected vaginal, anal or oral-genital sex can transmit herpes. Until a vaccine is developed or research proves that antiviral drugs can stop transmission, the only effective means of preventing herpes is abstinence or consistent and correct condom use. However, even condoms are not risk-free because lesions can occur outside of the area protected by condoms. The risk of transmission is greatest when an outbreak occurs. As a rule, experts say it is best to abstain from sex when symptoms are present and to use condoms between outbreaks. Since oral herpes can be passed to the genitals from oral contact, it is prudent to abstain from oral sex if a cold sore is present. Couples in long-term monogamous relationships in which one partner is infected must weigh the risk of infection against the inconvenience of always having protected sex. Most infections take place fairly early in a relationship and research indicates that a person may become less infectious over time. Various vaccines are being developed and tested for HSV-2. The Herpevac Trial for Women, an eight-year clinical trial involving 50 trial sites and more than 8,000 women, investigated a vaccine to protect women against genital herpes disease. The trial, which wrapped up in 2010, found the vaccine to be ineffective. The study did, however, produce important scientific information to help guide future research toward a vaccine that will prevent genital herpes. 
Women can use dental dams or plastic wrap to cover the vulva and help protect their partners from contact with body fluids during oral sex. The only dental dams approved by the FDA for oral sex are Sheer Glyde dams. Because transmission can occur even when no lesions are present, always place a latex barrier between you and your partner's genitals and anus. Again, couples should abstain from sex during outbreaks, until the skin is fully healed. Lesbians or bisexual women should be aware that the herpes virus can be transmitted when a lesion from one woman comes into contact with the oral mucosa or the genital mucosa of her female partner.

If you experience an outbreak, whether primary or recurrent, you need to follow a few simple steps to improve healing and avoid spreading the infection to other parts of your body or to other people:
- Keep the infected area clean and dry to prevent secondary infections from developing.
- Avoid touching sores, and wash hands after contact with sores.
- Avoid sexual contact until sores are completely healed (that is, scabs have fallen off and new skin has formed over the site of the lesions).

People with early signs of a herpes outbreak or with visible sores should not have sex from the development of the first prodromal symptom until the sores have healed completely.

Facts to Know

1. Some studies show that most people infected with genital herpes don't know they are infected because they have no visible or no recognized symptoms.
2. Although transmission to infants is rare (only a small percentage of women with herpes pass the infection on to their babies), genital herpes causes death or neurological damage in nearly half of untreated newborns who become infected at birth.
3. Some genital herpes infections are caused by HSV-1 (oral herpes), probably resulting from oral-genital sex.
4.
As many as half of infected persons who have recurrent episodes will experience localized tingling and irritation at the site of infection, usually 12 to 24 hours prior to an outbreak. Recurrences average two to six per year but vary widely.
5. Preventive therapy can decrease the frequency and severity of recurrent outbreaks by up to 90 percent. However, therapy doesn't significantly reduce the frequency of recurrences once it is stopped. Recurrence also tends to lessen in intensity and duration over time.
6. Without treatment, recurrent infections usually last five to 10 days.
7. The first episode of infection, called the primary outbreak, is usually the most severe.
8. Although herpes vaccine research is being conducted, no vaccine is currently available.

Questions to Ask

Review the following "Questions to Ask" about herpes so you're prepared to discuss this important health issue with your health care professional.
1. How, what and when should I tell my partner about my infection?
2. Does my partner need to be tested?
3. Does having genital herpes mean I can't or shouldn't get pregnant?
4. How will I be monitored for outbreaks once I am pregnant?
5. Do I have to be careful about passing the virus to my children through casual contact?
6. How can I predict when I'm going to have another outbreak?
7. When am I at greatest risk for transmitting the virus to my partner?
8. Is there a cure for herpes? How can drug treatments help me?
9. How do I decide whether I need drugs to control recurrences?
10. How do I choose among the drugs available for treating herpes and preventing recurrences?
11. What can I do to make herpes outbreaks less painful?
12. Are there any support groups for people with herpes? Where can I find someone to talk to who understands how I'm feeling?

Key Q&A

1. What is my risk as a woman of transmitting genital herpes to my male partner?
The risk of infection is based on several factors, but according to one report, in heterosexual couples in which only one partner is infected, over one year, the virus was transmitted in 10 percent of cases. In 70 percent of these couples, transmission took place when the infected person had no symptoms. 2. Does having oral herpes protect me from genital herpes? No. Some experts have speculated that having oral herpes reduces the chances of acquiring genital herpes, but most authorities believe there is no significant cross-protection between the two types. 3. If I have no symptoms, doesn't that mean I am less likely to transmit the virus? Yes and no. The amount of virus produced is greatest at the time someone is having an outbreak. Yet many outbreaks occur without the person realizing it. Lesions can be so small or hidden from view that only special tests can prove one is having an outbreak. It is estimated that most people with herpes are infectious at some point in their lives when they don't have visible symptoms. 4. How effective are condoms for preventing infection? They are effective. The virus cannot penetrate through latex barriers. However, it is possible, although rare, to acquire infection during skin-to-skin contact if a lesion is present and not covered by a condom. 5. How is genital herpes diagnosed? Visual inspection is the most common way of making a diagnosis. Viral cultures are also commonly used, but only when a sore is present. Blood tests also are available that can accurately determine infection and can accurately distinguish between the two types of HSV, even when no symptoms are apparent. 6. How safe are the drugs used for treating genital herpes? All of the commonly used prescription medications are well-tolerated and have few short-term side effects. Acyclovir has been studied the longest, and its long-term safety appears to be good, both in pregnant women and in children. 7. What causes recurrent outbreaks of genital herpes? 
This is an important question and hard to answer definitively. Some factors that have been studied include stress, poor diet, birth control pills, sunlight, menstruation and fatigue. However, there is no way of accurately predicting when the next outbreak will come. Keeping one's immune system strong is important. (Persons with weakened immune systems, such as HIV-positive individuals, have more frequent and severe outbreaks than HIV-negative persons.) Experts also recommend getting emotional support, as the psychological impact of genital herpes often is more upsetting than the physical symptoms.

Lifestyle Tips

1. Talk to your date about herpes
If you have herpes and are planning to become sexually active with someone new, you owe it to them and to yourself to be honest about your own infection. You can spread the infection even if your virus is dormant and you have no open sores. Try practicing telling with a trusted friend or in front of a mirror. And stay calm. Keep your words simple and clear, and be prepared to answer any questions. In general, those with herpes find that with time and a better understanding of the disease, telling new partners becomes easier. They also discover that herpes doesn't affect their intimate relationships and sex lives as much as they originally feared.

2. Take precautions for oral sex
Unprotected oral sex offers no protection against sexually transmitted diseases. Most sexually transmitted diseases can be spread via oral sex. To protect yourself, make sure your partner uses a condom if you're performing oral sex; if he's performing oral sex on you or if you're having oral sex with a woman, use a dental dam, a flat piece of latex, to cover the vulva. You can get them in some medical supply stores. They provide a barrier between the mouth and the vagina or anus during oral sex. Household plastic wrap or a split and flattened, unlubricated condom can also be used if you don't have a dental dam.
Also, don't brush or floss your teeth right before having oral sex. Either may tear the lining of your mouth, increasing your exposure to viruses.

3. Practice the best protection
The best protection against any type of sexually transmitted disease is a latex condom. However, it doesn't provide 100 percent protection against STDs; only abstinence does. If you use a condom, make sure you use it properly. Human error causes more condom failures than manufacturing errors. Use a new condom with each sexual act (including oral sex). Handle it carefully so you don't damage it with your fingernails, teeth or other sharp objects. Put the condom on after the penis is erect and before any genital contact. Use only water-based lubricants with latex condoms. Ensure adequate lubrication during intercourse. Hold the condom firmly against the base of the penis during withdrawal, and withdraw while the penis is still erect to prevent slippage.

4. Get tested for STDs
No one test screens for all STDs. Some require a vaginal exam; others a throat or rectum swab or a blood or urine test. Just because you have a negative test doesn't mean you don't have the disease. Chlamydia, for instance, may travel far up into your reproductive tract, so your health care professional is unable to obtain a culture. Or your body may not have developed enough antibodies to a virus like HIV or HSV to turn up in a blood test. Still, it's important to ask your health care professional to regularly test you for STDs if you're sexually active in a nonmonogamous relationship (or have the slightest concern about your partner's fidelity). You can get tested at your health department, community clinic, private physician or Planned Parenthood. Or call the American Social Health Association's STD hotline at 1-800-227-8922 or the Centers for Disease Control and Prevention at 1-800-CDC-INFO (1-800-232-4636) for free or low-cost clinics in your area.

5.
Know whether you have an STD While some STDs may present with symptoms such as sores or ulcers or discharge, most, unfortunately, have no symptoms. Women are even more likely than men to have STDs without symptoms. Women are also more likely to develop serious complications from STDs. You can't always tell if you or a partner has an STD just by looking. Don't rely on a partner's self reporting and assume that will prevent you from acquiring an STD; many infected persons do not know they have a problem. They may think symptoms are caused by something else, such as yeast infections, friction from sexual relations or allergies. Educate yourself about your own body and, in turn, learn about your own individual risk for contracting an STD. One way to do this is to schedule an examination with a health care provider who can sit down with you and help you learn the principles for staying safe and sexually healthy. Don't allow fear, embarrassment or ignorance to jeopardize your future. 6. Talk to your children about STDs Sexually transmitted diseases are particularly common among adolescents. And it's an issue kids are concerned about. As a parent, you can play a large role in your adolescent's behavior, both in terms of the behavior you model yourself and in terms of the communication between you and your teen. Make sure your daughter has regular visits with a competent gynecologist, and that your son sees a medical professional who specializes in adolescent health at least once a year, if for nothing else than some plain talk about STDs and pregnancy. And talk to your kids. Study after study proves that when parents talk to their kids about sexual issues, their kids listen. Don't worry that talking about sex is the same as condoning it; hundreds of studies dispute that theory. 
In fact, studies show that when parents talk about sex, children are more likely to talk about it themselves, to delay their first sexual experiences and to protect themselves against pregnancy and disease when they do have sex.

Organizations and Support

American College of Obstetricians and Gynecologists (ACOG)
Address: 409 12th Street, SW, P.O. Box 96920, Washington, DC 20090

American Social Health Association (ASHA)
Address: P.O. Box 13827, Research Triangle Park, NC 27709

ASHA's STI Resource Center Hotline
Address: American Social Health Association, P.O. Box 13827, Research Triangle Park, NC 27709

Association of Reproductive Health Professionals (ARHP)
Address: 1901 L Street, NW, Suite 300, Washington, DC 20036

Address: 834 Chestnut Street, Suite 400, Philadelphia, PA 19107

CDC National Prevention Information Network
Address: P.O. Box 6003, Rockville, MD 20849

Address: 1301 Connecticut Avenue NW, Suite 700, Washington, DC 20036

National Center for HIV/AIDS, Viral Hepatitis, STD and TB Prevention
Address: Centers for Disease Control and Prevention, 1600 Clifton Road, Atlanta, GA 30333
Hotline: 1-800-CDC-INFO (1-800-232-4636)

National Family Planning and Reproductive Health Association (NFPRHA)
Address: 1627 K Street, NW, 12th Floor, Washington, DC 20006

Planned Parenthood Federation of America
Address: 434 West 33rd Street, New York, NY 10001
Hotline: 1-800-230-PLAN (1-800-230-7526)

Sexuality Information and Education Council of the United States (SIECUS)
Address: 90 John Street, Suite 704, New York, NY 10038

Books
Controlling Herpes Naturally: A Holistic Approach to Prevention & Treatment by Michele Picozzi
Herpes Simplex (Experience of Illness) by T. Natasha Posner
Sexual Health Questions You Have...Answers You Need by Michael V. Reitano and Charles Ebel
Sex: What You Don't Know Can Kill You by Joe S. McIlhaney and Marion McIlhaney
Understanding Genital Herpes (Women's Health Care) by Gilles R. G.
Monif

Medline Plus: Genital Herpes
Address: Customer Service, 8600 Rockville Pike, Bethesda, MD 20894

American Social Health Association
Address: P.O. Box 13827, Research Triangle Park, NC 27709

Centers for Disease Control and Prevention
Address: CDC Info, 1600 Clifton Rd, Atlanta, GA 30333

American Academy of Family Physicians, FamilyDoctor: Genital Herpes
Email: http://familydoctor.org/online/famdoces/home/about/contact.html (online contact form)

Last date updated: Tue 2013-02-12
A number of malignant and benign otic tumors occur, usually manifesting with hearing loss. They may also manifest with dizziness, vertigo, or imbalance. These tumors are rare and can be difficult to diagnose.

Malignant otic tumors: Basal cell and squamous cell carcinomas may arise in the ear canal. Persistent inflammation caused by chronic otitis media may predispose to the development of squamous cell carcinoma. Extensive resection is indicated, followed by radiation therapy. En bloc resection of the ear canal with sparing of the facial nerve is done when lesions are limited to the canal and have not invaded the middle ear. Deeper invasion requires a more extensive temporal bone resection.

Rarely, squamous cell carcinoma originates in the middle ear. The persistent otorrhea of chronic otitis media may be a predisposing factor. Resection of the temporal bone and postoperative radiation therapy are necessary.

Nonchromaffin paragangliomas (chemodectomas) arise in the temporal bone from glomus bodies in the jugular bulb (glomus jugulare tumors) or the medial wall of the middle ear (glomus tympanicum tumors). They appear as a pulsatile red mass in the middle ear. The first symptom often is tinnitus that is synchronous with the pulse. Hearing loss develops, followed by vertigo. Cranial nerve palsies of the 9th, 10th, or 11th nerve may accompany glomus jugulare tumors that extend through the jugular foramen. Excision is the treatment of choice; radiation is reserved for nonsurgical candidates.

Benign otic tumors: Sebaceous cysts, osteomas, and keloids may arise in and occlude the ear canal, causing retention of cerumen and conductive hearing loss. Excision is the treatment of choice for all benign otic tumors. Ceruminomas occur in the outer third of the ear canal. These tumors appear benign histologically and do not metastasize regionally or distantly, but they are locally invasive and potentially destructive and should be excised widely.
Last full review/revision July 2013 by Bradley A. Schiff, MD
Content last modified September 2013
Diagnosis of Ischemic Nephropathy

The tests used to diagnose RAS and ischemic nephropathy are performed by radiologists (experts in imaging procedures, such as x-ray). These tests determine whether there is significant renal artery stenosis (narrowing) in association with ischemic nephropathy. To definitively diagnose RAS, the radiologist performs a renal arteriogram, an imaging test that highlights the renal arteries. This is an invasive procedure and is associated with complications; therefore, many less invasive alternative tests have been developed to help identify RAS. The tests and their advantages and disadvantages follow.

Renal angiogram (arteriogram) is the "gold standard" of RAS diagnosis, but it can have adverse effects. A contrast dye is injected to obtain images of the renal arteries. Patients with normal or nearly normal kidney function are at minimal risk for kidney damage caused by the injected dye; however, the risk increases in patients whose renal function is already impaired. The greater the impairment of renal function, the greater the risk of damage. Contrast dye can cause acute tubular necrosis (ATN, the death of tubular cells), which usually is reversible (on occasion, dialysis is needed while awaiting renal recovery). Patients with advanced chronic renal failure (CRF) may progress to end-stage renal disease (ESRD) as a consequence of ATN. Another possible complication is a cholesterol emboli "shower." This is a rare, potentially devastating complication in which small pieces of cholesterol are dislodged from the walls of the arteries during the test. The emboli (clots or "plugs") get stuck in the smaller blood vessels of the kidneys and other organs, causing more ischemia (deficient blood flow) and organ damage.

Ultrasound of the kidney, with Doppler study of the renal arteries, creates images of internal organs by means of ultrasound echoes. The echoes identify tissue-density changes and compare them with the blood flow in underlying vessels.
Ultrasound of the kidney provides a useful indication of renal artery stenosis (RAS) if the test is positive. If the test is negative, however, RAS cannot be ruled out. Patients who are obese or are unable to hold their breath are not good candidates for Doppler studies. If the kidneys are asymmetrical (one kidney being significantly smaller than the other), the physician may suspect that the smaller one has atrophied (shrunk) due to prolonged hypoperfusion (poor blood flow). Kidney atrophy is a hallmark of significant RAS. A shrunken kidney usually is not worth salvaging because irreversible damage has occurred.

Renal scan with ACE challenge is a safe nuclear medicine test that helps the physician diagnose unilateral (one-sided) RAS in patients with renal vascular hypertension (high blood pressure in the kidney's blood vessels). The test relies on the difference between the kidneys' blood flow, which is accentuated by the use of an ACE (angiotensin-converting enzyme) inhibitor, an antihypertensive (blood pressure-lowering) medication. Renal scanning is not useful in patients with chronic renal failure (CRF), who have impaired function in both kidneys.

Magnetic resonance angiography (MRA) is becoming the test of choice for diagnosing RAS and ischemic nephropathy. This noninvasive test uses magnetic resonance imaging (MRI). Magnetic dye is injected into a peripheral vein, and the MRI then takes pictures of the renal arteries. This dye is not toxic to the kidneys. Because MRA is relatively new, not all institutions have technicians experienced in administering this test. Patients who are unable to hold their breath have limited results, and patients with pacemakers cannot undergo this test because the MRI scanner is a large magnet.

Ischemic Nephropathy Treatment

Significant renal artery stenosis (RAS), with or without ischemic nephropathy, presents a treatment challenge.
Patients must be evaluated individually to determine which option will provide the best possible result. Treatments include the following:

- Angioplasty alone
- Angioplasty (a catheter-based procedure to widen narrowed blood vessels) with placement of a stent (a device used to hold the artery open)
- Medical therapy
- Surgical revascularization (restoration of blood supply) to bypass the RAS

Aggressive interventions to open the narrowed artery (either surgically or with angioplasty and/or stenting) carry some degree of risk.

Ischemic Nephropathy Prognosis

RAS tends to progress in most cases. Over time, the kidney suffers reduced circulation, continues to atrophy (shrink), and loses more function. An untreated artery ultimately becomes completely occluded (blocked) in many patients.
As of late (i.e., the latter half of 2013), there've been reports of outbreaks of illnesses having serious health consequences for animals or people. I recently wrote about two of them in my petMD Daily Vet column.

Recently, reports of what seems to be an emerging virus have come from multiple states, including California, Michigan, and Ohio. As of October 3, 2013, circovirus has been confirmed in two dogs that have died in Ann Arbor, Michigan. Another four dog deaths in Ann Arbor are suspected to have been partly due to circovirus-related illness. So, let's get right down to it and discuss what is currently known about this virus.

What Species is Circovirus Known to Infect?

Circovirus is currently known to infect birds, dogs, and pigs. Infection in the pig world is quite common, as Porcine circovirus 2 can affect piglets shortly after they are weaned (cessation of nursing). Delayed growth, body tissue wasting, and death are associated with infection in pigs. Many species of birds can be infected, as circovirus causes infectious anemia in chickens and beak and feather disease in psittacines (budgies, cockatiels, finches, parakeets, and parrots).

The canine variety of circovirus, CaCV-1 strain NY214, is genetically closer to the virus infecting pigs than to the bird virus. It was first discovered during a 2012 Columbia University study (Complete Genome Sequence of the First Canine Circovirus). It was then diagnosed in a dog suffering from diarrhea and hematemesis (vomit containing blood) that had been brought in for evaluation at the University of California, Davis (UC Davis) Veterinary Medical Teaching Hospital. The virus was also discovered in the feces of 14 of 204 dogs that were not suffering from digestive tract upset, a finding that shows it can be present without causing illness.

What Are Common Clinical Signs of Circovirus Infection?
Besides the diarrhea and vomiting mentioned above, other clinical signs include:

- Decreased appetite (anorexia)
- Decreased water consumption
- Lethargy (depression, decreased moving, etc.)
- Delayed capillary refill time (the time it takes for blood to refill the gums after a finger presses out the blood; it should be less than 2 seconds: try it on your pooch)
- Pale pink mucous membranes (gums) and tongue
- Vasculitis (inflammation of blood vessels, which can manifest in skin lesions)

As there are many other causes of the same clinical signs in dogs, it is important that veterinarians don't immediately jump to the diagnosis of canine circovirus and instead consider all potential options (bacterial, parasitic, and other viral infections, toxicity, foreign body consumption, cancer, etc.) when performing their clinical workup (blood, fecal, urine, and other testing).

How is Circovirus Spread?

Circovirus is commonly spread through body fluid secretions, including those from the digestive and respiratory tracts, such as saliva, vomit, diarrhea, and nasal secretions. Infectious organisms affecting our canine (and feline) companions have a tendency to emerge in areas where populations of susceptible hosts congregate. Therefore, shelters, day care facilities, dog parks, breeding facilities, and veterinary hospitals are all sites where bacteria, fungi, parasites, and viruses can be transmitted from one animal to another.

How is the Diagnosis of Circovirus Achieved?

Circovirus diagnosis is achieved with a PCR (polymerase chain reaction) test on body tissues. If deemed appropriate, a veterinarian can perform canine circovirus testing through the MSU Diagnostic Center for Population and Animal Health. Collecting data about this emerging disease is important, so please consent to testing for circovirus should your veterinarian deem it appropriate based on clinical suspicion.

Can Circovirus Infection Be Prevented?

There is currently no vaccination available for canine circovirus.
Unfortunately, development of vaccinations takes years, and the diagnosis of circovirus in dogs is a recent event. Realistically, there may never be a vaccination available to prevent dogs from being infected with circovirus. Therefore, it's vital that owners focus on preventing infection with microorganisms instead of treating it.

Prevention comes down to using common sense and caution when planning your dog's interaction with other canines. Locations where dogs congregate can be hot zones for infection, as bacteria, viruses, and parasites are exchanged via direct contact or from body secretions. As a result, having your dog frequently spend time in these places isn't really in his best interest from a health standpoint. Dogs that do socialize with other dogs or other species should be vaccinated according to the recommendations of their veterinarians and have frequent physical examinations and diagnostic testing to monitor for the development of disease that may not be apparent to the naked eye.

Can Circovirus Be Spread to Humans?

Currently, there have been no reports of humans being infected with circovirus. Yet, as there are many zoonotic diseases (those that transmit from one species to another), including some with origins in swine and birds, the potential exists for humans to be infected with circovirus. Infection with the swine variety of circovirus is more likely than with the dog variety, as humans are genetically closer to pigs than to dogs (see Human to Pig Genome Comparison Complete).
I wrote about zoonotic transmission of a virus containing bird and pig genetic material in the following article: Swine Flu Pandemic Over But H1N1 Hybrid Virus Emerges.

You can focus on disease prevention by:

- Washing your hands frequently with soap and water
- Preventing your dog from licking your face or areas of the body having mucous membranes, such as the nose or eyes
- Scheduling a wellness examination with your veterinarian every 6 to 12 months
- Limiting your dog's access to areas well-traveled by other canines

Dr. Patrick Mahaney
- Incidence is modelled using HMIS data in Namibia.
- Assembled data include parasitologically confirmed and clinically diagnosed cases; the clinical cases are adjusted using slide positivity rates at each facility.
- The denominator catchment population is adjusted for the probability of seeking treatment when sick with fever.
- A Bayesian spatio-temporal model was implemented at facility level, adjusting for missing data using INLA.
- Spatio-temporal monthly maps of incidence are produced, along with a mean prediction for Namibia in 2009.

As malaria transmission declines, it becomes increasingly important to monitor changes in malaria incidence rather than prevalence. Here, a spatio-temporal model was used to identify constituencies with high malaria incidence to guide malaria control. Malaria cases were assembled across all age groups along with several environmental covariates. A Bayesian conditional-autoregressive model was used to model the spatial and temporal variation of incidence after adjusting for test positivity rates and health facility utilisation. Of the 144,744 malaria cases recorded in Namibia in 2009, 134,851 were suspected and 9,893 were parasitologically confirmed. The mean annual incidence based on the Bayesian model predictions was 13 cases per 1000 population, with the highest incidence predicted for constituencies bordering Angola and Zambia. The smoothed maps of incidence highlight trends in disease incidence. For Namibia, the 2009 maps provide a baseline for monitoring the targets of pre-elimination.
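The two adjustments described in the highlights above (scaling clinical cases by slide positivity and scaling the catchment denominator by treatment-seeking probability) can be sketched as a simple point estimate. This is an illustrative sketch with hypothetical numbers only; the study itself fits a full Bayesian conditional-autoregressive model in INLA rather than computing a crude rate like this.

```python
def adjusted_incidence(confirmed, clinical, slide_positivity,
                       catchment_pop, treatment_seeking):
    """Crude annual incidence per 1000 population for one facility.

    All argument values used below are hypothetical illustrations;
    the paper estimates these quantities within a Bayesian
    spatio-temporal model instead of a single point estimate.
    """
    # Scale unconfirmed clinical cases by the facility's slide positivity rate.
    estimated_cases = confirmed + clinical * slide_positivity
    # Scale the catchment population by the probability that a febrile
    # person seeks treatment at this facility.
    effective_population = catchment_pop * treatment_seeking
    return 1000.0 * estimated_cases / effective_population

# Hypothetical facility: 50 confirmed and 400 clinical cases, 10% slide
# positivity, 20,000 catchment population, 60% treatment-seeking probability.
rate = adjusted_incidence(50, 400, 0.10, 20_000, 0.60)
print(round(rate, 1))  # 7.5 cases per 1000 population
```

Applied per facility and month, estimates of this kind would form the observed data that the Bayesian model then smooths over space and time.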
Abbreviations: ACD, active case detection; CAR, conditional auto-regressive; CPO, conditional predictive ordinate; DIC, deviance information criterion; ESRI, Environmental System Research Institute; EVI, enhanced vegetation index; GF, Gaussian field; GIS, geographic information system; GMRF, Gaussian Markov random field; GPS, global positioning system; GRUMP, Global Rural and Urban Mapping Project; HMIS, Health Management Information System; INLA, Integrated Nested Laplace Approximation; JAXA, Japan Aerospace Exploration Agency; MAUP, Modifiable Areal Unit Problem; MCMC, Markov Chain Monte Carlo; MODIS, MODerate-resolution Imaging Spectro-radiometer; MoHSS, Ministry of Health and Social Services; NASA, National Aeronautics and Space Administration; NVBDCP, National Vector-Borne and Disease Control Programme; PCD, passive case detection; PHS, public health sector; RDT, Rapid Diagnostic Test; SPA, Service Provision Assessments; TRMM, Tropical Rainfall Measuring Mission; TSI, temperature suitability index; WHO, World Health Organisation; ZIP, Zero-Inflated Poisson

Keywords: Namibia; Malaria; Spatio-temporal; Conditional-autoregressive

Substantial development assistance has been directed towards reducing the high malaria burden in Malawi over the past decade. We assessed changes in transmission over this period of malaria control scale-up by compiling community Plasmodium falciparum parasite rate (PfPR) data during 2000–2011 and used model-based geostatistical methods to predict mean PfPR2–10 in 2000, 2005, and 2010. In addition, we calculated population-adjusted prevalences and populations at risk by district to inform malaria control program priority setting. The national population-adjusted PfPR2–10 was 37% in 2010, and we found no evidence of change over this period of scale-up. The entire population of Malawi is under meso-endemic transmission risk, with those in districts along the shore of Lake Malawi and the Shire River Valley at highest risk.
The lack of change in prevalence confirms modeling predictions that, compared with lower-transmission settings, prevalence reductions in high-transmission settings require greater investment and longer time scales.

The quantification of parasite movements can provide valuable information for control strategy planning across all transmission intensities. Mobile parasite-carrying individuals can instigate transmission in receptive areas, spread drug-resistant strains and reduce the effectiveness of control strategies. The identification of mobile demographic groups, their routes of travel and how these movements connect differing transmission zones potentially enables limited resources for interventions to be efficiently targeted over space, time and populations. National population censuses and household surveys provide individual-level migration, travel, and other data relevant for understanding malaria movement patterns. Together with existing spatially referenced malaria data and mathematical models, network analysis techniques were used to quantify the demographics of human and malaria movement patterns in Kenya, Uganda and Tanzania. Movement networks were developed based on connectivity and magnitudes of flow within each country and compared to assess relative differences between regions and demographic groups. Additional malaria-relevant characteristics, such as short-term travel and bed net use, were also examined. Patterns of human and malaria movements varied between demographic groups, within country regions and between countries. Migration rates were highest in 20–30 year olds in all three countries, but when accounting for malaria prevalence, movements in the 10–20 year age group became more important. Different age and sex groups also exhibited substantial variations in terms of the most likely sources, sinks and routes of migration and malaria movement, as well as risk factors for infection, such as short-term travel and bed net use.
Census and survey data, together with spatially referenced malaria data, GIS and network analysis tools, can be valuable for identifying, mapping and quantifying regional connectivities and the mobility of different demographic groups. Demographically stratified human population movement (HPM) and malaria movement estimates can provide quantitative evidence to inform the design of more efficient intervention and surveillance strategies that are targeted to specific regions and population groups.

Human movements contribute to the transmission of malaria on spatial scales that exceed the limits of mosquito dispersal. Identifying the sources and sinks of imported infections due to human travel and locating high-risk sites of parasite importation could greatly improve malaria control programs. Here we use spatially explicit mobile phone data and malaria prevalence information from Kenya to identify the dynamics of human carriers that drive parasite importation between regions. Our analysis identifies specific importation routes that contribute to malaria epidemiology on regional spatial scales.

To evaluate barriers preventing pregnant women from using insecticide-treated nets (ITN) and intermittent presumptive treatment (IPT) with sulphadoxine-pyrimethamine (SP) 5 years after the launch of the national malaria strategy promoting these measures in Kenya, all women aged 15–49 years were interviewed during a community survey in four districts between December 2006 and January 2007. Women pregnant in the last 12 months were asked about their age, parity, education, and use of nets, ITN, antenatal care (ANC) services and sulphadoxine-pyrimethamine (SP) (overall and for IPT) during pregnancy. Homestead assets were recorded and used to develop a wealth index. Travel time to ANC clinics was computed using a geographic information system algorithm. Predictors of net and IPT use were defined using multivariate logistic regression.
Overall, 68% of pregnant women used a net; 52% used an ITN; 84% attended an ANC clinic at least once and 74% at least twice. Fifty-three percent of women took at least one dose of IPT-SP; however, only 22% took two or more doses. Women from the least poor homesteads (OR = 2.53, 1.36–4.68) and those who used IPT services (OR = 1.73, 1.24–2.42) were more likely to sleep under any net. Women who used IPT were more likely to use ITNs (OR = 1.35, 1.03–1.77), while those who lived more than an hour from an ANC clinic were less likely (OR = 0.61, 0.46–0.81) to use ITNs. Women with formal education (OR = 1.47, 1.01–2.17) and those who used ITNs (OR = 1.68, 1.20–2.36) were more likely to have received at least one dose of IPT-SP. Although the use of ITNs had increased 10-fold and the use of IPT fourfold since last measured in 2001, coverage remains low. Provider practices in the delivery of protective measures against malaria must change, supported by community awareness campaigns on the importance of mothers' use of IPT.

Keywords: malaria; pregnancy; antenatal care; intermittent presumptive treatment; sulphadoxine-pyrimethamine; insecticide-treated nets; Kenya

The Millennium Development Goals (MDGs) have prompted an expansion in approaches to deriving health metrics to measure progress toward their achievement. Accurate measurements should take into account the high degrees of spatial heterogeneity in health risks across countries, and this has prompted the development of sophisticated cartographic techniques for mapping and modeling risks. Conversion of these risks to relevant population-based metrics requires equally detailed information on the spatial distribution and attributes of the denominator populations.
However, spatial information on age and sex composition over large areas is lacking, prompting many influential studies that have rigorously accounted for health risk heterogeneities to overlook the substantial demographic variations that exist subnationally and merely apply national-level adjustments. Here we outline the development of high-resolution age- and sex-structured spatial population datasets for Africa in 2000–2015, built from over a million measurements from more than 20,000 subnational units, increasing input data detail over previous studies by more than 400-fold. We analyze the large spatial variations seen within countries and across the continent for key MDG indicator groups, focusing on children under 5 and women of childbearing age, and find that substantial differences in health and development indicators can result from using only national-level statistics, compared with accounting for subnational variation. Progress toward meeting the MDGs will be measured through national-level indicators that mask substantial inequalities and heterogeneities across nations. Cartographic approaches are providing opportunities for quantitative assessments of these inequalities and the targeting of interventions, but demographic spatial datasets to support such efforts remain reliant on coarse and outdated input data for accurately locating risk groups. We have shown here that sufficient data exist to map the distribution of key vulnerable groups, and that doing so has substantial impacts on derived metrics through accounting for spatial demographic heterogeneities that exist within nations across Africa.

Keywords: Population; Demography; Mapping; Millennium Development Goals

Historical evidence of the levels of intervention scale-up and its relationship to changing malaria risks provides important contextual information for current ambitions to eliminate malaria in various regions of Africa today.
Community-based Plasmodium falciparum prevalence data from 3,260 geo-coded time-space locations between 1969 and 1992 were assembled from archives covering an examination of 230,174 individuals located in northern Namibia. These data were standardized to the age range 2 to less than 10 years and used within a Bayesian model-based geo-statistical framework to examine changes in malaria risk in the years 1969, 1974, 1979, 1984 and 1989 at 5×5 km spatial resolution. This changing risk was described against rainfall seasons and the wide-scale use of indoor-residual house-spraying and mass drug administration. Most areas of Northern Namibia experienced low intensity transmission during a ten-year period of wide-scale control activities between 1969 and 1979. As control efforts waned, flooding occurred, drug resistance emerged and the war for independence intensified, the spatial extent of moderate-to-high malaria transmission expanded, reaching a peak in the late 1980s. Targeting vectors and parasites in northern Namibia was likely to have successfully sustained a situation of low intensity transmission, but this unraveled quickly to a peak of transmission intensity following a sequence of events by the early 1990s. Many patients with suspected malaria in sub-Saharan Africa seek treatment from private providers, but this sector suffers from sub-standard medicine dispensing practices. To improve the quality of care received for presumptive malaria from the highly accessed private retail sector in western Kenya, subsidized pre-packaged artemether-lumefantrine (AL) was provided to private retailers, together with a one day training for retail staff on malaria diagnosis and treatment, job aids and community engagement activities. The intervention was assessed using a cluster-randomized, controlled design. Provider and mystery-shopper cross-sectional surveys were conducted at baseline and eight months post-intervention to assess provider practices.
Data were analysed based on cluster-level summaries, comparing control and intervention arms. On average, 564 retail outlets were interviewed per year. At follow-up, 43% of respondents reported that at least one staff member had attended the training in the intervention arm. The intervention significantly increased the percentage of providers knowing the first line treatment for uncomplicated malaria by 24.2% points (confidence interval (CI): 14.8%, 33.6%; adjusted p=0.0001); the percentage of outlets stocking AL by 31.7% points (CI: 22.0%, 41.3%; adjusted p=0.0001); and the percentage of providers prescribing AL for presumptive malaria by 23.6% points (CI: 18.7%, 28.6%; adjusted p=0.0001). Generally outlets that received training and job aids performed better than those receiving one or none of these intervention components. Overall, subsidizing ACT and retailer training can significantly increase the percentage of outlets stocking and selling AL for the presumptive treatment of malaria, but further research is needed on strategies to improve the provision of counselling advice to retail customers. Community case-management of malaria; Artemisinin-based combination therapy; Antimalarial subsidy programme; Drug retailers Rational decision making on malaria control depends on an understanding of the epidemiological risks and control measures. National Malaria Control Programmes across Africa have access to a range of state-of-the-art malaria risk mapping products that might serve their decision-making needs. The use of cartography in planning malaria control has never been methodically reviewed. Materials and Methods An audit of the risk maps used by NMCPs in 47 malaria endemic countries in Africa was undertaken by examining the most recent national malaria strategies, monitoring and evaluation plans, malaria programme reviews and applications submitted to the Global Fund. 
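The trial above analyses outcomes from cluster-level summaries rather than individual observations. A minimal sketch of that approach, with hypothetical per-cluster percentages (the raw data are not reproduced here), is:

```python
def arm_difference(intervention, control):
    """Percentage-point difference between trial arms, computed from
    cluster-level summaries (each value is one cluster's % of outlets
    meeting the outcome, e.g. stocking AL)."""
    mean_i = sum(intervention) / len(intervention)
    mean_c = sum(control) / len(control)
    return mean_i - mean_c

# Hypothetical cluster summaries (% of outlets stocking AL per cluster)
print(arm_difference([55, 60, 48, 62], [25, 30, 28, 33]))
```

Working at the cluster level respects the unit of randomization; inference on the difference (the reported CIs and adjusted p-values) would then be based on the between-cluster variability of these summaries.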
The types of maps presented, and how they have been used to define priorities for investment and control, were investigated. Ninety-one percent of endemic countries in Africa have defined malaria risk at sub-national levels using at least one risk map. The range of risk maps varies from maps based on suitability of climate for transmission, predicted malaria seasons and temperature/altitude limitations, to representations of clinical data and modelled parasite prevalence. The choice of maps is influenced by the source of the information. Maps developed using national data through in-country research partnerships have greater utility than more readily accessible web-based options developed without inputs from national control programmes. Although almost all countries have stratification maps, only a few use them to guide decisions on the selection of interventions and the allocation of resources for malaria control. The way information on the epidemiology of malaria is presented and used needs to be addressed to ensure evidence-based added value in planning control. The science on modelled impact of interventions must be integrated into new mapping products to allow a translation of risk into rational decision making for malaria control. As overseas and domestic funding diminishes, strategic planning will be necessary to guide appropriate financing for malaria control. Malaria rapid diagnostic tests (RDTs) are known to yield false-positive results, and their use in epidemiologic surveys will overestimate infection prevalence and potentially hinder efficient targeting of interventions. To examine the consequences of using RDTs in school surveys, we compared three RDT brands used during a nationwide school survey in Kenya with expert microscopy and investigated the cost implications of using alternative diagnostic approaches in identifying localities with differing levels of infection. Overall, RDT sensitivity was 96.1% and specificity was 70.8%.
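A sensitivity of 96.1% combined with a specificity of only 70.8% means RDT-positive fractions overstate true prevalence, especially where transmission is low. The standard Rogan–Gladen estimator (a textbook correction, not necessarily the exact method used in the study) backs out true prevalence from the apparent test-positive fraction:

```python
def rogan_gladen(apparent, sens, spec):
    """Correct an apparent (test-positive) prevalence for imperfect
    sensitivity and specificity:
        p = (apparent + spec - 1) / (sens + spec - 1)
    The result is clipped to [0, 1]."""
    p = (apparent + spec - 1) / (sens + spec - 1)
    return min(max(p, 0.0), 1.0)

# RDT performance reported in the survey: sens 96.1%, spec 70.8%.
# An apparent RDT positivity of 35% implies a much lower true prevalence:
print(rogan_gladen(0.35, 0.961, 0.708))
```

With these operating characteristics an apparent prevalence of 35% corresponds to a true prevalence of under 9%, which is why uncorrected RDT results misclassify schools in the low-prevalence categories.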
In terms of classifying districts and schools according to prevalence categories, RDTs were most reliable for the < 1% and > 40% categories and least reliable in the 1–4.9% category. In low-prevalence settings, microscopy was the most expensive approach, and RDT results corrected by either microscopy or polymerase chain reaction were the cheapest. Use of polymerase chain reaction-corrected RDT results is recommended in school malaria surveys, especially in settings with low-to-moderate malaria transmission. Evidence shows that malaria risk maps are rarely tailored to address national control program ambitions. Here, we generate a malaria risk map adapted for malaria control in Sudan. Community Plasmodium falciparum parasite rate (PfPR) data from 2000 to 2010 were assembled and were standardized to 2–10 years of age (PfPR2–10). Space-time Bayesian geostatistical methods were used to generate a map of malaria risk for 2010. Surfaces of aridity, urbanization, irrigation schemes, and refugee camps were combined with the PfPR2–10 map to tailor the epidemiological stratification for appropriate intervention design. In 2010, a majority of the geographical area of the Sudan had risk of < 1% PfPR2–10. Areas of meso- and hyperendemic risk were located in the south. About 80% of Sudan’s population in 2011 lived in desert areas, urban centers, or areas where risk was < 1% PfPR2–10. Aggregated data suggest declining risks in some high-transmission areas since the 1960s. Understanding the historical, temporal changes of malaria risk following control efforts in Africa provides a unique insight into what has been and might be achieved towards a long-term ambition of elimination on the continent. Here, we use archived published and unpublished material combined with biological constraints on transmission accompanied by a narrative on malaria control to document the changing incidence of malaria in Africa since earliest reports pre-second World War.
One result is a more informed mapped definition of the changing margins of transmission in 1939, 1959, 1979, 1999 and 2009. Recent increases in funding for malaria control have led to the reduction in transmission in many malaria endemic countries, prompting the national control programmes of 36 malaria endemic countries to set elimination targets. Accounting for human population movement (HPM) in planning for control, elimination and post-elimination surveillance is important, as evidenced by previous elimination attempts that were undermined by the reintroduction of malaria through HPM. Strategic control and elimination planning, therefore, requires quantitative information on HPM patterns and the translation of these into parasite dispersion. HPM patterns and the risk of malaria vary substantially across spatial and temporal scales, demographic and socioeconomic sub-groups, and motivation for travel, so multiple data sets are likely required for quantification of movement. While existing studies based on mobile phone call record data combined with malaria transmission maps have begun to address within-country HPM patterns, other aspects remain poorly quantified despite their importance in accurately gauging malaria movement patterns and building control and detection strategies, such as cross-border HPM, demographic and socioeconomic stratification of HPM patterns, forms of transport, personal malaria protection and other factors that modify malaria risk. A wealth of data exist to aid filling these gaps, which, when combined with spatial data on transport infrastructure, traffic and malaria transmission, can answer relevant questions to guide strategic planning.
This review aims to (i) discuss relevant types of HPM across spatial and temporal scales, (ii) document where datasets exist to quantify HPM, (iii) highlight where data gaps remain and (iv) briefly put forward methods for integrating these datasets in a Geographic Information System (GIS) framework for analysing and modelling human population and Plasmodium falciparum malaria infection movements. This paper presents an appraisal of satellite imagery types and texture measures for identifying and delineating settlements in four Districts of Kenya chosen to represent the variation in human ecology across the country. Landsat Thematic Mapper (TM) and Japanese Earth Resources Satellite-1 (JERS-1) synthetic aperture radar (SAR) imagery of the four districts were obtained and supervised per-pixel classifications of image combinations tested for their efficacy at settlement delineation. Additional data layers including human population census data, land cover, and locations of medical facilities, villages, schools and market centres were used for training site identification and validation. For each district, the most accurate approach was determined through the best correspondence with known settlement and non-settlement pixels. The resulting settlement maps will be used in combination with census data to produce medium spatial resolution population maps for improved public health planning in Kenya. Landsat TM; JERS-1 SAR; texture; settlement mapping; Kenya; public health The spatial distribution of populations and settlements across a country and their interconnectivity and accessibility from urban areas are important for delivering healthcare, distributing resources and economic development. However, existing spatially explicit population data across Africa are generally based on outdated, low resolution input demographic data, and provide insufficient detail to quantify rural settlement patterns and, thus, accurately measure population concentration and accessibility. 
Here we outline approaches to developing a new high resolution population distribution dataset for Africa and analyse rural accessibility to population centers. Contemporary population count data were combined with detailed satellite-derived settlement extents to map population distributions across Africa at a finer spatial resolution than ever before. Substantial heterogeneity in settlement patterns, population concentration and spatial accessibility to major population centres is exhibited across the continent. In Africa, 90% of the population is concentrated in less than 21% of the land surface and the average per-person travel time to settlements of more than 50,000 inhabitants is around 3.5 hours, with Central and East Africa displaying the longest average travel times. The analyses highlight large inequities in access, the isolation of many rural populations and the challenges that exist between countries and regions in providing access to services. The datasets presented are freely available as part of the AfriPop project, providing an evidence base for guiding strategic decisions. Health care utilization is affected by several factors including geographic accessibility. Empirical data on utilization of health facilities is important to understanding geographic accessibility and defining health facility catchments at a national level. Accurately defining catchment population improves the analysis of gaps in access, commodity needs and interpretation of disease incidence. Here, empirical household survey data on treatment seeking for fever were used to model the utilisation of public health facilities and define their catchment areas and populations in northern Namibia. This study uses data from the Malaria Indicator Survey (MIS) of 2009 on treatment seeking for fever among children under the age of five years to characterize facility utilisation. 
Probability of attendance of public health facilities for fever treatment was modelled against a theoretical surface of travel times using a three-parameter logistic model. The fitted model was then applied to a population surface to predict the number of children likely to use a public health facility during an episode of fever in northern Namibia. Overall, from the MIS survey, the prevalence of fever among children was 17.6% [95% CI: 16.0–19.1] (401 of 2,283 children) while public health facility attendance for fever was 51.1% [95% CI: 46.2–56.0]. The coefficients of the logistic model of travel time against fever treatment at public health facilities were all significant (p < 0.001). From this model, probability of facility attendance remained relatively high up to 180 minutes (3 hours) and thereafter decreased steadily. Total public health facility catchment population of children under the age of five was estimated to be 162,286 in northern Namibia with an estimated fever burden of 24,830 children. Of the estimated fevers, 8,021 (32.3%) were within 30 minutes of travel time to the nearest health facility while 14,902 (60.0%) were within 1 hour. This study demonstrates the potential of routine household surveys to empirically model health care utilisation for the treatment of childhood fever and define catchment populations, enhancing the possibilities of accurate commodity needs assessment and calculation of disease incidence. These methods could be extended to other African countries where detailed mapping of health facilities exists. Namibia; Fevers; Treatment; Spatial; Utilisation; Malaria On the 4th July 2002 a leading national newspaper in Kenya, the Daily Nation, ran the headline ‘Minister sounds alert on malaria’ in an article declaring the onset of epidemics in the highlands of western Kenya. There followed frequent media coverage with quotes from district leaders on the numbers of deaths, and editorials on the failure of the national malaria control strategy.
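A three-parameter logistic of attendance against travel time, as used in the Namibia utilisation study above, can be sketched as follows. The parameter values here are hypothetical (the fitted coefficients are not reproduced in the abstract), chosen only to mimic the reported shape: attendance staying relatively high out to about 180 minutes and then declining steadily.

```python
import math

def attendance_prob(t, p_max, t_mid, k):
    """Three-parameter logistic decay of public-facility attendance
    with travel time t (minutes): p_max is attendance at t = 0,
    t_mid the time of half-maximal attendance, k the steepness."""
    return p_max / (1.0 + math.exp(k * (t - t_mid)))

# Hypothetical parameters: p_max=0.85, midpoint 240 min, steepness 0.02
for t in (30, 180, 360):
    print(t, round(attendance_prob(t, 0.85, 240, 0.02), 3))
```

Multiplying this probability surface cell-by-cell against a gridded population of under-fives (and a fever rate) yields the catchment and burden estimates the study reports.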
The Ministry of Health made immediate and radical changes to national policy on treatment costs in the highlands by suspending cost-sharing. Development partners and non-governmental organisations also responded with a large increase in the distribution of commodities (approximately US$ 500 000) to support preventative strategies across the western highland region. What was conspicuous by its absence was any obvious effort to predict the epidemics in advance of press coverage. There has been considerable debate on the existence of trends in climate in the highlands of East Africa and hypotheses about their potential effect on the trends in malaria in the region. We apply a new robust trend test to mean temperature time series data from three editions of the University of East Anglia's Climatic Research Unit database (CRU TS) for several relevant locations. We find significant trends in the data extracted from newer editions of the database but not in the older version for periods ending in 1996. The trends in the newer data are even more significant when post-1996 data are added to the samples. We also test for trends in the data from the Kericho meteorological station prepared by Omumbo et al. We find no significant trend in the 1979-1995 period but a highly significant trend in the full 1979-2009 sample. However, although the malaria cases observed at Kericho, Kenya rose during a period of resurgent epidemics (1994-2002) they have since returned to a low level. A large assembly of parasite rate surveys from the region, stratified by altitude, show that this decrease in malaria prevalence is not limited to Kericho. An understanding of spatial patterns of health facility use allows a more informed approach to the modelling of catchment populations. In the absence of patient use data, an intuitive and commonly used approach to the delineation of facility catchment areas is Thiessen polygons. 
This study presents a series of methods by which the validity of these assumptions can be tested directly and hence the suitability of a Thiessen polygon catchment model explicitly assessed. These methods are applied to paediatric out-patient origin data from a sample of 81 government health facilities in four districts of Kenya. A geographical information system was used to predict the location of the catchment boundary along a transect between each pair of neighbouring facilities based on patient choice patterns. The mean location of boundaries between facilities of different type was found to be significantly displaced from the Thiessen boundary towards the lower-order facility. The effect of distance on within-catchment utilization rate was assessed by using exclusion buffers to remove the effect of neighbouring facilities. Utilization rate was found to exhibit a slight but steady decrease with distance up to 6 km from a facility. The accuracy of the future modelling of unsampled facility catchments can be increased by the incorporation of these trends. Health services; Fevers; Thiessen polygons; Utilization rate; Kenya Our aim was to assess whether a combination of seasonal climate forecasts, monitoring of meteorological conditions, and early detection of cases could have helped to prevent the 2002 malaria emergency in the highlands of western Kenya. Seasonal climate forecasts did not anticipate the heavy rainfall. Rainfall data gave timely and reliable early warnings, but monthly surveillance of malaria out-patients gave no effective alarm, though it did help to confirm that normal rainfall conditions in Kisii Central and Gucha led to typical resurgent outbreaks whereas exceptional rainfall in Nandi and Kericho led to true malaria epidemics.
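The Thiessen polygon model discussed above amounts to assigning every location to its nearest facility: the set of points nearest a given facility is exactly its Thiessen (Voronoi) catchment. A minimal nearest-facility sketch, with hypothetical coordinates, is:

```python
import math

def thiessen_assign(points, facilities):
    """Assign each location to the index of its nearest facility.
    The locations assigned to one facility form its Thiessen catchment."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return [min(range(len(facilities)),
                key=lambda i: dist(p, facilities[i])) for p in points]

# Hypothetical village and facility coordinates (km)
facilities = [(0.0, 0.0), (10.0, 0.0)]
villages = [(2.0, 1.0), (7.0, 0.5), (4.9, 0.0)]
print(thiessen_assign(villages, facilities))  # -> [0, 1, 0]
```

The study's finding that real boundaries shift towards lower-order facilities suggests a refinement: weighting each distance by facility type rather than using raw Euclidean distance.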
Management of malaria in the highlands, including improved planning for the annual resurgent outbreak, augmented by simple central nationwide early warning, represents a feasible strategy for increasing epidemic preparedness in Kenya. The aim of this review was to use geographic information systems in combination with historical maps to quantify the anthropogenic impact on the distribution of malaria in the 20th century. The nature of the cartographic record enabled global and regional patterns in the spatial limits of malaria to be investigated at six intervals between 1900 and 2002. Contemporaneous population surfaces also allowed changes in the numbers of people living in areas of malaria risk to be quantified. These data showed that during the past century, despite human activities reducing by half the land area supporting malaria, demographic changes resulted in a 2 billion increase in the total population exposed to malaria risk. Furthermore, stratifying the present day malaria extent by endemicity class and examining regional differences highlighted that nearly 1 billion people are exposed to hypoendemic and mesoendemic malaria in southeast Asia. We further concluded that some distortion in estimates of the regional distribution of malaria burden could have resulted from different methods used to calculate burden in Africa. Crude estimates of the national prevalence of Plasmodium falciparum infection based on endemicity maps corroborate these assertions. Finally, population projections for 2010 were used to investigate the potential effect of future demographic changes. These indicated that although population growth will not substantially change the regional distribution of people at malaria risk, around 400 million births will occur within the boundary of current distribution of malaria by 2010: the date by which the Roll Back Malaria initiative is challenged to halve the world’s malaria burden. 
Interest in mapping the global distribution of malaria is motivated by a need to define populations at risk for appropriate resource allocation [1,2] and to provide a robust framework for evaluating its global economic impact [3,4]. Comparison of older [5–7] and more recent [1,4] malaria maps shows how the disease has been geographically restricted, but it remains entrenched in poor areas of the world with climates suitable for transmission. Here we provide an empirical approach to estimating the number of clinical events caused by Plasmodium falciparum worldwide, by using a combination of epidemiological, geographical and demographic data. We estimate that there were 515 (range 300–660) million episodes of clinical P. falciparum malaria in 2002. These global estimates are up to 50% higher than those reported by the World Health Organization (WHO) and 200% higher for areas outside Africa, reflecting the WHO’s reliance upon passive national reporting for these countries. Without an informed understanding of the cartography of malaria risk, the global extent of clinical disease caused by P. falciparum will continue to be underestimated. Abdisalan Noor discusses new research in PLoS Medicine that used model-based geostatistics to investigate the risks of anemia among preschool-aged children in West Africa that were attributable to malnutrition, malaria, and helminth infections.
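The burden-estimation approach described above multiplies populations at risk by class-specific clinical incidence and sums over endemicity classes. A toy version, with entirely hypothetical populations and rates (the paper's actual inputs are far more detailed and spatially resolved), is:

```python
# Hypothetical populations (millions) and annual clinical P. falciparum
# incidence rates per person, by endemicity class -- illustrative only
classes = {
    "hypoendemic":  {"pop_m": 1000, "rate": 0.05},
    "mesoendemic":  {"pop_m": 500,  "rate": 0.30},
    "hyperendemic": {"pop_m": 300,  "rate": 0.90},
}

episodes_m = sum(c["pop_m"] * c["rate"] for c in classes.values())
print(episodes_m)  # million clinical episodes per year
```

Because the totals are dominated by where people live within each endemicity class, errors in the underlying cartography propagate directly into the burden estimate, which is the paper's central argument against passive national reporting.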
Bedsores — also called pressure sores or pressure ulcers — are injuries to skin and underlying tissue resulting from prolonged pressure on the skin. Bedsores most often develop on skin that covers bony areas of the body, such as the heels, ankles, hips and tailbone. People most at risk of bedsores are those with a medical condition that limits their ability to change positions, requires them to use a wheelchair or confines them to a bed for a long time. Bedsores can develop quickly and are often difficult to treat. Several things can help prevent some bedsores and help with healing. Bedsores fall into one of four stages based on their severity. The National Pressure Ulcer Advisory Panel, a professional organization that promotes the prevention and treatment of pressure ulcers, defines each stage as follows: The beginning stage of a pressure sore has the following characteristics: - The skin is not broken. - The skin appears red on people with lighter skin color, and the skin doesn't briefly lighten (blanch) when touched. - On people with darker skin, the skin may show discoloration, and it doesn't blanch when touched. - The site may be tender, painful, firm, soft, warm or cool compared with the surrounding skin. At stage II: - The outer layer of skin (epidermis) and part of the underlying layer of skin (dermis) is damaged or lost. - The wound may be shallow and pinkish or red. - The wound may look like a fluid-filled blister or a ruptured blister. At stage III, the ulcer is a deep wound: - The loss of skin usually exposes some fat. - The ulcer looks crater-like. - The bottom of the wound may have some yellowish dead tissue. - The damage may extend beyond the primary wound below layers of healthy skin. A stage IV ulcer shows large-scale loss of tissue: - The wound may expose muscle, bone or tendons. - The bottom of the wound likely contains dead tissue that's yellowish or dark and crusty. 
- The damage often extends beyond the primary wound below layers of healthy skin. A pressure ulcer is considered unstageable if its surface is covered with yellow, brown, black or dead tissue. It’s not possible to see how deep the wound is. Deep tissue injury A deep tissue injury may have the following characteristics: - The skin is purple or maroon but the skin is not broken. - A blood-filled blister is present. - The area is painful, firm or mushy. - The area is warm or cool compared with the surrounding skin. - In people with darker skin, a shiny patch or a change in skin tone may develop. Common sites of pressure sores For people who use a wheelchair, pressure sores often occur on skin over the following sites: - Tailbone or buttocks - Shoulder blades and spine - Backs of arms and legs where they rest against the chair For people who are confined to a bed, common sites include the following: - Back or sides of the head - Rim of the ears - Shoulders or shoulder blades - Hip, lower back or tailbone - Heels, ankles and skin behind the knees When to see a doctor If you notice early signs or symptoms of a pressure ulcer, change your position to relieve the pressure on the area. If you don't see improvement in 24 to 48 hours, contact your doctor. Seek immediate medical care if you show signs of infection, such as fever, drainage or a foul odor from a sore, or increased heat and redness in the surrounding skin. Bedsores are caused by pressure against the skin that limits blood flow to the skin and nearby tissues. Other factors related to limited mobility can make the skin vulnerable to damage and contribute to the development of pressure sores. Three primary contributing factors are: - Sustained pressure. When your skin and the underlying tissues are trapped between bone and a surface such as a wheelchair or a bed, the pressure may be greater than the pressure of the blood flowing in the tiny vessels (capillaries) that deliver oxygen and other nutrients to tissues.
Without these essential nutrients, skin cells and tissues are damaged and may eventually die. This kind of pressure tends to happen in areas that aren't well-padded with muscle or fat and that lie over a bone, such as your spine, tailbone, shoulder blades, hips, heels and elbows. - Friction. Friction is the resistance to motion. It may occur when the skin is dragged across a surface, such as when you change position or a care provider moves you. The friction may be even greater if the skin is moist. Friction may make fragile skin more vulnerable to injury. - Shear. Shear occurs when two surfaces move in the opposite direction. For example, when a hospital bed is elevated at the head, you can slide down in bed. As the tailbone moves down, the skin over the bone may stay in place — essentially pulling in the opposite direction. This motion may injure tissue and blood vessels, making the site more vulnerable to damage from sustained pressure. People are at risk of developing pressure sores if they have difficulty moving and are unable to easily change position while seated or in bed. Immobility may be due to: - Generally poor health or weakness - Injury or illness that requires bed rest or wheelchair use - Recovery after surgery Other factors that increase the risk of pressure sores include: - Age. The skin of older adults is generally more fragile, thinner, less elastic and drier than the skin of younger adults. Also, older adults usually produce new skin cells more slowly. These factors make skin vulnerable to damage. - Lack of sensory perception. Spinal cord injuries, neurological disorders and other conditions can result in a loss of sensation. An inability to feel pain or discomfort can result in not being aware of bedsores or the need to change position. - Weight loss. Weight loss is common during prolonged illnesses, and muscle atrophy and wasting are common in people with paralysis. 
The loss of fat and muscle results in less cushioning between bones and a bed or a wheelchair. - Poor nutrition and hydration. People need enough fluids, calories, protein, vitamins and minerals in their daily diet to maintain healthy skin and prevent the breakdown of tissues. - Excess moisture or dryness. Skin that is moist from sweat or lack of bladder control is more likely to be injured and increases the friction between the skin and clothing or bedding. Very dry skin increases friction as well. - Bowel incontinence. Bacteria from fecal matter can cause serious local infections and lead to life-threatening infections affecting the whole body. - Medical conditions affecting blood flow. Health problems that can affect blood flow, such as diabetes and vascular disease, increase the risk of tissue damage. - Smoking. Smoking reduces blood flow and limits the amount of oxygen in the blood. Smokers tend to develop more-severe wounds, and their wounds heal more slowly. - Limited alertness. People whose mental awareness is lessened by disease, trauma or medications may be unable to take the actions needed to prevent or care for pressure sores. - Muscle spasms. People who have frequent muscle spasms or other involuntary muscle movement may be at increased risk of pressure sores from frequent friction and shearing. Complications of pressure ulcers include: - Sepsis. Sepsis occurs when bacteria enter the bloodstream through broken skin and spread throughout the body. It's a rapidly progressing, life-threatening condition that can cause organ failure. - Cellulitis. Cellulitis is an infection of the skin and connected soft tissues. It can cause severe pain, redness and swelling. People with nerve damage often do not feel pain with this condition. Cellulitis can lead to life-threatening complications. - Bone and joint infections. An infection from a pressure sore can burrow into joints and bones. Joint infections (septic arthritis) can damage cartilage and tissue. 
Bone infections (osteomyelitis) may reduce the function of joints and limbs. Such infections can lead to life-threatening complications. - Cancer. Another complication is the development of a type of squamous cell carcinoma that develops in chronic, nonhealing wounds (Marjolin ulcer). This type of cancer is aggressive and usually requires surgery. Evaluating a bedsore To evaluate a bedsore, your doctor will: - Determine the size and depth of the ulcer - Check for bleeding, fluids or debris in the wound that can indicate severe infection - Try to detect odors indicating an infection or dead tissue - Check the area around the wound for signs of spreading tissue damage or infection - Check for other pressure sores on the body Questions from the doctor - When did the pressure sore first appear? - What is the degree of pain? - Have you had pressure sores in the past? - How were they managed, and what was the outcome of treatment? - What kind of care assistance is available to you? - What is your routine for changing positions? - What medical conditions have you been diagnosed with, and what is your current treatment? - What is your normal daily diet? - How much water and other fluids do you drink each day? Your doctor may order the following tests: - Blood tests to check your health - Tissue cultures to diagnose a bacterial or fungal infection in a wound that doesn't heal with treatment or is already at stage IV - Tissue cultures to check for cancerous tissue in a chronic, nonhealing wound Stage I and II bedsores usually heal within several weeks to months with conservative care of the wound and ongoing, appropriate general care. Stage III and IV bedsores are more difficult to treat. Addressing the many aspects of wound care usually requires a multidisciplinary approach. 
Members of your care team may include: - A primary care physician who oversees the treatment plan - A physician specializing in wound care - Nurses or medical assistants who provide both care and education for managing wounds - A social worker who helps you or your family access appropriate resources and addresses emotional concerns related to long-term recovery - A physical therapist who helps with improving mobility - A dietitian who monitors your nutritional needs and recommends an appropriate diet - A neurosurgeon, orthopedic surgeon or plastic surgeon, depending on whether you need surgery and what type The first step in treating a bedsore is reducing the pressure that caused it. Strategies include the following: Repositioning. If you have a pressure sore, you need to be repositioned regularly and placed in correct positions. If you use a wheelchair, try shifting your weight every 15 minutes or so. Ask for help with repositioning every hour. If you're confined to a bed, change positions every two hours. If you have enough upper body strength, try repositioning yourself using a device such as a trapeze bar. Caregivers can use bed linens to help lift and reposition you. This can reduce friction and shearing. - Using support surfaces. Use a mattress, bed and special cushions that help you lie in an appropriate position, relieve pressure on any sores and protect vulnerable skin. If you are in a wheelchair, use a cushion. Styles include foam, air filled and water filled. Select one that suits your condition, body type and mobility. Cleaning and dressing wounds Care that helps with healing of the wound includes the following: - Cleaning. It's essential to keep wounds clean to prevent infection. If the affected skin is not broken (a stage I wound), gently wash it with water and mild soap and pat dry. Clean open sores with a saltwater (saline) solution each time the dressing is changed. Applying dressings. 
A dressing promotes healing by keeping a wound moist, creating a barrier against infection and keeping the surrounding skin dry. Dressing choices include films, gauzes, gels, foams and treated coverings. A combination of dressings may be used. Your doctor selects a dressing based on a number of factors, such as the size and severity of the wound, the amount of discharge, and the ease of placing and removing the dressing. Removing damaged tissue To heal properly, wounds need to be free of damaged, dead or infected tissue. Removing this tissue (debridement) is accomplished with a number of methods, depending on the severity of the wound, your overall condition and the treatment goals. - Surgical debridement involves cutting away dead tissue. - Mechanical debridement loosens and removes wound debris. This may be done with a pressurized irrigation device, low-frequency mist ultrasound or specialized dressings. - Autolytic debridement enhances the body's natural process of using enzymes to break down dead tissue. This method may be used on smaller, uninfected wounds and involves special dressings to keep the wound moist and clean. - Enzymatic debridement involves applying chemical enzymes and appropriate dressings to break down dead tissue. Other interventions that may be used are: - Pain management. Pressure ulcers can be painful. Nonsteroidal anti-inflammatory drugs — such as ibuprofen (Motrin IB, Advil, others) and naproxen (Aleve, others) — may reduce pain. These may be very helpful before or after repositioning, debridement procedures and dressing changes. Topical pain medications also may be used during debridement and dressing changes. - Antibiotics. Infected pressure sores that aren't responding to other interventions may be treated with topical or oral antibiotics. - A healthy diet. To promote wound healing, your doctor or dietitian may recommend an increase in calories and fluids, a high-protein diet, and an increase in foods rich in vitamins and minerals. 
You may be advised to take dietary supplements, such as vitamin C and zinc. - Management of incontinence. Urinary or bowel incontinence may cause excess moisture and bacteria on the skin, increasing the risk of infection. Managing incontinence may help improve healing. Strategies include frequently scheduled help with urinating, frequent diaper changes, protective lotions on healthy skin, and urinary catheters or rectal tubes. - Muscle spasm relief. Spasm-related friction or shearing can cause or worsen bedsores. Muscle relaxants — such as diazepam (Valium), tizanidine (Zanaflex), dantrolene (Dantrium) and baclofen (Gablofen, Lioresal) — may inhibit muscle spasms and help sores heal. - Negative pressure therapy (vacuum-assisted closure, or VAC). This therapy uses a device that applies suction to a clean wound. It may help healing in some types of pressure sores. A pressure sore that fails to heal may require surgery. The goals of surgery include improving the hygiene and appearance of the sore, preventing or treating infection, reducing fluid loss through the wound, and lowering the risk of cancer. If you need surgery, the type of procedure depends mainly on the location of the wound and whether it has scar tissue from a previous operation. In general, most pressure sores are repaired using a pad of your muscle, skin or other tissue to cover the wound and cushion the affected bone (flap reconstruction). Treating and preventing pressure sores is demanding on you, your family members and caregivers. Issues that may need to be addressed by your doctor, the nursing staff and a social worker include the following: - Community services. A social worker can help identify community groups that provide services, education and support for people dealing with long-term caregiving or terminal illnesses. - End-of-life care. 
When someone is approaching death, physicians and nurses specializing in end-of-life care (palliative care) can help a patient and his or her family determine treatment goals. At this time, goals may include managing pain and providing comfort. - Residential care. People with limited mobility who live in residential or nursing care facilities are at increased risk of developing pressure sores. Family and friends of people living in these facilities can be advocates for the residents and work with nursing staff to ensure proper preventive care. Bedsores are easier to prevent than to treat, but that doesn't mean the process is easy or uncomplicated. And wounds may still develop with consistent, appropriate preventive care. Your doctor and other members of the care team can help develop a good strategy, whether it's personal care with at-home assistance, professional care in a hospital or some other situation. Position changes are key to preventing pressure sores. These changes need to be frequent, repositioning needs to avoid stress on the skin, and body positions need to minimize pressure on vulnerable areas. Other strategies include taking good care of your skin, maintaining good nutrition, quitting smoking and exercising daily. Repositioning in a wheelchair Consider the following recommendations related to repositioning in a wheelchair: - Shift your weight frequently. If you use a wheelchair, try shifting your weight about every 15 minutes. Ask for help with repositioning about once an hour. - Lift yourself, if possible. If you have enough upper body strength, do wheelchair pushups — raising your body off the seat by pushing on the arms of the chair. - Look into a specialty wheelchair. Some wheelchairs allow you to tilt them, which can relieve pressure. - Select a cushion that relieves pressure. Use cushions to relieve pressure and help ensure your body is well-positioned in the chair. Various cushions are available, such as foam, gel, water filled and air filled. 
A physical therapist can advise you on how to place them and their role in regular repositioning. Repositioning in a bed Consider the following recommendations when repositioning in a bed: - Reposition yourself frequently. Change your body position every two hours. - Look into devices to help you reposition. If you have enough upper body strength, try repositioning yourself using a device such as a trapeze bar. Caregivers can use bed linens to help lift and reposition you. This can reduce friction and shearing. - Try a specialized mattress. Use special cushions, a foam mattress pad, an air-filled mattress or a water-filled mattress to help with positioning, relieving pressure and protecting vulnerable areas. Your doctor or other care team members can recommend an appropriate mattress or surface. - Adjust the elevation of your bed. If your hospital bed can be elevated at the head, raise it no more than 30 degrees. This helps prevent shearing. - Use cushions to protect bony areas. Protect bony areas with proper positioning and cushioning. Rather than lying directly on a hip, lie at an angle with cushions supporting the back or front. You can also use cushions to relieve pressure against and between the knees and ankles. You can cushion or "float" your heels with cushions below the calves. Protecting and monitoring the condition of your skin is important for preventing pressure sores and identifying stage I sores early so that you can treat them before they worsen. - Clean the affected skin. Clean the skin with mild soap and warm water or a no-rinse cleanser. Gently pat dry. - Protect the skin. Use talcum powder to protect skin vulnerable to excess moisture. Apply lotion to dry skin. Change bedding and clothing frequently. Watch for buttons on the clothing and wrinkles in the bedding that irritate the skin. - Inspect the skin daily. Inspect the skin daily to identify vulnerable areas or early signs of pressure sores.
You will probably need the help of a care provider to do a thorough skin inspection. If you have enough mobility, you may be able to do this with the help of a mirror. - Manage incontinence to keep the skin dry. If you have urinary or bowel incontinence, take steps to prevent exposing the skin to moisture and bacteria. Your care may include frequently scheduled help with urinating, frequent diaper changes, protective lotions on healthy skin, or urinary catheters or rectal tubes. Your doctor, a dietitian or other members of the care team can recommend nutritional changes to help improve the health of your skin. - Choose a healthy diet. You may need to increase the amount of calories, protein, vitamins and minerals in your diet. You may be advised to take dietary supplements, such as vitamin C and zinc. - Drink enough to keep the skin hydrated. Good hydration is important for maintaining healthy skin. Your care team can advise you on how much to drink and signs of poor hydration. These include decreased urine output, darker urine, dry or sticky mouth, thirst, dry skin, and constipation. - Ask for help if eating is difficult. If you have limited mobility or significant weakness, you may need help with eating in order to get adequate nutrition. Other important strategies that can help decrease the risk of bedsores include the following: - Quit smoking. If you smoke, quit. Talk to your doctor if you need help. - Stay active. Limited mobility is a key factor in causing pressure sores. Daily exercise matched to your abilities can help maintain healthy skin. A physical therapist can recommend an appropriate exercise program that improves blood flow, builds up vital muscle tissue, stimulates appetite and strengthens the body. Dec. 13, 2014
Items in AFP with MESH term: Infant, Newborn Risks and Benefits of Pacifiers - Article ABSTRACT: Physicians are often asked for guidance about pacifier use in children, especially regarding the benefits and risks, and when to appropriately wean a child. The benefits of pacifier use include analgesic effects, shorter hospital stays for preterm infants, and a reduction in the risk of sudden infant death syndrome. Pacifiers have been studied and recommended for pain relief in newborns and infants undergoing common, minor procedures in the emergency department (e.g., heel sticks, immunizations, venipuncture). The American Academy of Pediatrics recommends that parents consider offering pacifiers to infants one month and older at the onset of sleep to reduce the risk of sudden infant death syndrome. Potential complications of pacifier use, particularly with prolonged use, include a negative effect on breastfeeding, dental malocclusion, and otitis media. Adverse dental effects can be evident after two years of age, but mainly after four years. The American Academy of Family Physicians recommends that mothers be educated about pacifier use in the immediate postpartum period to avoid difficulties with breastfeeding. The American Academy of Pediatrics and the American Academy of Family Physicians recommend weaning children from pacifiers in the second six months of life to prevent otitis media. Pacifier use should not be actively discouraged and may be especially beneficial in the first six months of life. Hyperbilirubinemia in the Term Newborn - Article ABSTRACT: Hyperbilirubinemia is one of the most common problems encountered in term newborns. Historically, management guidelines were derived from studies on bilirubin toxicity in infants with hemolytic disease. More recent recommendations support the use of less intensive therapy in healthy term newborns with jaundice. 
Phototherapy should be instituted when the total serum bilirubin level is at or above 15 mg per dL (257 micromol per L) in infants 25 to 48 hours old, 18 mg per dL (308 micromol per L) in infants 49 to 72 hours old, and 20 mg per dL (342 micromol per L) in infants older than 72 hours. Few term newborns with hyperbilirubinemia have serious underlying pathology. Jaundice is considered pathologic if it presents within the first 24 hours after birth, the total serum bilirubin level rises by more than 5 mg per dL (86 micromol per L) per day or is higher than 17 mg per dL (290 micromol per L), or an infant has signs and symptoms suggestive of serious illness. The management goals are to exclude pathologic causes of hyperbilirubinemia and initiate treatment to prevent bilirubin neurotoxicity. ABSTRACT: Hirschsprung's disease (congenital megacolon) is caused by the failed migration of colonic ganglion cells during gestation. Varying lengths of the distal colon are unable to relax, causing functional colonic obstruction. Hirschsprung's disease most commonly involves the rectosigmoid region of the colon but can affect the entire colon and, rarely, the small intestine. The disease usually presents in infancy, although some patients present with persistent, severe constipation later in life. Symptoms in infants include difficult bowel movements, poor feeding, poor weight gain, and progressive abdominal distention. Early diagnosis is important to prevent complications (e.g., enterocolitis, colonic rupture). A rectal suction biopsy can detect hypertrophic nerve trunks and the absence of ganglion cells in the colonic submucosa, confirming the diagnosis. Up to one third of patients develop Hirschsprung's-associated enterocolitis, a significant cause of mortality. Patients should be monitored closely for enterocolitis for years after surgical treatment of Hirschsprung's disease. With proper treatment, most patients will not have long-term adverse effects and can live normally. 
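As a purely illustrative aside (not a clinical decision tool), the age-based phototherapy cutoffs quoted in the hyperbilirubinemia abstract above reduce to a small lookup. The function name and the handling of ages under 25 hours are assumptions for demonstration only:

```python
def phototherapy_threshold_mg_dl(age_hours):
    """Total serum bilirubin cutoff (mg/dL) at or above which phototherapy
    is suggested for a term newborn, per the age bands quoted above.
    Jaundice in the first 24 hours is considered pathologic, so no simple
    cutoff applies there (returns None)."""
    if age_hours < 25:
        return None
    if age_hours <= 48:
        return 15   # infants 25 to 48 hours old
    if age_hours <= 72:
        return 18   # infants 49 to 72 hours old
    return 20       # infants older than 72 hours

# Example: a 60-hour-old term infant crosses the treatment line at 18 mg/dL.
print(phototherapy_threshold_mg_dl(60))  # -> 18
```

Any real decision would of course use the full guideline nomograms rather than three fixed bands.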
ABSTRACT: Physicians should use a checklist to facilitate discussions with new parents before discharging their healthy newborn from the hospital. The checklist should include information on breastfeeding, warning signs of illness, and ways to keep the child healthy and safe. Physicians can encourage breastfeeding by giving parents written information on hunger and feeding indicators, stool and urine patterns, and proper breastfeeding techniques. Physicians also should emphasize that infants should never be given honey or bottles of water before they are one year of age. Parents should be advised of treatments for common infant complaints such as constipation, be aware of signs and symptoms of more serious illnesses such as jaundice and lethargy, and know how to properly care for the umbilical cord and genital areas. Physicians should provide guidance on how to keep the baby safe in the crib (e.g., placing the baby on his or her back) and in the car (e.g., using a car seat that faces the rear of the car). It is also important to schedule a follow-up appointment for the infant. Why Can't I Get My Patients to Exclusively Breastfeed Their Babies? - Curbside Consultation Vacuum-Assisted Vaginal Delivery - Article ABSTRACT: The second stage of labor is a dynamic event that may require assistance when maternal efforts fail to effect delivery or when there are nonreassuring fetal heart tones. Therefore, knowing how to perform an operative vaginal delivery with forceps or vacuum is vital for family physicians who provide maternity care. Vacuum is rapidly replacing forceps as the predominant instrument, but each has advantages and disadvantages, including increased risk of maternal trauma with forceps and increased risk of neonatal cephalohematoma with vacuum. Use of a second instrument if the first one fails is associated with worse outcomes. Routine episiotomy in operative vaginal delivery is no longer recommended. 
The "ABCDEFGHIJ" mnemonic can facilitate proper use and application of the vacuum device and minimize risks, and practicing the techniques on mannequins can provide an introduction to the skills of operative vaginal delivery. ABSTRACT: Recent innovations in medical technology have changed newborn screening programs in the United States. The widespread use of tandem mass spectrometry is helping to identify more inborn errors of metabolism. Primary care physicians often are the first to be contacted by state and reference laboratories when neonatal screening detects the possibility of an inborn error of metabolism. Physicians must take immediate steps to evaluate the infant and should be able to access a regional metabolic disorder subspecialty center. Detailed knowledge of biochemical pathways is not necessary to treat patients during the initial evaluation. Nonspecific metabolic abnormalities (e.g., hypoglycemia, metabolic acidosis, hyperammonemia) must be treated urgently even if the specific underlying metabolic disorder is not yet known. Similarly, physicians still must recognize inborn errors of metabolism that are not detected reliably by tandem mass spectrometry and know when to pursue additional diagnostic testing. The early and specific diagnosis of inborn errors of metabolism and prompt initiation of appropriate therapy are still the best determinants of outcome for these patients. Sudden Infant Death Syndrome - Article ABSTRACT: Sudden infant death syndrome is the leading cause of death among healthy infants, affecting 0.57 per 1,000 live births. The most easily modifiable risk factor for sudden infant death syndrome is sleeping position. To reduce the risk of sudden infant death syndrome, parents should be advised to place infants on their backs to sleep and avoid exposing the infant to cigarette smoke. Other recommendations include use of a firm sleeping surface and avoidance of sleeping with soft objects, bed sharing, and overheating the infant. 
Pacifier use appears to decrease the risk of sudden infant death syndrome, but should be avoided until one month of age in infants who are breastfed. The occurrence of apparent life-threatening events does not increase the risk of sudden infant death syndrome, and home apnea monitoring does not lower the risk of sudden infant death syndrome. Supine sleeping position has increased the incidence of flattening of the occiput (deformational plagiocephaly), but this condition can be prevented and treated by encouraging supervised "tummy time," meaning that when awake, infants should spend as much time as possible on their stomachs. All apparent deaths from sudden infant death syndrome should be carefully investigated to exclude other causes of death, including child abuse. Families who have an infant die from sudden infant death syndrome should be offered emotional support and grief counseling.
The air within the lung at the end of a forced inspiration can be divided into four compartments or lung volumes (Fig. 32–1). The volume of air exhaled during normal quiet breathing is the tidal volume (Vt). The maximal volume of air inhaled above tidal volume is the inspiratory reserve volume (IRV), and the maximal air exhaled below tidal volume is the expiratory reserve volume (ERV). The residual volume (RV) is the amount of air remaining in the lungs after a maximal exhalation. The combinations or sums of two or more lung volumes are termed capacities (Fig. 32–1). Vital capacity (VC) is the maximal amount of air that can be exhaled after a maximal inspiration. It is equal to the sum of IRV, Vt, and ERV. When measured on a forced expiration, it is called the forced vital capacity (FVC). When measured over an exhalation of at least 30 seconds, it is called the slow vital capacity (SVC). The VC is approximately 75% of the total lung capacity (TLC), and when the SVC is within the normal range, a significant restrictive disorder is unlikely. Normally, the values for SVC and FVC are very similar unless airway obstruction is present. TLC is the volume of air in the lung after the maximal inspiration and is the sum of the four primary lung volumes (IRV, Vt, ERV, and RV). Its measurement is difficult because the amount of air remaining in the chest after maximal exhalation (RV) must be measured by indirect methods. The definition of restrictive lung disease is based on a reduction in TLC (i.e., an inability to get air into the lung or restriction to air movement on inhalation). The functional residual capacity (FRC) is the volume of air remaining in the lungs at the end of a quiet expiration. FRC is the normal resting position of the lung; it occurs when there is no contraction of either inspiratory or expiratory muscles and normally is 40% of TLC. 
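The volume-and-capacity arithmetic above can be made concrete with a short sketch. The specific liter values are hypothetical adult numbers chosen for demonstration, not reference values:

```python
# Hypothetical primary lung volumes in liters (illustrative only).
IRV, Vt, ERV, RV = 3.0, 0.5, 1.1, 1.2

VC  = IRV + Vt + ERV        # vital capacity: maximal exhalation after maximal inspiration
TLC = IRV + Vt + ERV + RV   # total lung capacity: sum of the four primary volumes
FRC = ERV + RV              # functional residual capacity: air left after quiet expiration

print(round(VC, 1), round(TLC, 1), round(FRC, 1))  # -> 4.6 5.8 2.3
print(round(100 * VC / TLC))    # -> 79, near the ~75% of TLC cited for VC
print(round(100 * FRC / TLC))   # -> 40, matching FRC at ~40% of TLC
```

Note that RV cannot be exhaled, so VC, which spirometry can measure directly, always falls short of TLC by exactly RV.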
Inspiratory capacity (IC) is the maximal volume of air that can be inhaled from the end of a quiet expiration and is the sum of Vt and IRV. FVC, which represents the total amount of air that can be exhaled, can be expressed as a series of timed volumes. The forced expiratory volume in the first second of expiration (FEV1) is the volume of air exhaled during the first second of the FVC maneuver. Although FEV1 is a volume, it conveys information on obstruction because it is measured over a known time interval. FEV1 depends on the volume of air within the lung and the effort during exhalation; therefore, it can be diminished by a decrease in TLC or by a lack of effort. A more sensitive way to measure obstruction is to express FEV1 as a ratio of FVC. This ratio is independent of the patient's size or TLC; therefore, FEV1/FVC is a specific measure of airway obstruction with or without restriction. Normally, this ratio is ≥75%, and any value <70% to 75% suggests obstruction. Because flow is defined as the change in volume with time, forced expiratory flow can be determined graphically by dividing the volume change by the time change. The forced expiratory flow (FEF) during 25% to 75% of FVC (FEF25%–75%) represents the mean flow during the middle half of the FVC. FEF25%–75%, formerly called the maximal midexpiratory flow, is reported frequently in the assessment of small airways. The 95% confidence limit is so wide that FEF25%–75% has limited utility in the early diagnosis of small airways disease in an individual subject. The peak expiratory flow (PEF), also called maximum forced expiratory flow (FEFmax), is the maximum flow obtained during FVC. This measurement is used often in the outpatient management of asthma because it can be measured with inexpensive peak flowmeters. All lung volumes and flows are compared with normal values obtained from healthy subjects. 
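The two derived measurements just described, the FEV1/FVC ratio and FEF25%–75%, are simple arithmetic on the spirogram. The spirometry values below are hypothetical, for illustration only:

```python
def fev1_fvc_ratio(fev1_l, fvc_l):
    """FEV1 expressed as a percentage of FVC; values below ~70-75%
    suggest airway obstruction."""
    return 100.0 * fev1_l / fvc_l

def fef25_75(fvc_l, t25_s, t75_s):
    """Mean forced expiratory flow (L/s) over the middle half of the FVC:
    the middle 50% of the volume divided by the time taken to exhale it."""
    return (0.5 * fvc_l) / (t75_s - t25_s)

print(round(fev1_fvc_ratio(3.8, 4.7)))   # -> 81 (normal)
print(round(fev1_fvc_ratio(1.9, 3.6)))   # -> 53 (suggests obstruction)
print(fef25_75(4.0, 0.25, 0.75))         # -> 4.0 L/s
```

Because the ratio divides out lung size, the same cutoff applies regardless of the patient's TLC, which is exactly why it is preferred over FEV1 alone.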
There are significant ethnic and racial variations in normal values, and all PFTs should report that race/ethnic adjustment factors have been used. This is especially important in African American subjects, who on average carry a smaller proportion of their standing height in the trunk relative to the legs. If not corrected for ethnicity, many subjects will appear to have restrictive lung function. The 2005 American Thoracic Society-European Respiratory Society (ATS-ERS) guidelines for interpretation of PFT results recommend that, for spirometry in the United States, the National Health and Nutrition Examination Survey (NHANES) III reference be used for subjects aged 8 to 80 years and the Wang equation be used in subjects younger than 8 years.2 Spirometry is the most widely available and useful PFT. It takes only 15 to 20 minutes, carries no risks, and provides information about obstructive and restrictive disease. Spirometry allows for measurement of all lung volumes and capacities except RV, FRC, and TLC; it also allows assessment of FEV1 and FEF25%–75%. Spirometry measurements can be reported in two different formats: standard spirometry (Fig. 32–2) and the flow-volume loop (Fig. 32–3). In standard spirometry, the volumes are recorded on the vertical (y) axis and the time on the horizontal (x) axis. In flow-volume loops, volume is plotted on the horizontal (x) axis, and flow (derived from volume/time) is plotted on the vertical (y) axis. The shape of the flow-volume loop can be helpful in differentiating obstructive and restrictive defects and in diagnosing upper airway obstruction (Fig. 32–4). This curve gives a visual representation of obstruction because the expiratory descent becomes more concave with worsening obstruction. Standard spirometry. Curve 1 is for a normal subject with normal FEV1; curve 2 is for a patient with mild airways obstruction; curve 3 is for a patient with moderate airways obstruction; curve 4 is for a patient with severe airways obstruction.
(BTPS, body temperature and pressure, saturated with water vapor.) Normal flow-volume loop. Flows are measured on the vertical (y) axis, and lung volumes are measured on the horizontal (x) axis. Forced vital capacity (FVC) can be read from the tracing as the maximal horizontal deflection. Instantaneous flow (V̇max) at any point in FVC also can be measured directly. (FEF50%, forced expiratory flow at 50% of forced vital capacity; PEF, peak expiratory flow; PIF, peak inspiratory flow; RV, residual volume; TLC, total lung capacity.) A. Flow-volume loop depicting mild obstruction characterized by decreased flow at low lung volumes. B. Moderate airflow obstruction characterized by a more concave curve. C. Variable intrathoracic obstruction in which peak flow is decreased at higher lung volumes with normalization of the curve at lower lung volumes. D. Restrictive lung disease with a curve that is decreased in width but with a normal shape. (RV, residual volume; TLC, total lung capacity.) Spirometry measures three of the four basic lung volumes but cannot measure RV. RV must be measured to determine TLC. TLC should be measured anytime VC is reduced. In the setting of chronic obstructive pulmonary disease (COPD) and a low VC, measurement of TLC can help to determine the presence of a superimposed restrictive disorder. The four methods for measuring TLC are helium dilution, nitrogen washout, body plethysmography, and chest x-ray measurement (planimetry). The first two methods are called dilution techniques and measure only lung volumes in communication with the upper airway. In patients with airway obstruction who have trapped air, dilution techniques will underestimate the actual volume of the lungs. Planimetry measures the circumference of the lungs on the posteroanterior and lateral views of a chest x-ray film and estimates the total lung volume. Body plethysmography, or body box, is the most accurate technique for lung volume determinations.
It measures all the air in the lungs, including trapped air. Body plethysmography is based on Boyle's gas law (P1V1 = P2V2): the volume of a gas in a closed system varies inversely with the pressure applied to it. Changes in alveolar pressure are measured at the mouth, along with pressure changes in the body box, whose volume is known. Lung volumes can then be determined by measuring the changes in pressure caused by panting against a closed shutter.2

Measurement of lung volumes provides useful information about the elastic recoil of the lungs. If elastic recoil is increased (as in interstitial lung disease), lung volumes (TLC) are reduced. When elastic recoil is reduced (as in emphysema), lung volumes are increased.

Carbon Monoxide Diffusing Capacity

The diffusing capacity of the lungs (Dl) is a measurement of the ability of a gas to diffuse across the alveolar-capillary membrane. Carbon monoxide is the usual test gas because normally it is not present in the lungs and is much more soluble in blood than in lung tissue. When the diffusing capacity is determined with carbon monoxide, the test is called the diffusing capacity of lung for carbon monoxide (Dlco). Because Dlco is directly related to alveolar volume (Va), it frequently is normalized to the value Dl/Va, which allows for its interpretation in the presence of abnormal lung volumes (e.g., after surgical lung resection). The diffusing capacity will be reduced in all clinical situations where gas transfer from the alveoli to capillary blood is impaired.3 Common conditions that reduce Dlco include lung resection, emphysema (loss of functioning alveolar-capillary units), and interstitial lung disease (thickening of the alveolar-capillary membrane).
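The body-box calculation described earlier follows from a first-order expansion of Boyle's law. A minimal sketch is below; the function and variable names are hypothetical, and alveolar pressure is approximated as barometric pressure minus water-vapor pressure (roughly 970 cmH2O at sea level and body temperature):

```python
def thoracic_gas_volume(delta_v_L, delta_p_cmH2O, p_alv_cmH2O=970.0):
    """Estimate lung volume (L) from a panting maneuver in a body box.

    Boyle's law gives P * V = (P + dP) * (V - dV); expanding and dropping
    the second-order term dP * dV leaves V ~= P * dV / dP.
    """
    return p_alv_cmH2O * delta_v_L / delta_p_cmH2O

# e.g., a 30-mL box-volume shift per 10 cmH2O mouth-pressure swing
frc_estimate = thoracic_gas_volume(delta_v_L=0.030, delta_p_cmH2O=10.0)
```

Because the shutter is closed during the maneuver, this measurement includes trapped gas, which is why it exceeds dilution estimates in obstructed patients.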
Normal PFTs with reduced Dlco should suggest the possibility of pulmonary vascular disease (e.g., pulmonary embolus) but also can be seen with anemia, early interstitial lung disease, and mild Pneumocystis jiroveci pneumonia (PCP) in patients with acquired immune deficiency syndrome.

Obstructive lung disease implies a reduced capacity to get air through the conducting airways and out of the lungs. This reduction in airflow may be caused by a decrease in the diameter of the airways (bronchospasm), a loss of their integrity (bronchomalacia), or a reduction in elastic recoil (emphysema) with a resulting decrease in driving pressure. The most common diseases associated with obstructive pulmonary function are asthma, emphysema, and chronic bronchitis; however, bronchiectasis, infiltration of the bronchial wall by tumor or granuloma, aspiration of a foreign body, and bronchiolitis also cause obstructive PFTs. The standard test used to evaluate airway obstruction is the forced expiratory spirogram. Standard spirometry and flow-volume loop measurements include many variables; however, according to ATS guidelines, the diagnosis of obstructive and restrictive ventilatory defects should be made using the basic measurements of spirometry.3 A reduction in FEV1 (with normal FVC) establishes the diagnosis of obstruction. When both FEV1 and FVC are reduced, FEV1 alone cannot be used to assess airway obstruction because such patients may have either obstruction or restriction: in restrictive lung disease, an inability to get air into the lung reduces all expiratory volumes (FEV1, FVC, and SVC). In these patients, a better measurement is the FEV1/FVC ratio. Patients with restrictive lung disease have reduced FEV1 and FVC, but FEV1/FVC remains normal. Although a normal FEV1/FVC ratio is >70% to 75%, the ratio is age dependent, and slightly lower values may be normal in older patients.
Younger children have increased lung elastic recoil and may have higher ratios; children with asthma should have an FEV1/FVC ≥85%. According to the 2007 National Asthma Education and Prevention Program, any value below this threshold should be considered a sign of obstruction, even if FEV1 and FVC are within the normal range. Caution should be used in interpreting obstruction when FEV1/FVC is below normal but FEV1 and FVC are both within the normal range, because this pattern can be seen in healthy, athletic subjects as well as in subjects with mild asthma. Clinical judgment and the response to a bronchodilator challenge are often required to separate these two groups; in children, improvement in FEV1 after a bronchodilator often is the only way to document mild-to-moderate obstructive lung disease. In screening spirometry performed in office practice, FEV6 (forced expiratory volume in 6 seconds) can be used in place of FVC; FEV6 is more reproducible when testing is performed by less skilled personnel. The measurement of FEF25%-75% also is abnormal in patients with obstructive airways disease, but in general this test has so much variability that it adds little to the measurement of FEV1 and FEV1/FVC. FEF25%-75% has been of value in monitoring lung transplant patients for graft rejection,4 and a reduced value may be an early indicator of acute rejection. Although there is no standardization for interpreting the severity of obstruction, most pulmonary laboratories consider FEV1/FVC <70% of the predicted value diagnostic of obstruction; the degree of obstruction then is graded by the percent predicted of FEV1: FEV1 <60% of the predicted value indicates moderate obstruction, and <40% of the predicted value indicates severe obstruction. In patients with obstruction, a dose of a bronchodilator (e.g., albuterol or isoproterenol) by metered-dose inhaler is given during the initial examination.
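The laboratory convention just described amounts to a small decision rule. The sketch below uses hypothetical function and parameter names, with both inputs expressed as percent of the predicted value, as in the text:

```python
def classify_obstruction(fev1_fvc_pct_pred, fev1_pct_pred):
    """Grade airway obstruction from spirometry per the lab convention above.

    fev1_fvc_pct_pred: FEV1/FVC as a percent of the predicted ratio.
    fev1_pct_pred:     FEV1 as a percent of the predicted value.
    """
    if fev1_fvc_pct_pred >= 70:
        return "no obstruction"   # ratio preserved; no obstructive defect
    if fev1_pct_pred < 40:
        return "severe"
    if fev1_pct_pred < 60:
        return "moderate"
    return "mild"
```

As the text cautions, the ratio is age dependent, so a fixed 70% cutoff can over-call obstruction in older patients and under-call it in children; the rule is a starting point, not a substitute for clinical judgment.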
An increase in FEV1 of >12% and >0.2 L suggests an acute bronchodilator response.3 Because bronchodilator responsiveness is variable over time, the lack of an acute response should not preclude a 6- to 8-week trial of bronchodilators and/or corticosteroids.

Although all patients with obstructive lung disease of any etiology will have reduced flow rates on forced exhalation, the pattern on PFTs may be helpful in differentiating among the various etiologies (Table 32-1). Asthma is characterized by variable obstruction that often improves or resolves with appropriate therapy. Because asthma is an inflammatory disorder of the airways (predominantly large airways), Dlco is normal. Most patients with acute asthma have a bronchodilator response >15% to 20%; however, this response is also seen in 20% of patients with COPD, who are said to have asthmatic bronchitis. Chronic bronchitis may be limited to the airways, but the vast majority of patients with chronic bronchitis and airway obstruction have a mixture of bronchitis and emphysema and have a reduction in Dlco. Therefore, Dlco is the best PFT for separating asthma from COPD.

Table 32-1. Specific Patterns of Pulmonary Function in Patients with Chronic Obstructive Pulmonary Disease
| Increased airway resistance | ++++ | ++++ | + |
| Response to bronchodilators | ++++ | +b | −b |

After the diagnosis of obstructive airways disease is established, the course and response to therapy are best followed by serial spirometry. The multicenter Lung Health Study demonstrated an abnormally rapid decline in FEV1 (90–150 mL/y) in patients with COPD who continued to smoke.5 Smoking cessation often resulted in an increase in FEV1 during the first year and a near-normal rate of decline (30–50 mL/y) in subsequent years.
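The acute-response criterion above requires both a relative and an absolute FEV1 gain, which can be checked mechanically. A minimal sketch with hypothetical names; pre- and post-bronchodilator values are in litres:

```python
def acute_bronchodilator_response(pre_fev1_L, post_fev1_L):
    """True if FEV1 rises by more than 12% AND more than 0.2 L after
    bronchodilator, the criterion cited in the text."""
    delta = post_fev1_L - pre_fev1_L
    percent_gain = 100.0 * delta / pre_fev1_L
    return delta > 0.2 and percent_gain > 12.0
```

Requiring both limbs matters: a large percentage gain on a tiny baseline FEV1 fails the absolute test, and a 0.25-L gain on a large baseline can still fall short of 12%, so neither criterion alone suffices.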
Far too few Americans with prediabetes know they have the condition, according to a new report from the Centers for Disease Control and Prevention (CDC). In 2010, about one in three U.S. adults 20 and older (about 79 million people) had prediabetes, which can lead to full-blown type 2 diabetes, but only about 11% were aware of it. Treating prediabetes early with dietary changes, weight loss, and increased physical activity can help prevent or delay the onset of type 2 diabetes. With prediabetes, a person's blood level of glucose (the major blood sugar) is higher than normal but not high enough for a diagnosis of diabetes. According to the CDC and the American Diabetes Association (ADA), without lifestyle changes such as weight loss and exercise, 15% to 30% of people with prediabetes—which does not result in any symptoms that would alert someone to the condition—will develop type 2 diabetes within 5 to 10 years of the onset of prediabetes. With type 2 diabetes, a person's body does not produce enough insulin or isn't able to make use of the insulin the body does produce. People need insulin to be able to use glucose for energy. When you eat food, the body breaks down all of the sugars and starches, releasing glucose, the basic energy fuel for the body's cells. Insulin takes the sugar from the blood into the cells. When glucose builds up in the blood instead of being transported into cells, it can lead to very serious complications such as heart attacks, strokes, kidney failure, blindness, and poor blood circulation leading to the need for limb amputations.
According to the CDC, risk factors for both prediabetes and type 2 diabetes include: - Being over age 45 - Being overweight or obese - Having a family history of diabetes - Having an African American, Hispanic/Latino, American Indian, Asian American, or Pacific Islander racial or ethnic background - Having a history of diabetes while pregnant (gestational diabetes) - Having given birth to a baby weighing 9 pounds or more - Being physically active less than 3 times a week Two blood tests are used to determine if someone has prediabetes—a fasting glucose test and an A1C test to screen for high blood glucose levels, also known as hyperglycemia: - Fasting glucose: This test measures the level of glucose in the blood after an 8-12 hour fast. The test is often done as part of a regular physical, when someone has symptoms suggesting high or low blood glucose levels or during pregnancy and requires a blood sample drawn from a vein in the arm or a skin prick. Non-diabetic value is less than 100 mg/dL. - A1c: This test measures the average amount of glucose in the blood over the last 2 to 3 months and is used to diagnose prediabetes and diabetes, to monitor the progression of diabetes, and to help make treatment decisions. Non-diabetic value is 5.7% or less. Having the tests, getting the results, and then reviewing them with your doctor are all critical. According to the ADA, while prediabetes increases your risk for diabetes and its complications, making changes in diet and exercise can reduce your risk of progressing to full-blown diabetes. ADA recommendations include: - If testing shows that your blood glucose levels are in the normal range, get rechecked every 3 years, or more often if your doctor recommends that. - If testing determines that you do have prediabetes, you should be checked for type 2 diabetes every 1 to 2 years. 
- If you have prediabetes, you can reduce your risk of progressing to type 2 diabetes by almost 60% by losing 7% of your body weight (that's about 15 pounds if you weigh 200 pounds) and by getting moderate exercise, such as brisk walking just 30 minutes a day, 5 days a week.
- For some people, taking medication for prediabetes can return blood glucose levels to normal.

"Because the vast majority of persons with prediabetes are unaware of their condition, identification and improved awareness of prediabetes are critical first steps to encourage those with prediabetes to make healthy lifestyle changes or to enroll in evidence-based, lifestyle-change programs aimed at preventing type 2 diabetes," say the study authors.

The National Diabetes Education Program, a partnership of the National Institutes of Health and CDC, has resources that can help reduce the risk of type 2 diabetes, including "Small Steps. Big Rewards. Your Game Plan to Prevent Type 2 Diabetes" and "Just One Step," which has information on making important lifestyle changes. Check your risk for prediabetes today by taking the CDC's "Am I at Risk?" quiz.

NOTE: This article is based on research that utilizes the sources cited here as well as the collective experience of the Lab Tests Online Editorial Review Board. This article is periodically reviewed by the Editorial Board and may be updated as a result of the review. Any new sources cited will be added to the list and distinguished from the original sources used.

(March 22, 2013) Adler, C. Most People with Prediabetes Are Unaware of Their Condition. JournalWatch. Available online at http://firstwatch.jwatch.org/cgi/content/full/2013/322/3 through http://firstwatch.jwatch.org. Accessed April 15, 2013.

(March 22, 2013) Centers for Disease Control and Prevention. Awareness of Prediabetes — United States, 2005–2010. MMWR 62(11); 209-212. Available online at http://www.cdc.gov/mmwr/preview/mmwrhtml/mm6211a4.htm through http://www.cdc.gov.
Accessed April 15, 2013.

(October 5, 2012) Centers for Disease Control and Prevention. Prediabetes Facts. Available online at http://www.cdc.gov/diabetes/prevention/factsheet.htm through http://www.cdc.gov. Accessed April 15, 2013.

(August 13, 2012) Centers for Disease Control and Prevention. Prediabetes. Available online at http://www.cdc.gov/diabetes/consumer/prediabetes.htm through http://www.cdc.gov. Accessed April 16, 2013.

Centers for Disease Control and Prevention. Press Kit: Diabetes. PDF available for download at http://www.cdc.gov/media/presskits/aahd/diabetes.pdf through http://www.cdc.gov. Accessed April 15, 2013.

(October 9, 2012) Centers for Disease Control and Prevention. Prediabetes: Am I At Risk? Available online at http://www.cdc.gov/diabetes/prevention/prediabetes.htm through http://www.cdc.gov. Accessed April 16, 2013.

(©2013) American Diabetes Association. Top Five Things You Need to Know About Prediabetes. Available online at http://www.diabetes.org/diabetes-basics/prevention/pre-diabetes/pre-diabetes-faqs.html through http://www.diabetes.org. Accessed April 21, 2013.

(©2013) American Diabetes Association. Prediabetes Facts. Available online at http://www.diabetes.org/diabetes-basics/prevention/pre-diabetes/pre-diabetes-faqs.html through http://www.diabetes.org. Accessed April 15, 2013.
On 27 September 2011, orphan designation (EU/3/11/901) was granted by the European Commission to Merck Sharp & Dohme Limited, United Kingdom, for dinaciclib for the treatment of chronic lymphocytic leukaemia. - What is chronic lymphocytic leukaemia? Chronic lymphocytic leukaemia (CLL) is cancer of a type of white blood cell called B lymphocytes. In this disease, the lymphocytes multiply too quickly and live for too long, so that there are too many of them circulating in the blood. The cancerous lymphocytes look normal, but they are not fully developed and do not work properly. Over a period of time, the abnormal cells replace the normal white blood cells, red blood cells and platelets (components that help the blood to clot) in the bone marrow (the spongy tissue inside the large bones in the body). CLL is the most common type of leukaemia and mainly affects older people. It is rare in people under the age of 40 years. CLL is a long-term debilitating and life-threatening disease because some patients develop severe infections. - What is the estimated number of patients affected by the condition? At the time of designation, chronic lymphocytic leukaemia affected approximately 3 in 10,000 people in the European Union (EU)*. This is equivalent to a total of around 152,000 people, and is below the ceiling for orphan designation, which is 5 people in 10,000. This is based on the information provided by the sponsor and the knowledge of the Committee for Orphan Medicinal Products (COMP). *Disclaimer: For the purpose of the designation, the number of patients affected by the condition is estimated and assessed on the basis of data from the European Union (EU 27), Norway, Iceland and Liechtenstein. This represents a population of 506,300,000 (Eurostat 2011). - What treatments are available? 
Treatment for CLL is complex and depends on a number of factors, including the extent of the disease, whether it has been treated before, and the patient’s age, symptoms and general state of health. Patients whose CLL is not causing any symptoms or is only getting worse very slowly may not need treatment. Treatment for CLL is only started if symptoms become troublesome. At the time of designation, the main treatment for CLL was chemotherapy (medicines to treat cancer). The sponsor has provided sufficient information to show that dinaciclib might be of significant benefit for patients with chronic lymphocytic leukaemia because it works in a different way to existing treatment and early studies indicate that it may improve the outcome of patients with this condition. This assumption will need to be confirmed at the time of marketing authorisation, in order to maintain the orphan status. - How is this medicine expected to work? Dinaciclib blocks the action of specific enzymes (specialised proteins) called cyclin-dependent kinases (CDK), which are involved in regulating the cell cycle. The cell cycle is the series of events that takes place in a cell leading to its division and replication in the body. In order to become active, CDK has to attach to proteins called cyclins. In cancer cells, there are many of these cyclins and CDK becomes abnormally active leading to the growth and spread of cancer cells. By targeting CDK and blocking its action, dinaciclib is expected to interfere with the cell cycle of cancer cells in CLL and to bring about their death. - What is the stage of development of this medicine? The effects of dinaciclib have been evaluated in experimental models. At the time of submission of the application for orphan designation, clinical trials with dinaciclib in patients with CLL were ongoing. At the time of submission, dinaciclib was not authorised anywhere in the EU for CLL or designated as an orphan medicinal product elsewhere for this condition. 
In accordance with Regulation (EC) No 141/2000 of 16 December 1999, the COMP adopted a positive opinion on 15 July 2011 recommending the granting of this designation.

- Opinions on orphan medicinal product designations are based on the following three criteria:
- the seriousness of the condition;
- the existence of alternative methods of diagnosis, prevention or treatment;
- either the rarity of the condition (affecting not more than 5 in 10,000 people in the EU) or insufficient returns on investment.

Designated orphan medicinal products are products that are still under investigation and are considered for orphan designation on the basis of potential activity. An orphan designation is not a marketing authorisation. As a consequence, demonstration of quality, safety and efficacy is necessary before a product can be granted a marketing authorisation.

- Name: EU/3/11/901: Public summary of opinion on orphan designation: Dinaciclib for the treatment of chronic lymphocytic leukaemia (English only); first published 17/10/2011
- Disease/condition: Treatment of chronic lymphocytic leukaemia
- Date of decision: 27/09/2011
- Orphan decision number: EU/3/11/901

Review of designation
The Committee for Orphan Medicinal Products reviews the orphan designation of a product if it is approved for marketing authorisation.

Sponsor's contact details:
Merck Sharp & Dohme Limited
Hertfordshire EN11 9BU
Telephone: +44 1992 467272
Telefax: +44 1992 705507

For contact details of patients' organisations whose activities are targeted at rare diseases see:
- Orphanet, a database containing information on rare diseases which includes a directory of patients' organisations registered in Europe.
- European Organisation for Rare Diseases (EURORDIS), a non-governmental alliance of patient organisations and individuals active in the field of rare diseases.
A new vaccine made from brain cancer patients' own tumor cells has been shown to improve survival time by nearly 50 percent in patients with glioblastoma multiforme, an aggressive cancer that typically kills patients within 15 months of diagnosis, according to a new Phase II trial. The findings, announced on Tuesday at the American Association of Neurological Surgeons (AANS) meeting in Miami, showed that 40 patients treated with a combination of the injectable vaccine and standard treatment had an increased average survival time of nearly 48 weeks, 15 weeks longer than the 33 weeks for patients who received only standard therapy. The six-month survival rate was 93 percent for the vaccinated group compared to 68 percent for the other 86 cancer patients treated with other therapies, and researchers reported that several of the vaccinated patients are still alive after more than a year. The recurring cancer has traditionally been treated with surgery, radiotherapy and chemotherapy. The HSPPC-96 vaccine is made of heat shock proteins from the patient's own surgically removed tumor and works by inducing an immune response against the cancer. Heat shock proteins, found in virtually all living organisms, from bacteria to humans, play an important role in protein folding, intracellular protein trafficking, and coping with proteins degraded by heat and other stresses. "These results are provocative. They suggest that doctors may be able to extend survival even longer by combining the vaccine with other drugs that enhance this immune response," lead investigator neurosurgeon Dr. Andrew Parsa, from the University of California at San Francisco, said in a statement. Parsa said that researchers will now be conducting a more extensive, randomized clinical trial to determine the vaccine's effectiveness combined with the chemotherapy drug Avastin.
Researchers said that about 17,000 Americans are diagnosed with glioblastoma each year, and only 2 percent of patients survive longer than five years, even with treatment. In 2010, federal regulators approved the first therapeutic cancer vaccine for prostate cancer, and currently many other cancer vaccines are in clinical trial.
Infertility affects approximately 6 million people in the United States, or roughly one out of seven couples. Infertility affects men and women equally: about one-third of infertility cases are attributed to male factors and one-third to female factors. For the remaining one-third of infertile couples, infertility is caused by a combination of problems in both partners or, in about 20% of those cases, is unexplained. Factors that can cause infertility include problems with ovulation, endometriosis, problems with sperm, and delay of childbearing.

How Do I Know If I'm Infertile?

A couple might suspect they have fertility problems if they have had unprotected intercourse for 12 months without conception, or if the woman is 35 years or older and has gone 6 months without conception. Women are also considered infertile if they have repeated miscarriages. Many people don't think about fertility problems until they actually start trying for a baby. Although the causes of infertility are often varied and unknown, there are some signs and symptoms that could indicate potential fertility problems for both men and women. In women, irregular periods (not having periods about every 25-28 days), or no periods at all, could indicate a problem with ovulation and with hormones not working properly. Painful or heavy periods could indicate that something is wrong with the lining of the womb, or endometrium: benign growths such as a polyp or fibroid may be present, or endometrial tissue may be growing in other places, a condition known as endometriosis. For men, the most common cause of infertility is related to sperm. Sperm-related problems include low sperm count or no sperm, sperm that don't move quickly enough, and sperm that are not formed correctly.
In some cases, tubes inside the male reproductive organs are blocked. The Initial Visit It is important that both partners be tested initially to assess and determine the potential causes of infertility. Whenever possible, we encourage both partners to attend the first visit together. The Female Evaluation includes: - A physical exam and review of your medical history. - A pelvic ultrasound to look at your ovaries and uterus. - Hormonal testing (blood tests). - A hysterosalpingogram (HSG) to see if the fallopian tubes are open. (if necessary) The Male Evaluation includes: - Semen analysis** **If the semen analysis reveals abnormalities, the male may need to consult a urologist who specializes in male infertility. Once our physician has had an opportunity to do a full evaluation, he can review the various treatment options and give you a reasonable idea of your chances of achieving pregnancy. Thanks to the many options available, many of our patients will be able to experience the joy of parenthood. We invite you to give us a call. We’re standing by ready to help you build the family of your dreams.
What Is a Stroke?

A stroke occurs when blood flow to the brain is interrupted, either by a clot or a rupture in a blood vessel. When the part of the brain that's deprived of blood can no longer get the oxygen and other nutrients it needs, it begins to die.

Types of strokes

There are two types of stroke:
- Ischemic strokes occur when a clot blocks a blood vessel to the brain. If blood flow is blocked only temporarily, it results in a transient ischemic attack (TIA), also known as a ministroke.
- Hemorrhagic strokes occur when a blood vessel ruptures, causing blood to leak into the brain.

Ischemic strokes, which account for more than 80 percent of all strokes, occur when a clot obstructs a blood vessel that supplies blood to the brain. Ischemic strokes usually result from atherosclerosis, a condition in which fatty deposits develop on the walls of blood vessels. There are two ways in which atherosclerosis can cause an ischemic stroke:
- In cerebral thrombosis, a blood clot develops right at the clogged part of the vessel.
- In a cerebral embolism, a blood clot forms in another location, usually the heart or large arteries of the chest and neck. If part of the clot breaks loose, it will travel through the bloodstream until it reaches a blood vessel that's too small to allow its passage. When this occurs in the brain, the result is a stroke.

Ischemic strokes can also result from atrial fibrillation, a condition in which the upper chambers of the heart beat rapidly and irregularly, causing blood to pool and clot. If one of these clots breaks loose, it can result in a stroke.

Transient ischemic attacks

Sometimes called ministrokes, transient ischemic attacks (TIAs) are warning signs that an ischemic stroke may be looming on the horizon. In a TIA, the blockage of blood flow to the brain is only temporary, so symptoms disappear after a short time. But since TIAs are often precursors of a major stroke, they should be taken just as seriously.
In a hemorrhagic stroke, a blood vessel actually ruptures and bleeds into the brain. This can occur in two ways: - In an intracerebral hemorrhage, a ruptured blood vessel bleeds directly into the brain tissue. As blood pools in the brain, it compresses the surrounding brain tissue and may also cause a sudden increase in pressure within the brain. The affected brain cells can be damaged and may begin to die. - A subarachnoid hemorrhage occurs when a blood vessel outside of the brain ruptures, filling the subarachnoid space (the area of skull surrounding the brain) with blood. This causes a sudden increase in pressure around the brain, which may result in rapid loss of consciousness or death. Hemorrhagic strokes usually occur when a blood vessel is already weakened in one of two ways: - In an aneurysm, a weakened region of a blood vessel stretches out like a balloon. If left untreated, the ballooning vessel may continue to stretch until it bursts. - An arteriovenous malformation (AVM) is a cluster of abnormally formed blood vessels. Although AVMs don't always cause problems, these vessels are more likely to rupture.
Congenital heart disease occurs in 9 of every 1,000 live births. About 25 percent of these babies will have critical congenital heart disease (CCHD), whereby surgery or transcatheter intervention is required in the first year of life. In the United States, almost all types of congenital heart defects can be surgically repaired or palliated, and survival rates continue to improve. Early recognition and timely intervention can improve outcomes for these patients. The physical findings consistent with congenital heart disease, such as heart murmurs, tachypnea, or overt cyanosis, may not be evident before the newborn is discharged from the hospital. Pulse-oximetry monitoring, a noninvasive method to determine oxygen saturation and identify hypoxemia, has been proposed as one strategy for early detection of CCHD. In August 2009, the American Academy of Pediatrics (AAP) and the American Heart Association (AHA) reviewed the available evidence and published a statement regarding the use of pulse oximetry to detect critical congenital heart disease in newborns. They concluded that it is a viable strategy to improve early detection of CCHD. In September 2010, the US Health and Human Services (HHS) Secretary's Advisory Committee on Heritable Disorders in Newborns and Children (SACHDNC) recommended that critical congenital cyanotic heart disease be added to the uniform newborn screening panel. Their goal was to identify early in life those newborns with structural heart defects usually associated with hypoxia that could result in significant morbidity or death with closing of the ductus arteriosus or other physiologic changes in the newborn period. An expert technical panel recommended 7 specific lesions as the primary targets for screening: hypoplastic left heart syndrome (HLHS); pulmonary atresia; tetralogy of Fallot; total anomalous pulmonary venous return; transposition of the great arteries; tricuspid atresia; and truncus arteriosus.
In January 2011, a work group chosen by the SACHDNC, the AAP, the AHA and the American College of Cardiology Foundation (ACCF) was convened to outline screening implementation strategies. The meeting focused on recommendations for pulse-oximetry screening for CCHD, developing service infrastructure for follow-up, and addressing knowledge gaps. The work group recognized that many newborns with the targeted congenital heart defects do not develop "clinically appreciable cyanosis" until after discharge from the hospital, and that with some lesions, such as HLHS, newborns have significant cardiovascular compromise without apparent cyanosis. Therefore, they recommended that the SACHDNC rename the target conditions "critical congenital heart disease" (CCHD); the word cyanotic was omitted. The work group's recommendations for a standardized approach to screening and diagnostic follow-up were published in Pediatrics in November 2011 as "Strategies for Implementing Screening for Critical Congenital Heart Disease." Pulse oximetry cannot detect all cases of CCHD, and parents and caretakers should be advised that a negative test result does not exclude the possibility of heart disease.
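The screening thresholds themselves are not spelled out above, but one round of the commonly published AAP-endorsed algorithm can be sketched as a decision rule. The function and variable names are hypothetical, and the cut-points (pass at ≥95% saturation in the right hand or foot with ≤3% absolute difference; immediate fail below 90%; repeat otherwise) are an assumption drawn from that published algorithm rather than from the text above:

```python
def cchd_screen(spo2_right_hand, spo2_foot):
    """One round of newborn pulse-oximetry screening for CCHD.

    Thresholds follow the commonly published AAP-endorsed algorithm
    (an assumption; the source text does not state them):
      fail   -- SpO2 < 90% at either site
      pass   -- SpO2 >= 95% in right hand or foot AND difference <= 3%
      repeat -- anything in between (re-screen in 1 hour, up to twice;
                still-equivocal results are treated as a failed screen)
    """
    if spo2_right_hand < 90 or spo2_foot < 90:
        return "fail"
    if max(spo2_right_hand, spo2_foot) >= 95 and \
            abs(spo2_right_hand - spo2_foot) <= 3:
        return "pass"
    return "repeat"
```

A failed screen prompts diagnostic evaluation such as echocardiography, and, as the article stresses, a passing result does not exclude heart disease.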
Many terms are used to identify smokers who don't smoke daily: intermittent smokers, social smokers, or light and occasional smokers. Social smokers are usually young, nondaily smokers who smoke in the presence of other people rather than alone. Light and occasional smokers fall in between intermittent and social smokers. For this article, occasional, intermittent, light and social smokers will be included under the umbrella term "social smoker," since cessation efforts should be directed toward all categories. Health Consequences of Social Smoking According to the Centers for Disease Control1 and other research, smoking causes a multitude of health problems: coronary heart disease leading to stroke and heart attack; peripheral vascular disease; abdominal aortic aneurysm; emphysema; bronchitis; and chronic airway obstruction. Smoking also causes decreased bone mass in postmenopausal women, complications in pregnant women, and multiple types of cancer.1 In the past, attention to tobacco-related disease focused mainly on regular smokers. But there is growing evidence that even social smokers experience greater health risks compared to nonsmokers.2,3 Okuyemi et al4 found that the risks of coronary heart disease among light smokers are similar to those among regular smokers. Other behavioral and health risk factors for young adult social smokers include high alcohol use, unsafe driving practices, less exercise, depression and utilization of emergency mental services.3 Young adults can become as dependent on nicotine as adults.5 Addiction to tobacco is clear after 100 cigarettes1 or within a month of initiation, even after smoking only a few cigarettes.3 Levinson et al6 state that symptoms of tobacco abuse develop rapidly and no minimum nicotine dose or duration has been linked to an addiction level.7 The challenge in treating college students and social smokers is that they are more resistant to antismoking efforts and they do not see themselves as smokers. 
College students who use tobacco intermittently tend to believe they will not become addicted and that no health risks are associated with smoking.3 Nicotine addiction involves factors other than physical dependence.8 Barriers to cessation mentioned by college students include emotional triggers for smoking, social aspects and the habit of smoking.9 Women in college tend to smoke in more emotional situations whereas men in college smoke in peer situations. Regardless of gender, 75% of college smokers mentioned emotional situations as reasons for tobacco use and that tobacco use was a learned habit for coping with stressful situations.10 Motivations associated with tobacco use often hinder cessation efforts. Research by Rasmussen-Cruz et al10 found that students at Mexican universities believed that tobacco use could help them form friendships. Students also report that peers often downplay the health risks associated with tobacco use. Beginner smokers see healthy smokers and dismiss the notion that cigarettes are harmful. Most college-age smokers do not seek cessation help and therefore don't formulate a plan to quit. Research shows that many students view presenting to a health clinic for cessation help as a personal failure.11 Identifying Social Smokers Screening for tobacco use should occur at every office visit. The provider should ask about daily use of tobacco as well as social smoking. A form that asks only whether the patient is a smoker may fail to identify 50% or more of college students who are currently smoking.6 A better question to identify social smokers is, "In the last 3 months, have you smoked cigarettes at all, even a puff?"3 To identify social smokers, researchers have determined that Internet and phone screening tools are also effective.12 Tobacco intervention should be addressed early in students' college careers. Students who decrease the amount they smoke while in college are more likely to quit prior to graduation. 
Those who do not decrease the amount they smoke while in college are less likely to have a desire to quit with each passing year.13 Public Health Service (PHS) guidelines14 recommend brief interventions at every office visit. This includes the 5 A's, a strategy that identifies cessation desires. The mnemonic refers to: Ask, Advise, Assess, Assist and Arrange for follow-up.15 Murphy-Hoefer et al16 found that young adult college students respond more positively when advertisements focus on the health consequences of tobacco use. This was supported by Wolburg,11 with the exception that health consequences should not focus on death since young students are not able to relate to something thought to be far in the future. Using screening tools enables the clinician to link smoking behaviors to cessation tools. The Reasons for Smoking Scale (RSS) was developed based on Silvan Tomkins's affect management model for smoking.17 According to this model, people smoke for emotional motives - either to enhance positive affect or to decrease negative affect. The original RSS did not address social smoking, so it has been modified to include a social smoking subscale. This subscale was derived from Russell et al18 in addition to traditional subscales developed by Ikard et al.19 The traditional subscales focus on handling, pleasure, habit/automatism, stimulation, and tension reduction/relaxation associated with smoking.20 The revised scale is called the Modified Reasons for Smoking Scale (MRSS), and it was first used with French smokers. The MRSS and its subscales allow the clinician to determine addiction plus behavioral reasons for smoking. 
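To illustrate how subscale-based scoring of this kind can be used in practice, here is a minimal Python sketch. The item numbers, item-to-subscale mapping, and 1-5 Likert scale are hypothetical assumptions for illustration only, not the validated MRSS instrument published by Berlin et al.

```python
# Hypothetical sketch of scoring an MRSS-style questionnaire.
# The item-to-subscale mapping, item numbers, and 1-5 Likert scale
# are illustrative assumptions, NOT the validated instrument.

SUBSCALES = {
    "stimulation": [1, 8],
    "handling": [2, 9],
    "pleasure": [3, 10],
    "tension_reduction": [4, 11],
    "habit_automatism": [5, 12],
    "social_smoking": [6, 13],
}

def score_mrss(responses):
    """Average the Likert ratings for each subscale; a higher mean
    indicates stronger endorsement of that motive for smoking."""
    return {name: sum(responses[i] for i in items) / len(items)
            for name, items in SUBSCALES.items()}

# Example: a respondent endorsing mainly social motives.
answers = {1: 1, 2: 2, 3: 2, 4: 1, 5: 2, 6: 5,
           8: 1, 9: 1, 10: 3, 11: 2, 12: 1, 13: 4}
scores = score_mrss(answers)
top_motive = max(scores, key=scores.get)
print(top_motive, scores[top_motive])  # social_smoking 4.5
```

The point of such scoring is that the dominant subscale can then be matched to a cessation strategy - for example, behavioral substitution for habit/automatism versus restructuring of social situations for a predominantly social smoker.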
The MRSS has demonstrated validity and reliability when used with patients in France, Brazil and the Netherlands.8,21 Identifying the motives of a college-aged tobacco user requires the establishment of a therapeutic relationship, and counseling should be a component of this.10 Berg et al9 identified four motivators for the college smoker to quit: cost of smoking, health concerns, improving fitness level and the stigma of being a smoker. In the adolescent population, nicotine replacement has been essentially ineffective.15 Instead of nicotine replacement, Abroms and colleagues22 tried a different approach to enhance tobacco cessation rates. They issued the X-Pack Smoking Cessation Kit and provided counseling. The kit included a quitting booklet, X-Pack quit cards (explained reasons to quit smoking), Success-O-meter/Ick-U-Lator (explained risks and costs associated with smoking), Wrigley's Orbit chewing gum, Hotlix cinnamon toothpicks and preoccupation putty (to help with cravings). At 6 months, the quit rate based on self-report was about twice as high in the X-Pack group compared to the natural quit rate. A smoke-free policy for college campuses has been advocated to discourage tobacco use. In a national sample of college students, less tobacco use was documented among students living in smoke-free residences.23 Smoke-free policies can reinforce positive social networks and preferences.6 Restricting smoking in places where students socialize is important to help reduce social smokers.24 These statements are supported by the American College Health Association,25 which states that all college campuses should be smoke-free. Time to Intervene Tobacco use by people who are not chronic smokers has been difficult to define. Social smoking has been identified in various studies and identifies someone who usually smokes in a social setting. 
Halperin et al3 found that more than 28% of college students smoke cigarettes and despite intentions to quit, they smoke throughout college and beyond. Social smoking has been linked to increased health risks.2 Social smokers have difficulty with smoking cessation. Studies suggest that social smokers do not see themselves as smokers, deny health risks associated with social smoking, believe they can quit anytime they want, and do not believe they can become addicted to cigarettes with occasional use.3 Tobacco industries often target the college-age population, linking smoking with positive peer encounters and decreased anxiety.24 Social smokers tend to get missed during routine tobacco screening because they do not perceive themselves as smokers. Questions that ask only if they are smokers miss this population. Questions targeting "any" smoking in the last 3 months will help identify the social smoker. Once identified, finding proper cessation techniques poses the next challenge. Targeting the reasons and triggers for smoking and linking this to education will help with cessation. Patients are more successful at quitting when they are included in their cessation plan. The MRSS helps find the core reasons a social smoker is smoking.20 Nicotine replacement has not been as successful within this unique population.15 Use of the X-Pack improved cessation rates in at least one study.22 Studies examining anti-tobacco education found that health risks were a motivator for quitting,17 except when targeting death as a risk.11 Nonsmoking campuses also reduce the amount social smokers are able to smoke, taking away that peer-smoking link. 1. Centers for Disease Control and Prevention. Health effects of cigarette smoking. http://www.cdc.gov/tobacco/data_statistics/fact_sheets/health_effects/effects_cig_smoking/ 2. Okuyemi K, et al. Relationship between smoking reduction and cessation among light smokers. Nicotine Tob Res. 2010;12(10):1005-1010. 
3. Halperin AC, et al. Cigarette smoking and associated health risks among students at five universities. Nicotine Tob Res. 2010;12(2):96-104. 4. Okuyemi K, et al. Light smokers: issues and recommendations. Nicotine Tobacco Res. 2002;4(Suppl 2):S103-S107. 5. Orleans CT. Preventing tobacco-caused cancer: A call to action. Environ Health Perspect. 1995;103(Suppl 8):149-152. 6. Levinson AH, et al. Smoking, but not smokers: identity among college smokers who smoke cigarettes. Nicotine Tob Res. 2007;9(8):845-852. 7. DiFranza JR, et al. The development sequence of tobacco withdrawal symptoms of wanting, craving and needing. Pharmacol Biochem Behav. 2012;100(3):494-497. 8. Souza ES, et al. University of Sao Paula reasons for smoking scale: a new tool for the evaluation of smoking motivation. J Bras Pneumol. 2010;36(6):768-778. 9. Berg CJ, et al. Defining "smoker": College student attitudes and related smoking characteristics. Nicotine Tob Res. 2010;12(9):963-969. 10. Rasmussen-Cruz B, et al. Tobacco consumption and motives for use in Mexican university students. Adolescence. 2006;41(162):355-368. 11. Wolburg J. Misguided optimism among college student smokers: Leveraging their quit-smoking strategies for smoking cessation campaigns. J Cons Aff. 2009;43(2):305-331. 12. An LC, et al. Feasibility of Internet health screening to recruit college students to an online smoking cessation intervention. Nicotine Tob Res. 2007;9(Suppl 1):S11-S18. 13. Harris JB, et al. Characteristics associated with self-identification as a regular smoker and desire to quit among college students who smoke cigarettes. Nicotine Tob Res. 2008;10(1):69-76. 14. Torrijos RM, Glantz SA. The US public health service "treating tobacco use and dependence clinical practice guidelines" as a legal standard of care. Tob Control. 2006;15(6):447-451. 15. Friend K, Colby S. Healthcare providers' use of brief clinical interventions for adolescent smokers. Drugs: Education, Prevention and Policy. 2006;13(3):263-280. 
16. Murphy-Hoefer R, et al. The influence of tobacco countermarketing ads on college students' knowledge, attitudes, and beliefs. J Am College Health. 2010;58(4):373-381. 17. Fiala KA, et al. Construct validity and reliability of college students' responses to the reasons for smoking scale. J Am College Health. 2010;58(6):571-577. 18. Russell MA, et al. The classification of smoking by factorial structure of motives. J Royal Stat Soc. 1974;137(3):313-346. 19. Ikard FF, et al. A scale to differentiate between types of smoking as related to the management of affect. Int J Addictions. 1969;4(4):649-659. 20. Berlin I, et al. The modified reasons for smoking scale: factorial structure, gender effects and relationship with nicotine dependence and smoking cessation in French smokers. Addiction. 2003;98(11):1575-1583. 21. Bourdrez H, De Bacquer D. A Dutch version of the modified reasons for smoking scale: factorial structure, reliability and validity. J Eval Clin Pract. 2011;18(4):799-806. 22. Abroms LC, et al. Getting young adults to quit smoking: a formative evaluation of the X-Pack program. Nicotine Tob Res. 2008;10(1):27-33. 23. Rigotti NA, et al. Tobacco use by Massachusetts public college students: long term effect of the Massachusetts Tobacco Control Program. Tobacco Control. 2002;11(Suppl 2):20-24. 24. Waters K, et al. Characteristics of social smoking among college students. J Amer Coll Health. 2006;55(3):133-139. 25. American College Health Association. Position statement on tobacco on college and university campuses. American College Health Association. http://www.acha.org/Publications/docs/Position_Statement_on_Tobacco_Nov2011.pdf Carol Sternberger is the associate vice chancellor for faculty development at Indiana University-Purdue University in Fort Wayne, Ind. Heather Krull is an assistant professor and family nurse practitioner at the same university. Diana Bantz is an associate professor of nursing at Ball State University in Muncie, Ind.
Physical Examination of the Knee A complete knee examination is always done for a knee complaint. Both of your knees will be checked, and the results for the injured knee will be compared to those of the healthy knee. Your doctor will also check that the nerves and blood vessels are intact. Your doctor will: - Inspect your knee visually for redness, swelling, deformity, or skin changes. - Feel your knee (palpation) for warmth or coolness, swelling, tenderness, blood flow, and sensation. - Test your knee's range of motion and listen for sounds. In a passive test, your doctor will move your leg and knee joint. In an active test, you will use your muscles to move your leg and knee joint. At the same time, your doctor will listen for popping, grinding, or clicking sounds. - Check your knee ligaments, which stabilize the knee. Ligament tests include: - The valgus and varus tests, which check the medial and lateral collateral ligaments. In these tests, while you lie on the examining table, your doctor places one hand on your knee joint and the other on your ankle and moves your leg side to side. - The posterior drawer test, which checks the posterior cruciate ligament. In this test, you lie on the table with your knee bent at a 90-degree angle and your foot flat on the table. Your doctor will put his or her hands around the top of your leg just below your knee and push straight back on your leg. - The Lachman test, which checks the anterior cruciate ligament (ACL). In this test, while you lie on the table, your doctor will slightly bend your knee and hold your thigh with one hand. With the other hand, he or she will hold the upper part of your calf and pull forward. The Lachman test diagnoses a complete ACL tear. - The anterior drawer test, which checks the ACL. In this test, you lie on the table with your knee bent at a 90-degree angle and your foot flat on the table. Your doctor will put his or her hands around the top of your leg just below your knee and pull straight back on your leg. 
- A pivot shift test, which checks the ACL. In this test, the leg is extended and your doctor holds your calf with one hand while twisting the knee and pushing toward the body. It is often done just before a knee arthroscopy and after anesthesia has completely relaxed the muscles. A McMurray test may be done if your doctor suspects a problem with the menisci based on your medical history and the above exams. In this test, while you lie on the table, your doctor holds your knee and the bottom of your foot. He or she then pushes your leg up (bending your knee) while turning the leg and pressing on the knee. If there is pain and the sound or feeling of a click, the menisci may be damaged. Arthrometric testing of the knee may also be done. In this test, your doctor will use an instrument to measure the looseness of your knee. This test is especially useful in people whose pain or physical size makes a physical exam difficult. An arthrometer has two sensor pads and a pressure handle that allows your doctor to put force on the knee. The instrument is strapped on to your lower leg so that the sensor pads are placed on the knee cap and the small bump just below it (tibial tubercle). Your doctor then measures pressure by pulling or pushing on the pressure handle. Your exam may also include other tests to assess the degree of the injury and to identify damage to other parts of the knee. Why It Is Done A complete physical exam of the knee is always done for a knee complaint, whether the complaint is from a recent or sudden (acute) injury or from long-lasting or recurrent (chronic) symptoms. In general, in a normal knee exam: - The knee has its natural strength. - The knee is not tender when touched. - Both knees look and move the same way. - There are no signs of fluid in or around the knee joint. - The knee and leg move normally when the ligaments are examined. - There is no abnormal clicking, popping, or grinding when knee structures are moved or stressed. 
- The toes are pink and warm, and there is no numbness in the lower leg or foot. If any of these findings are not true—for example, the knee is tender—you may have a knee injury. But the results of a knee exam vary depending on whether the exam is for a sudden injury to the knee or for long-term symptoms and also depending on how long it has been since the injury occurred. An abnormal finding does not always mean that your knee is injured. Your doctor will use the results of the exam, plus your medical history, to make a diagnosis. What To Think About These tests provide the best information if there is little or no knee swelling, you are able to relax, and your doctor is able to move your knee and leg freely. If this is not the case, it may be difficult to accurately check your knee. If your knee is red, hot, or very swollen, a knee joint aspiration (arthrocentesis) may be done, which involves removing fluid from the knee joint. This is done to: - Help relieve pain and pressure, which may make the physical exam easier and make you more comfortable. - Check joint fluid for possible infection or inflammation. - See if there is blood in the joint fluid, which may indicate a tear in a ligament or cartilage. - See if there are drops of fat, which may indicate a broken bone. Local anesthetic may be injected after aspiration to reduce pain and make the exam easier. If you are going to have arthroscopy, the knee may be examined in the operating room before the procedure, while you are under general or spinal anesthesia. Last Revised: April 5, 2012 Author: Healthwise Staff To learn more visit Healthwise.org
Microcephaly is an important neurologic sign, but there is no uniformity in its definition and evaluation. Microcephaly may result from any insult that disturbs early brain growth and can be seen in association with hundreds of genetic syndromes. Annually, approximately 25,000 infants in the US will be diagnosed with microcephaly. A new AAN guideline, "Practice Parameter: Evaluation of the Child with Microcephaly (an evidence-based review)," co-developed in full collaboration with the Child Neurology Society, was published in the September 15 issue of Neurology® (2009; 73:11). AAN.com asked the lead authors, Stephen Ashwal and David Michelson, to discuss how the guideline was developed. They spoke with AAN.com Science Editor Jose G. Merino, MD, MPhil. AAN.com: How was this practice parameter developed? Authors: The practice parameter was prepared over a two-year period using the process developed by the AAN Quality Standards Subcommittee. As such it was reviewed by about 80 individuals and went through five revisions. We followed the strict and standardized format using evidence-based medicine methods in which the strength of the recommendations was based on the strength of the evidence. AAN.com: Can you briefly summarize the criteria for the diagnosis of microcephaly? Authors: Microcephaly is usually defined as a head circumference (HC) of more than two standard deviations (SDs) below the mean for age and gender, although some academics have advocated for defining "severe" microcephaly as an HC more than three SDs below the mean. Microcephaly, particularly when severe or postnatally acquired, is frequently found in children with developmental delays, cognitive impairments, cerebral palsy, and epilepsy. AAN.com: How often should neurologists and pediatricians measure head circumference, and until what age? Authors: The head circumference should be measured at birth and then serially, along with other growth parameters, until three to five years of age. 
As we categorize microcephaly into two major groups (congenital and postnatal), you can see that there is a need for ongoing HC measurements. Most children who develop microcephaly present within the first two to three years of life. Accurate HC measurements can be obtained with a flexible but non-stretchable measuring tape pulled tightly across the most prominent part of the back (occiput) and front (supraorbital ridges) of the head. Standardized growth charts in percentiles for boys and girls from birth to age 36 months are available online from the website of the National Center for Health Statistics. The appendix in the parameter contains links to a website that provides head circumference charts. AAN.com: How should clinicians approach the evaluation of children with a small head circumference? Authors: If a child is found to have a head circumference that is below normal for age, the physician should first verify the accuracy of the measurement and then verify that the measurement was plotted appropriately for age and gender. If the child's head size is more than two SD below normal at any time in childhood, an evaluation should be considered, but what one does depends on several clinical factors that are discussed in the parameter. AAN.com: When should neurologists and pediatricians obtain neuroimaging studies and genetic tests? Authors: Diagnostic testing is important to try to ascertain the etiology, as it can help with prognosis about future associated risks of coexistent conditions and also help the family understand what their risks are for having additional children who might be similarly affected. Ultimately, one must use one's clinical judgment in deciding what testing should be done. When there is evidence by history or examination of a specific cause for microcephaly, further testing may be done for confirmation. 
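The 2-SD criterion the authors describe amounts to a z-score cutoff against chart data. Here is a minimal sketch; the reference mean and SD are placeholder values for illustration, since a real evaluation uses age- and sex-specific values from standardized growth charts such as the NCHS charts mentioned above.

```python
# Sketch of the >2 SD screening criterion discussed above.
# The reference mean and SD are illustrative placeholders; a real
# evaluation reads age- and sex-specific values from standardized
# growth charts (e.g. the NCHS charts).

def hc_z_score(hc_cm, ref_mean_cm, ref_sd_cm):
    """Head circumference expressed in SDs from the reference mean."""
    return (hc_cm - ref_mean_cm) / ref_sd_cm

def classify_hc(z):
    if z < -3:
        return "severe microcephaly (>3 SD below mean)"
    if z < -2:
        return "microcephaly (>2 SD below mean)"
    return "within normal limits for this criterion"

# Hypothetical example: HC of 42.0 cm against placeholder reference
# values (mean 46.0 cm, SD 1.4 cm) for the child's age and sex.
z = hc_z_score(42.0, 46.0, 1.4)
print(round(z, 2), classify_hc(z))  # -2.86 microcephaly (>2 SD below mean)
```

As the authors stress, a measurement below the cutoff should first prompt re-measurement and verification of correct plotting before any further work-up.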
When microcephaly remains unexplained, however, current data support the consideration of diagnostic tests such as neuroimaging studies and targeted genetic testing. It is likely that those having a family history of benign microcephaly or microcephaly that is proportionate with height and weight are less likely to have developmental consequences, but at this point we recommend monitoring for developmental and neurological abnormalities. We have included two algorithms in the parameter that outline a stepwise approach to how a work-up can be done. AAN.com: What are the most frequent environmental causes of microcephaly? Authors: The clearest prenatal causes are maternal use of alcohol, maternal use of certain medications, and poorly controlled maternal phenylketonuria. It is also possible that maternal tobacco smoking, substance abuse, and poorly controlled diabetes might contribute to the development of congenital microcephaly. In older children, lead poisoning and chronic renal failure are known to cause microcephaly (though the former is currently very rare), but it is likely that almost any type of chronic disease can contribute to the development of microcephaly. AAN.com: What are the most common genetic syndromes associated with small head circumference? Authors: The most common syndromic cause of microcephaly is Down syndrome, in which microcephaly is seen in 30 percent, though the diagnosis is usually readily made based on other clinical features. Other common genetic syndromes associated with microcephaly include Rett syndrome (commonly due to mutations in the MeCP2 gene) and Angelman syndrome (due to loss of function of the maternal UBE3A gene in the 15q11-13 region); these are sometimes less recognizable. A finding of microcephaly can contribute to the suspicion for these disorders. 
Dozens of other relatively rare genetic causes of both syndromic and non-syndromic microcephaly have been described, and there are tables and appendices in the parameter that review this information. AAN.com: Is there a role for genetic counseling for parents of children with microcephaly? Authors: There is a definite role for genetic counseling for families who have a child with microcephaly. As the fields of neuroimaging and genetic testing continue to advance over the next decade, it is clear that we will be able to provide increasingly detailed information regarding the causes and recurrence risks for microcephaly. Parents appreciate physicians' efforts to "find out what is wrong with their child." AAN.com: What studies are required to refine these guidelines and strengthen the level of evidence for the next revision? Authors: There is a need for studies to better define the prevalence of congenital and postnatal microcephaly and to establish whether the significance of microcephaly is altered by ethnic background, a history of prematurity, head shape, and parental head size. There is also a need for neuropsychological, neuroimaging, genetic, metabolic, neurophysiologic (i.e., EEG), and ancillary (vision and hearing) testing to establish the diagnostic yields of such testing and inform the development of a better evidence-based algorithmic approach to evaluation. Dr. Ashwal serves on the scientific advisory boards of the Tuberous Sclerosis Association and the International Pediatric Stroke Society; he is also an editor of Pediatric Neurology. In addition, he receives research support from the NIH. Dr. Michelson has nothing to disclose. Dr. Merino performed a one-time consultation with staff from Bell, Falla and Associates. He is the AAN.com Associate Website Editor for Science.
A key challenge in developing a fully implantable hearing aid is designing a microphone that will work effectively under the skin. Bedoya notes that the properties of human skin change throughout the day with the user’s hydration levels and other factors, and he hinted that the company is developing technology to detect those changes and adjust to them. He also points out that the location of the microphone behind the ear is an important factor that can be fine-tuned. Outside experts see significant progress being made in implantable microphone design. Joseph Roberson, an ear surgeon and the CEO of the California Ear Institute, in Palo Alto, CA, says, “I listened to a good-fidelity musical signal received by an implantable microphone positioned under half an inch of raw steak.” The functional outcome of the Otologics device, he says, is “roughly equivalent to existing visible external technology.” But critics question whether Otologics can match the performance of conventional hearing aids, and they ask whether the new device is worth the surgical risk and the cost ($19,000 in Europe, excluding the cost of the surgery, versus $6,000 for a high-end conventional aid; the device is available in Europe but still in clinical trials in the United States). Gerald Loeb, a professor of biomedical engineering at the University of Southern California, argues that implanted hearing aids should outperform conventional ones before they can be considered worth the extra cost and risk. 
He also questions the emphasis on making an invisible device: “How big an issue is it to have a little appliance on your ear when the whole world is walking around with cell-phone headsets and iPod earpieces?” Nonetheless, the phase I study concluded that the Otologics device “serves as a viable treatment alternative for moderate to severe sensorineural hearing loss.” Bedoya says that the company is addressing the problems found by the study and preparing for phase II trials, in which 90 subjects will be tested with a revised device. Roberson suggests that the device may be most suitable for “alpha adopters … who are motivated to keep their use of a hearing device a private matter, or those who are intolerant of standard hearing-aid technology.” Silicon Valley executives, he thinks, may be first in line. Michael Chorost is the author of Rebuilt: How Becoming Part Computer Made Me More Human.
The study of gene regulation is a prerequisite for understanding how cells respond appropriately to a changing environment, how they implement developmental programs, and how a defect in gene regulation can result in carcinogenesis. For many years it was thought that gene regulation involved only transcription factors and their interactions with DNA; changes in the chromatin structure of a gene were considered to be the passive consequence of the binding of these factors. However, it is now clear that chromatin structure is an integral part of the process of gene regulation. Gene activation involves the recruitment of a set of factors to a promoter in response to appropriate signals, ultimately resulting in the formation of an initiation complex by RNA polymerase II and transcription. These events must occur in the presence of nucleosomes, which are compact structures capable of blocking transcription at every step. To circumvent this chromatin block, eukaryotic cells possess ATP-dependent chromatin remodelling machines and nucleosome modifying complexes. The former (e.g. the SWI/SNF complex) use ATP to drive conformational changes in nucleosomes and to move nucleosomes along DNA. The latter contain enzymatic activities which modify the histones post-translationally to alter their DNA-binding properties and to mark them for recognition by other complexes, which have activating or repressive roles (the basis of the "histone code"). Nucleosome modifying enzymes include histone acetylases (HATs), deacetylases (HDACs), methylases, kinases and ubiquitin conjugating enzymes. The current excitement in the chromatin field reflects the recognition that chromatin structure is of central importance in gene regulation and that the cell has dedicated complex systems to manipulate the repressive properties of chromatin structure to maximum effect. Furthermore, multiple connections between chromatin and disease are apparent. 
Many low resolution studies of chromatin structure have indicated that major changes in chromatin structure occur at promoters and at other regulatory elements of genes, but whether nucleosomes were conformationally altered, moved around, or simply removed was unclear. Current models are designed to account for these observations: they propose that the primary function of remodelling complexes is to convert the chromatin structure of a promoter to a state conducive to transcript initiation. In contrast, our high resolution chromatin studies indicate that, at least for two yeast genes, the chromatin structure of the entire gene is remodelled, not just the promoter, with important implications for mechanisms of gene regulation. There are three ongoing projects in the lab. For more information, please select one of the following: The yeast HIS3 gene: SWI/SNF complex and domain remodelling. We have developed a model system to investigate the remodelling and histone modifications that occur on gene activation in the yeast Saccharomyces cerevisiae . We chose to study yeast because biochemical studies of chromatin structure could be combined with molecular genetics. We purify native plasmid chromatin containing a chosen gene expressed at basal or activated levels of transcription from yeast cells. This work was inspired by that of Dr. Bob Simpson. We have employed high resolution methods to elucidate the structure of transcriptionally active native chromatin. Our studies have provided a very detailed and quite surprising picture of the events occurring in the chromatin structure of yeast genes when they are activated for transcription. Initially, we chose the CUP1 gene for our studies 1, 2, 3, because its regulation is well understood, with well-defined basal and activated states. CUP1 encodes a metallothionein required to protect cells from the toxic effects of copper. 
Much later, it became clear that CUP1 was not the ideal choice, because the remodelling factors acting at CUP1 had not yet been identified, making it difficult to exploit the genetic advantages of yeast. Accordingly, we decided to study a second well-characterised gene, HIS3, for which this information is available. HIS3 encodes an enzyme required for histidine metabolism and is induced by amino acid starvation. HIS3 is activated by the transcriptional activator Gcn4p and is regulated by the SAGA and NuA4 histone acetyltransferase (HAT) complexes and by the SWI/SNF ATP-dependent remodelling machine. We discovered that induction of both CUP1 and HIS3 results in the creation of a domain of remodelled chromatin structure that extends far beyond the promoter, to include the entire gene 1, 4, 8. In the case of HIS3, induction results in a dramatic loss of nucleosomal supercoiling, a decompaction of the chromatin, a general increase in the accessibility of the chromatin to restriction enzymes and gene-wide mobilisation of nucleosomes. Formation of this domain of remodelled chromatin requires the SWI/SNF complex and the activator Gcn4p. The NURF-like remodelling complex, Isw1, also mobilises nucleosomes on HIS3, but to different positions. We propose that Gcn4p stimulates the activity of the SWI/SNF complex which then directs remodelling of the surrounding chromatin, generating a highly dynamic structure 8. We propose that this dynamic chromatin structure facilitates access to the DNA for both initiation and elongation factors, as well as promoting transcription through chromatin by RNA polymerase II (Figure 1). Our studies of HIS3 chromatin structure are at an exciting stage and are now focused on understanding the structure of transcriptionally active chromatin (see also ref. 9). In summary, our work on CUP1 and HIS3 indicates that, at least for these two genes, the target of remodelling complexes is a domain rather than just the promoter. 
This is an important finding, because it suggests that remodelling complexes act on chromatin domains. In a wider context, the fact that remodelling complexes can participate in the formation of chromatin domains ("gene expression neighbourhoods") might be important in understanding the formation of domains in higher eukaryotes (discussed in 5).

Figure 1. A working model for the transcriptional activation of HIS3 chromatin. The chromatin structure of the HIS3 gene expressed at basal levels (in the absence of the Gcn4p activator) is characterised by a dominant array of positioned nucleosomes (D1-D5). Alternative (A) arrays composed of quantitatively minor positioned nucleosomes are also present, indicating heterogeneity in HIS3 chromatin structure, even in the basal state. We propose that basal HIS3 chromatin is essentially static in nature. In the presence of the Gcn4p activator, the activity of the SWI/SNF complex is stimulated, resulting in a net mobilisation of nucleosomes from the D-arrays to the A-arrays. The Isw1 complex also affects the distribution of the nucleosomes, particularly at the 3'-end of HIS3. The inference is that HIS3 chromatin structure is highly dynamic. The nucleosomal flux created by the competing activities of the various remodelling complexes should facilitate access to the DNA for both transcript initiation and elongation complexes. Note also that we have shown previously that HIS3 nucleosomes apparently undergo a major conformational change requiring both Gcn4p and the SWI/SNF complex 4, which might increase the transparency of the chromatin still further. Adapted from Ref. 8.

The role of the Spt10 HAT-activator in cell cycle regulation of the yeast histone genes

We have shown that nucleosomes on the CUP1 promoter are acetylated in response to induction by copper and that this targeted acetylation is dependent on Spt10p, a putative histone acetyltransferase (HAT) 3.
SPT10 was originally identified as one of a set of SPT genes, mutations in which suppress phenotypes associated with insertion of a yeast transposable element into promoters. SPT10 is not an essential gene, but the null allele is associated with very slow growth and global defects in gene regulation. Spt10p activates the histone genes, which it regulates in conjunction with Spt21p, the Hir co-repressor and the SWI/SNF complex. We and others originally proposed that Spt10p might be a co-activator recruited to promoters by activators. However, we have shown recently that Spt10p is in fact a sequence-specific DNA binding protein that recognises the histone UAS elements [(G/A)TTCCN6TTCNC] 6. Spt10p appears to be the long-sought activator of the core histone genes. We found that it binds with high affinity and with extraordinary positive cooperativity to pairs of histone UAS elements. Since pairs of histone UAS elements are found in the core histone promoters and nowhere else in the yeast genome, there are no other predicted sites for Spt10p binding. We have presented evidence that the effects of Spt10p on other genes are indirect, mediated through global defects in chromatin structure arising from a deficit of histones in spt10 cells 6. We are making rapid progress in understanding the biological functions of Spt10p 7, 10.

Spt10p appears to be only the second example of a sequence-specific DNA binding domain fused to a HAT domain. However, Spt10p has not yet been shown to possess HAT activity in vitro. We are using a variety of approaches to identify the acetyltransferase activity of Spt10p. In addition, we are attempting to identify proteins which interact with Spt10p. Spt10p binds to the histone UAS elements and therefore should be classified as an activator rather than a co-activator, but it does not have a conventional activation domain. Usually, activators recruit HAT enzymes as co-activators.
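As an aside, the UAS consensus quoted above, (G/A)TTCCN6TTCNC, translates directly into a regular expression. The following sketch is purely illustrative (it is not part of the lab's analysis pipeline, and the toy sequence is made up, not a real histone promoter); it scans one strand only, ignoring reverse-complement matches:

```python
import re

# (G/A) TTCC N6 TTC N C, with N = any nucleotide
UAS_PATTERN = re.compile(r"[GA]TTCC[ACGT]{6}TTC[ACGT]C")

def find_uas_sites(sequence):
    """Return (start, matched_site) for every histone UAS match on this strand."""
    return [(m.start(), m.group()) for m in UAS_PATTERN.finditer(sequence.upper())]

# toy sequence containing one embedded UAS element
seq = "ccgtATTCCAAGTCGTTCGCtaggct"
print(find_uas_sites(seq))  # one match starting at position 4
```

Since Spt10p binds cooperatively to pairs of UAS elements, a genome-wide search for functional sites would look for two such matches in close proximity, as described in the text.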
In the case of Spt10p, we suggest that Spt10p recruits an activation domain. Our current aim is to place our observations in their biological context of S-phase-regulated expression of the histone genes.

Is the homologous human foamy virus integrase also a sequence-specific DNA binding protein?

Recently, we identified the DNA-binding domain of Spt10p: it comprises about 110 residues and includes an H2-C2 zinc finger 7. A BLAST search identified a homologous H2-C2 zinc finger in the integrase of the human/simian/primate foamy retrovirus (PFV). This zinc finger is conserved in all retroviruses, including HIV, and is thought to participate in protein-protein interactions. However, the homology of the foamy virus zinc finger with the DNA-binding domain of Spt10p suggests that the PFV zinc finger might in fact be a sequence-specific DNA-binding domain. We are addressing this possibility.

- Shen, C.-H., Leblanc, B.P., Alfieri, J.A., and Clark, D.J. (2001). Remodelling of yeast CUP1 chromatin involves activator-dependent re-positioning of nucleosomes over the entire gene and flanking sequences. Mol. Cell. Biol. 21, 534-547.
- Shen, C.-H., and Clark, D.J. (2001). DNA sequence plays a major role in determining nucleosome positions in yeast CUP1 chromatin. J. Biol. Chem. 276, 35209-35216.
- Shen, C.-H., Leblanc, B.P., Neal, C., Akhavan, R., and Clark, D.J. (2002). Targeted histone acetylation at the yeast CUP1 promoter requires the transcriptional activator, the TATA boxes and the putative histone acetylase encoded by the SPT10 gene. Mol. Cell. Biol. 22, 6406-6416.
- Kim, Y., and Clark, D.J. (2002). SWI/SNF-dependent formation of a domain of labile chromatin structure at the yeast HIS3 gene. Proc. Nat. Acad. Sci. USA 99, 15381-15386.
- Oliver, B., Parisi, M., and Clark, D.J. (2002). Gene expression neighbourhoods. J. Biology 1, 4.
- Eriksson, P.R., Mendiratta, G., McLaughlin, N.B., Wolfsberg, T.G., Mariño-Ramírez, L., Pompa, T.A., Jainerin, M., Landsman, D.L., Shen, C.-H., and Clark, D.J. (2005). Global regulation by the yeast Spt10 protein is mediated through chromatin structure and the histone UAS elements. Mol. Cell. Biol. 25, 9127-9137.
- Mendiratta, G., Eriksson, P.R., Shen, C.-H., and Clark, D.J. (2006). The DNA-binding domain of the yeast transcription activator Spt10p includes a zinc finger that is homologous to foamy virus integrase. J. Biol. Chem. 281, 7040-7048.
- Kim, Y., McLaughlin, N., Lindstrom, K., Tsukiyama, T., and Clark, D.J. (2006). Activation of Saccharomyces cerevisiae HIS3 results in Gcn4p-dependent, SWI/SNF-dependent mobilisation of nucleosomes over the entire gene. Mol. Cell. Biol. 26, 8607-8622.
- Tong, W., Kulaeva, O.I., Clark, D.J., and Lutter, L.C. (2006). Topological analysis of plasmid chromatin from yeast and mammalian cells. J. Mol. Biol. 361, 813-822.
- Mendiratta, G., Eriksson, P.R., and Clark, D.J. (2007). Cooperative binding of the yeast Spt10p activator to the histone upstream activating sequences is mediated through an N-terminal dimerisation domain. Nucleic Acids Res. 35, 812-821.
Erectile dysfunction (also known as impotence) is defined as an inability to achieve or sustain an erection that is adequate for sexual intercourse, an inability to ejaculate, or both. According to the National Institutes of Health, approximately 5 percent of men at age 40 and between 15 and 25 percent of men at age 65 experience ED on a long-term basis. For a greater percentage of men, problems with erection and sexual performance are occasional and situational, mainly associated with drinking alcohol, emotional trauma, or physical exhaustion, for example.

Erectile dysfunction is one of the most common sexual problems affecting men. However, it can also be argued that erection problems affect women, because erectile problems have an impact on both people in a sexual relationship. In fact, women make up a significant portion of those seeking treatment for erectile dysfunction, given the depression in men and other effects that also impact their lives and the quality of the sexual relationship.

The obvious symptom of erectile dysfunction is the failure to get and maintain an erection when required. Failure to achieve an erection less than 20 percent of the time is not unusual; however, failure 50 percent of the time or more is a sign of something more serious. A sign that ED has a psychological cause is generally when men can achieve full erections while asleep (known as "nocturnal erections") but experience problems with an erection at other times, such as when they are looking to be sexually active.

How An Erection Works

How do erections work? A simplified explanation is that when a man becomes aroused either physically (e.g., kissing, touching the penis) or mentally (e.g., sexual fantasies), the brain transmits electrical signals to the penis. At that point, a complex sequence of biochemical events takes place involving various enzymes, nitric oxide, and blood flow, resulting in an erection.
If there is an interruption in or problem with this sequence of events, erectile dysfunction is typically the result.

Erectile Dysfunction Causes

What are the causes of ED? Most causes are related to physical or environmental factors, which may include but are not limited to the use of certain medications, the presence of a chronic health problem, poor habits (lifestyle and nutrition), fatigue, injury or damage, exposure to environmental toxins, and poor blood flow to the penis. ED may also be the result of medical conditions such as diabetes, high blood pressure or heart disease, or low testosterone, as well as procedures to treat BPH (enlarged prostate). ED is also a side effect of many of the treatments for prostate cancer. Erectile dysfunction in young men is often the result of vascular disorders such as heart disease, diabetes (juvenile or type 2), side effects of certain drugs and medications, inadequate hormone levels (as in hypogonadism), neurological problems, or severe trauma. Another important cause of ED in young men is psychological impotence.

Psychological Causes of ED

Psychological causes of erectile dysfunction make up about 10 to 20 percent of cases. Depression, anxiety about past sexual performances, fear of not performing sexually, and pressure to perform can all impact whether you achieve and maintain an erection sufficient for intercourse. Basically, if you "think" you cannot perform sexually, chances are this will hinder your sexual performance. Counseling can be effective in overcoming psychological issues, and in fact it is often recommended that both the man and his sexual partner attend counseling for best results. Whether erectile problems are associated with physical or psychological causes, a wide variety of treatments are available. To get an accurate diagnosis of erectile dysfunction, you should consult a knowledgeable healthcare provider with whom you feel comfortable discussing your problems.
Diagnosis of ED involves several steps, including a physical examination; collection of personal, medical, and sexual history; and any tests the doctor deems necessary to help identify the cause of erectile dysfunction. Once a physician has made a diagnosis and identified the underlying cause of a man's erection problems, the best possible treatment options can be explored.

Treatment for ED

Men seeking treatment for erection problems will discover there are many remedies and treatment options from which they and their healthcare provider can choose, depending on the underlying cause of the erectile problem. The success of treatment depends on several factors, including age, the causes, overall health, use of other treatments, and the state of erectile function before any sexual problems started. ED can be treated with a variety of options, including pills and drugs such as sildenafil (Viagra), vardenafil (Levitra, Staxyn), avanafil (Stendra), and tadalafil (Cialis); devices such as vacuum erection devices (penis pumps) and constriction bands; injections or suppositories containing alprostadil (MUSE); penile implants; natural remedies (herbal and nutritional supplements); and counseling. More than one treatment can be used at the same time, but men should always consult their healthcare provider before starting or combining treatments.

Natural Remedies for ED

Traditional treatment may also be supplemented with other natural remedies such as supplements and vitamins and lifestyle changes. Natural supplement remedies may include carnitine, catuaba, ginkgo biloba, ginseng, horny goat weed, maca, muira puama, tongkat ali, pine bark, and tribulus terrestris. There are also foods and other lifestyle changes that promote blood circulation, reduce stress, and aid healthy blood flow, a key component of healthy erections.

What's Wrong With 4-Hour Erections?
Some men actually have the opposite problem with their erections, and it can be a serious condition. The TV ads for the major ED drugs always strongly advise you to "seek medical attention if you have an erection lasting longer than four hours." A serious side effect of drugs and other treatments can be a prolonged erection (known as priapism: an erection lasting longer than four hours). A prolonged erection may also be caused by other treatments such as injections and MUSE suppositories. Priapism is a serious medical condition, because the penis is deprived of oxygen, which damages and destroys erectile tissue. These warnings should be taken very seriously, and medical attention should be sought immediately. If you are on certain medications or treatments, your doctor may give you an information card similar to the Sloan-Kettering Wallet Card for patients using penile injections. This card details the serious nature of priapism and advises the attending medical professional of treatment options and other information.
In England, about three-quarters of men and two-thirds of women appear to be "resistant" to the trend of increasing body mass index (BMI), British researchers reported. The finding may explain why, in some Western countries, the trend toward increasing BMI has slowed, according to Andrew Renehan, MBBCh, PhD, of the University of Manchester in England, and colleagues. And it suggests that targeted approaches to reducing obesity might be more effective than population-wide interventions, Renehan and colleagues argued online in the International Journal of Obesity.

Excess body weight is usually defined as a BMI of at least 25 kg/m2. The prevalence of BMI greater than 25 kg/m2 has been rising for decades, although there is some evidence the trend may be slowing. Renehan and colleagues used a mixture modeling method, which can identify subpopulations within an overall population, to analyze BMI data collected by the annual cross-sectional Health Surveys for England from 1992 through 2010. The health survey data included BMI information on 164,166 adults, including 76,382 men and 87,773 women, ages 20 through 74.

Over the 19-year study period, the investigators found, the median BMI for men rose from 25.6 kg/m2 in 1992 to 27.5 kg/m2 in 2010. Similarly, in women it rose from 24.5 kg/m2 in 1992 to 26.5 kg/m2 in 2010. But a splines regression analysis showed that 2001 was a pivot year. Before then, median BMI was rising annually by about 0.140 and 0.139 kg/m2 for men and women, respectively. After 2001, the increases were much smaller: 0.038 and 0.055 kg/m2 per year for men and women, respectively. In both sexes, the test for difference was significant (P<0.0001).

The mixture model analysis identified two subpopulations, Renehan and colleagues reported. Among men, 23.5% were in the "high-BMI" subgroup, whose members continued to get fatter over time, while the remaining 76.5% had only minimal increases in BMI. Among women the proportions were 33.7% and 66.3%.
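The core idea of the mixture-model analysis — that one observed BMI distribution can be decomposed into distinct subpopulations — can be sketched with a toy two-component Gaussian mixture fitted by expectation-maximization. This is only an illustration of the concept: the synthetic data, component parameters, and group sizes below are invented, and the study itself used a more sophisticated latent class analysis of skewed BMI distributions across repeated surveys, not this simple model.

```python
import math
import random

def em_two_gaussians(data, iters=200):
    """Fit a two-component 1-D Gaussian mixture by EM.
    Returns (weights, means, standard deviations)."""
    data = sorted(data)
    n = len(data)
    # crude initialisation: seed the component means at the quartiles
    mu = [data[n // 4], data[3 * n // 4]]
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
                 for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means, and standard deviations
        for k in (0, 1):
            rk = sum(r[k] for r in resp)
            w[k] = rk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / rk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / rk
            sd[k] = max(math.sqrt(var), 1e-3)
    return w, mu, sd

# synthetic "BMI" sample: a stable majority plus a heavier minority (made-up numbers)
random.seed(0)
data = ([random.gauss(25.0, 1.5) for _ in range(700)]    # ~"resistant" group
        + [random.gauss(33.0, 2.0) for _ in range(300)]) # ~"high-BMI" group
w, mu, sd = em_two_gaussians(data)
print("weights:", [round(x, 2) for x in w], "means:", [round(x, 1) for x in mu])
```

Run on this synthetic sample, EM recovers component means near 25 and 33 and a minority weight near 0.3, mirroring (in spirit only) how the study separated a majority "resistant" subgroup from a smaller subgroup whose BMI keeps rising.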
In the high-BMI group, the observed increases in BMI would have equated to weight gains of between 0.6 and 1.02 kg/year for a man of average height (1.83 m, or about 6'). Similarly, the BMI increases would have led to weight gains of between 0.27 and 0.62 kg/year for a woman of average height (1.63 m, or about 5'4"). On the other hand, men and women of average height in the normal-BMI group would have gained about 0.16 and 0.08 kg/year, respectively, Renehan and colleagues reported.

The researchers cautioned that the surveys captured BMI data for only about 63% of participants, raising the possibility of response bias. They also noted that the study was not a true longitudinal cohort; rather, modeling was based on panel data from different individuals every year. Also, the prevalence of obesity "shows significant variation by ethnicity," but their study population was almost entirely Caucasian.

Primary source: International Journal of Obesity
Source reference: Sperrin M, et al "Slowing down of adult body mass index trend increases in England: A latent class analysis of cross-sectional surveys (1992-2010)" Int J Obesity 2013; DOI: 10.1038/ijo.2013.161.
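The conversion from an annual BMI change to an annual weight change used above follows directly from BMI = weight / height²: at a fixed height, Δweight = ΔBMI × height². A quick sketch (the 0.3 kg/m² figure is illustrative, chosen to land near the article's upper bound for men):

```python
def weight_change(delta_bmi, height_m):
    """Convert a BMI change (kg/m2) into a weight change (kg) at a fixed height."""
    return delta_bmi * height_m ** 2

# a BMI rise of 0.3 kg/m2 per year at the average male height of 1.83 m
print(round(weight_change(0.3, 1.83), 2))  # 1.0 kg/year, near the reported 1.02 upper bound
```

The same function applied at 1.63 m shows why the equivalent weight gains quoted for women are smaller for any given BMI change.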
Pyridine alkaloids are similar to piperidine alkaloids except that their heterocyclic, nitrogen-containing ring nucleus is unsaturated. Two toxic alkaloids will be discussed here: the pyridine alkaloid nicotine, and the piperidine-pyridine alkaloid anabasine. Both of these are found in plants in the Nicotiana spp., which includes cultivated, wild, and tree tobacco. Domestic tobacco (N. tabacum) is widely cultivated throughout the southeastern United States. Wild tobaccos (N. attenuata and N. trigonophylla) are upright, leafy evergreen plants found in sandy, arid regions of the western US. Tree tobacco (N. glauca) is similar in appearance but can grow to be a small tree and is found at low elevations of Arizona and California.

Tobacco has large, simple, alternate, bright green, often sticky leaves. Its stems are often sticky and hairy. Tobacco flowers are organized in panicles. They have a tubular shape with 5 fused petals that flare at the mouth into 5 distinct lobes. They are fragrant and range in color from white to a very light pink, purple, or yellow. More information describing them is available under the listing for Nicotiana tabacum, Tobacco, in the Canadian Poisonous Plants Information System, courtesy of Derek B. Munro.

- Nicotine acutely affects the nervous system by blocking autonomic ganglia and neuromuscular junctions. Livestock progress from:
  - shaking and twitching
  - rapid breathing
  - weakness and prostration
  - descending paralysis of the central nervous system
  - to death by respiratory failure.
- Research indicates that anabasine is a teratogenic agent but nicotine is not.

Poisoning due to consumption of tobacco leaves and stalks has been documented in cattle, horses, sheep, and swine, as well as dogs and even humans (after consuming the leaves as boiled greens). Nicotine was a popular old-time wormer and insecticide that occasionally poisoned livestock as well as its intended target.
Swine will readily eat the soft pith of tobacco stalks, and extreme care must be taken to keep them from gleaning tobacco fields or discarded stalks. Deformed offspring due to ingestion of the anabasine alkaloid in tobacco have been documented in cattle, sheep, and swine. These deformities are clinically the same as those caused by maternal consumption of lupine or poison hemlock (carpal flexure, cleft palate, arthrogryposis of the forelimbs, and curvature of the spine). Wild and cultivated tobaccos contain some anabasine; however, about 99% of the total alkaloid content of tree tobacco is anabasine.
The diagnosis of myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) is based on the patient's history, pattern of symptoms, and the exclusion of other fatiguing illnesses. A symptom-based diagnosis can be made with published criteria. This primer uses the 2003 Canadian clinical case definition for ME/CFS (see worksheet in the original guideline document), because of its emphasis on clearly described core symptoms of the illness. The 1994 Fukuda criteria for CFS (Appendix A in the original guideline document) are primarily used for research purposes, although they may be required for disability determinations in the US and elsewhere. The newly published 2011 International Consensus Criteria for ME are not yet in general use. No specific diagnostic laboratory test is currently available for ME/CFS, although potential biomarkers are under investigation. The diagnostic criteria for the 2003 case definition are listed in the clinical worksheet in the original guideline document and can be copied and used for patient diagnosis. The second page of the worksheet includes diseases which must be excluded or fully treated before a diagnosis of ME/CFS can be established. A number of non-exclusionary co-morbid entities which commonly co-exist with ME/CFS are also listed. Patients with ME/CFS may have many symptoms in addition to those listed in the case definition. A thorough medical and social history is essential for accurate diagnosis. Obtaining a succinct and coherent history within one visit may not be possible given the cognitive difficulties in some patients. The information gathered should include pre-illness functioning (education, job performance, social and family relationships) and current living circumstances (daily activities, stressors, major life changes, and support sources). Assessment of functioning will reveal the significant life changes experienced by the patient as a result of the illness. 
A review of previous medical records, reports, and lab tests supplied by the patient may also provide useful information. Physical findings are often subtle and may not be clearly evident. Patients may look pale and puffy with suborbital dark shadows or shiners. Examination of the patient's pharynx may show non-exudative pharyngitis (often referred to as "crimson crescents"). Cervical and axillary lymph nodes may be palpable and tender. Some patients have demonstrable orthostatic intolerance with neurally mediated hypotension or postural orthostatic tachycardia syndrome, characterized by lowered blood pressure and/or a tachycardia on prolonged standing. This may be associated with dependent rubor in the feet and pallor of the hands. A neurological examination may reveal a positive Romberg test or positive tandem stance test. If widespread pain is reported, a concurrent diagnosis of fibromyalgia should be considered and confirmed with a tender point examination. A basic laboratory investigation (Table 1, below) should be followed with more specific tests (Table 2, below) depending on particular symptoms. For example, an electrocardiogram (EKG/ECG) should be performed if chest pain is present, a chest x-ray obtained for cough, and testing for celiac disease if gastrointestinal symptoms are reported. (An endoscopy is recommended if symptoms are severe.) Results of routine tests in patients with ME/CFS are usually within the normal range even during severe relapses. If abnormalities are found (e.g., elevated erythrocyte sedimentation rate [ESR]), other diagnoses may be considered. Specific tests from Table 2 (below) may show low morning cortisol, elevated antinuclear antibody (ANA), and/or immunoglobulin abnormalities. In addition, vitamin D levels are often low, which would suggest bone density testing for osteoporosis. Any abnormal finding warrants further investigation to exclude other diseases. 
Research studies have reported a number of immune, neuroendocrine and brain abnormalities in patients with ME/CFS, but the clinical value of expensive and elaborate tests for these abnormalities has not been established.

Table 1. Investigation of ME/CFS: Routine Laboratory Testing
- Full blood count and differential
- Erythrocyte sedimentation rate
- Electrolytes: sodium, potassium, chloride, bicarbonate
- Fasting glucose
- C-reactive protein
- Liver function: bilirubin, alkaline phosphatase (ALP), gamma glutamyl transaminase (GGT), alanine transaminase (ALT), aspartate transaminase (AST), albumin/globulin ratio
- Renal function: urea, creatinine, glomerular filtration rate (eGFR)
- Thyroid function: thyroid stimulating hormone (TSH), free thyroxine (free T4)
- Iron studies: serum iron, iron-binding capacity, ferritin
- Vitamin B12 and serum folate
- Creatine kinase (CK)
- 25-hydroxy-cholecalciferol (vitamin D)

Table 2. Investigation of ME/CFS: Tests to Be Considered Depending on Symptoms
- Antinuclear antibodies
- Chest x-ray
- Electrocardiogram (EKG/ECG)
- Endoscopy: gastroscopy, colonoscopy, cystoscopy
- Estradiol and follicle-stimulating hormone
- Gastric emptying study
- Gliadin and endomysial antibodies
- Infectious disease screen if human immunodeficiency virus (HIV), hepatitis, Lyme disease, Q fever, etc. are possible
- Microbiology: stools, throat, urine, sputum, genital
- Morning cortisol
- Magnetic resonance imaging (MRI) if multiple sclerosis suspected
- Overnight polysomnogram and possibly multiple sleep latency test
- Renin/aldosterone ratio
- Rheumatoid factors
- Serum amylase
- Short adrenocorticotrophic hormone (ACTH) challenge test or Cortrosyn stimulation test
- Tilt table test for autonomic function

Although the symptoms of a number of diseases can mimic ME/CFS, the presence of post-exertional symptom exacerbation, a key feature of the illness, increases the likelihood of ME/CFS as the correct diagnosis.
Table 3 in the original guideline document lists a number of medical conditions that need to be considered in the differential diagnosis.

Distinguishing ME/CFS from Depressive and Anxiety Disorders

Symptoms of depression or anxiety may result from or precede the illness as they do with other chronic medical conditions. Distinguishing depressive and anxiety disorders from ME/CFS may present a challenge. Depressive symptoms, including problems with sleep, cognition, and initiating activity as well as fatigue and appetite/weight changes may overlap with ME/CFS. Differential diagnosis is based on the identification of ME/CFS features—in particular, post-exertional malaise (PEM)—as well as autonomic, endocrine or immune symptoms (see Diagnostic Worksheet in the original guideline document). PEM is the exacerbation of symptoms following minimal physical or mental activity that can persist for hours, days or even weeks. For instance, a short walk may trigger a long-lasting symptom flare-up. By contrast, patients with major depression generally feel better after increased activity, exercise or focused mental effort. Furthermore, patients with ME/CFS (with or without co-morbid depression) generally have strong desires to be more active, but are unable to do so. In clinical depression, by comparison, there is often a pervasive loss of interest, motivation and/or enjoyment. Finally, diurnal fluctuations in ME/CFS tend to show symptom-worsening in the afternoon while in major depressive disorder more severe symptoms often occur in the morning.

Some patients with ME/CFS do develop major depressive disorder which is characterized by low mood (loss of interest is less likely) and additional symptoms such as feelings of worthlessness or guilt and suicidal ideation. The practitioner should conduct a suicide evaluation for all patients who appear to be clinically depressed or highly stressed.
Secondary anxiety can arise with the crisis of illness onset and persist as the illness affects the ability to work and family relationships. Secondary anxiety may be distinguished from generalized anxiety disorder (GAD). GAD is characterized by excessive worry and assorted physical symptoms. By comparison, panic disorder features unbidden panic attacks. Symptoms of ME/CFS not found in GAD and panic disorders include post-exertional malaise as well as autonomic, endocrine or immune symptoms (see Diagnostic Worksheet in the original guideline document). In addition, patients with primary anxiety disorders generally feel better after exercise whereas exercise worsens symptoms in ME/CFS. Finally, panic disorder is situational and each episode is short-lived, whereas ME/CFS persists for years.

Exclusionary Medical Conditions (Table 3 in the original guideline document)

ME/CFS is not diagnosed if the patient has an identifiable medical or psychiatric condition that could plausibly account for the presenting symptoms. However, if ME/CFS symptoms persist after adequate treatment of the exclusionary illness, then a diagnosis of ME/CFS can subsequently be made.

Co-existing Medical Conditions (Table 4 in the original guideline document)

A number of other (non-exclusionary) conditions may co-exist with ME/CFS. A listing of these conditions appears in Table 4 in the original guideline document and includes fibromyalgia, multiple chemical sensitivity, irritable bowel syndrome, irritable bladder syndrome, interstitial cystitis, temporomandibular joint syndrome, migraine headache, allergies, thyroiditis, Sicca syndrome, Raynaud's phenomenon, and prolapsed mitral valve. These conditions should be investigated in their own right and treated appropriately.

The onset of ME/CFS impacts the individual's ability to work, to sustain family and social relationships, to provide basic self-care, and to maintain self-identity. These sudden losses may trigger confusion and crisis.
Yet patients often receive little benefit from consultations with health practitioners due to (1) physician skepticism of individuals with ME/CFS who may not look ill and show normal findings on standard physical examinations and laboratory tests; and (2) the absence of a clear standard of care for these patients. These obstacles, in addition to significant illness limitations and unsupportive family and friends, may lead to patients feeling demoralized, frustrated and angry. This section provides recommendations primarily for ambulatory patients who are able to attend office visits. Special considerations are offered in the "Related Clinical Concerns" section for the perhaps 25% of patients with ME/CFS who are bedridden, house-bound, or wheelchair dependent.

Approach to Treatment

Given the absence of curative treatments, clinical care of ME/CFS involves treating symptoms and guiding patient self-management. The goal is symptom reduction and quality of life improvement based on a collaborative therapeutic relationship. Although not all patients will improve, the potential for improvement, which ranges from modest to substantial, should be clearly communicated to the patient. Acknowledging that the patient's illness is real will facilitate a therapeutic alliance and the development of an effective management plan. Thus, patients may be greatly relieved to hear that their bewildering symptoms have a diagnostic label—an important validation of their concerns. The practitioner can also assure the patient that normal findings on diagnostic tests do not negate the reality of the illness.

Once the diagnosis is established, a systems review will reveal the patient's most troublesome symptoms and concerns. These may include several of the following: debilitating fatigue and activity limitations; sleep disturbance; pain; cognitive problems; emotional distress; orthostatic intolerance; gastro-intestinal or urological symptoms; gynecological problems.
The clinical management plan in this section focuses on both non-pharmacologic interventions and medications. Written educational material for patients can also be helpful because they may have short-term memory problems. To improve clinical management, the following are suggested: - A patient support person to take down medical advice or a recording of the visit for later patient review - Obtaining a written list of the patient's most troublesome symptoms - Agreement with the patient to focus on a limited number of selected symptom(s) in order to avoid overloading the patient - Medication doses that start low and go slow - Ongoing assessments of the patient over multiple visits The order of ME/CFS symptoms presented below starts with those considered most treatable. The non-restorative sleep in ME/CFS indicates waking up feeling unrefreshed or feeling as tired as the night before. The unrefreshed feeling may be associated with morning stiffness or soreness and mental fogginess that may last for an hour or two. Disturbed sleep patterns include difficulty falling or staying asleep, frequent awakenings, or coma-like sleep. Hypersomnia may occur in the early stages of the illness. Many patients have a diagnosable sleep disorder that may require consultation with a sleep disorder specialist. 
The following sleep hygiene suggestions may be helpful to patients:
- An hour of relaxing wind-down activities prior to bed time
- Regular sleep and wake times
- Pacing activities during the day to avoid symptom exacerbation that may interfere with sleep
- Avoiding naps after 3 pm and substituting rest
- Spending some morning time under full-spectrum light, either outdoors, by a window, or under artificial light
- Reducing or eliminating caffeine-containing beverages and foods
- Using earplugs or soundproofing for noise, or sleeping in a different bedroom without (a snoring) partner
- Ensuring the bedroom is very dark by using a sleep mask or black-out curtains
- If unable to sleep, getting up, moving to another room, and doing a quiet activity (reading, soft music, or relaxation tapes; not a computer, iPad, or TV) until sleepy
- Using the bed for sleeping and sex only
- Avoiding attempts to force sleep

Medications (Table 5 in the original guideline document)

All sedating medications must be safe for long-term use and should be started at a low dose. The medication should be taken early enough so that sedation takes effect around bed time. Patients may initially feel thick-headed in the morning, but this usually improves as benefits become apparent. The risk of side effects, and of drug combinations that can produce serotonin syndrome, should be explained. In some patients, tolerance to a medication may develop; rotating medications may then be more effective than using a single drug.

Pain

Persistent pain in ME/CFS, whether widespread or localized, may range from mild to severe. In some cases the patient may feel pain from minimal stimulation, such as a gentle touch. Headaches may be particularly troublesome and are often migrainous. If chronic widespread pain is a major complaint, a fibromyalgia evaluation may be indicated.
Helpful non-pharmacologic interventions for pain may include pacing of activity, physical therapy, stretches, massage, acupuncture, hydrotherapy, chiropractic, yoga, Tai Chi, and meditation (relaxation response). Also consider hot or cold packs, warm baths or balneotherapy, muscle liniments, electrical massagers, TENS (transcutaneous electrical nerve stimulation), and rTMS (repetitive transcranial magnetic stimulation). These methods can be effective singly or in various combinations to reduce tension and pain. However, these interventions may also be poorly tolerated, inaccessible, or prohibitively costly. It is important to treat localized pain, e.g., arthritis or migraine, because it can amplify the generalized pain of ME/CFS.

Medications (Table 6 in the original guideline document)

For the treatment of pain in ME/CFS, the lowest effective dose should be prescribed and increased cautiously. Patients with severe pain may need stronger analgesics, including narcotics. Although opiates are generally discouraged for the treatment of chronic pain states, they may be beneficial in some cases; their use requires a clear rationale with documentation. Providers should consider referring such patients to a pain specialist.

Fatigue and Post-exertional Malaise

Patients with ME/CFS experience abnormal fatigue that is both more intense and qualitatively different from normal tiredness. The fatigue in ME/CFS may take several different forms: post-exertional fatigue (abnormal exhaustion or muscle weakness following minor physical activity), persistent flu-like feelings, brain fog (mental exhaustion from everyday cognitive effort), and wired fatigue (feeling over-stimulated when very tired). The form of fatigue that is a core feature of ME/CFS is post-exertional malaise (PEM): the exacerbation of fatigue and other symptoms (e.g., cognitive difficulties, sore throat, insomnia) following minimal physical or mental activity, which can persist for hours, days, or even weeks.
PEM may be related to abnormal energy metabolism. Energy for physical activities is produced through two physiological systems: (1) anaerobic metabolism, the predominant metabolic pathway during the first 90 seconds of exercise; and (2) the aerobic/oxidative system, the primary source of energy during physical activities lasting longer than 90 seconds. Because most daily physical activities exceed 90 seconds, the aerobic system is typically used to produce the energy-releasing nucleotide adenosine triphosphate (ATP) at a steady rate in order to perform activities of daily living. In patients with ME/CFS, aerobic metabolism may be impaired. Thus, any physical exertion exceeding 90 seconds may draw on a dysfunctional aerobic system, which leads to increased reliance on anaerobic metabolism. This imbalance may be linked to the prolonged symptoms and functional deficits associated with PEM. Simple and inexpensive physiological measures, such as heart rate monitoring, may be used to ensure that real-time cardiovascular responses remain below the threshold of aerobic impairment.

Managing Post-exertional Symptoms: Pacing and the Energy Envelope

Fatigue improvement can be facilitated by advising patients to pace or "spread out" activities so that ongoing exertion remains below the threshold of post-exertional symptom flare-ups (see Figure 2 in the original guideline document). For instance, rather than completing housework in one uninterrupted push, tasks may be divided into smaller pieces with rest intervals interspersed. Remaining as active as possible while avoiding fatigue-worsening over-exertion delineates an optimal zone of activity termed the "energy envelope." An activity log (Appendix D in the original guideline document) may be helpful to identify personal activities that stay within or exceed that optimal range.
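The heart rate monitoring mentioned above can be illustrated with a small sketch. This is purely illustrative and not medical advice: the 55% fraction and the (220 - age) formula below are common community heuristics, not values taken from this guideline, and a measured anaerobic threshold from exercise testing, where available, would supersede them.

```python
# Illustrative sketch of heart-rate-based pacing (not medical advice).
# ASSUMPTION: the ceiling is estimated as 55% of age-predicted maximum
# heart rate (220 - age), a heuristic sometimes used in the ME/CFS
# community; a measured anaerobic threshold, when available, is better.

def pacing_ceiling(age: int, fraction: float = 0.55) -> int:
    """Rough heart-rate ceiling below the presumed aerobic impairment threshold."""
    return round((220 - age) * fraction)

def over_ceiling(samples: list[int], ceiling: int) -> list[int]:
    """Return the monitored heart-rate samples that exceeded the ceiling."""
    return [bpm for bpm in samples if bpm > ceiling]

ceiling = pacing_ceiling(age=40)  # -> 99 bpm under these assumptions
flagged = over_ceiling([82, 95, 104, 88, 101], ceiling)
print(ceiling, flagged)  # 99 [104, 101]
```

In practice, a wearable heart-rate monitor with an audible alarm at the chosen ceiling plays the role of `over_ceiling`, alerting the patient in real time rather than after the fact.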
Activity and Exercise

To stay within the energy envelope, some patients need to decrease their activity while others need to carefully and selectively do more. Many individuals with ME/CFS mistakenly over-exercise in an attempt to reduce fatigue and other symptoms. In addition, well-meaning healthcare providers may recommend exercise for patients with ME/CFS using guidelines intended for healthy people. Such guidelines are generally inappropriate, and often counterproductive, in this illness. Thus, practitioners may push patients too hard, and patients may push themselves into activities that worsen symptoms. This symptom-worsening may be linked to underlying aerobic impairment. Misdirected exercise usually results in post-exertional symptom flare-ups or relapses, which discourage further exercise. In contrast, the optimal amount of individualized exercise is usually well below standard recommendations for healthy individuals, avoids post-exertional symptoms, and promotes improvement.

An individualized activity plan should be developed in collaboration with the patient. Consultation with rehabilitation professionals knowledgeable about ME/CFS may also be desirable. Any exercise or activity program should seek to minimize the negative effects of exertion on impaired aerobic function. Exercise should also not take priority over activities of daily living. Initially, the patient's degree of activity limitation can be estimated using a functional status rating such as the Functional Capacity Scale (Appendix C in the original guideline document). This 10-point scale ranges from 10, for symptom-free individuals, to 1, for patients who are bedridden and unable to perform activities of daily living.

Severely Ill Patients (Functional Capacity Rating 1-3; Appendix C in the original guideline document)

Homebound and bedbound patients may benefit from in-home services that provide assisted range-of-motion and strengthening exercises.
Exercise lying down should be advised when exercise standing or sitting is poorly tolerated. Initially, interval training should begin with gentle stretching to improve mobility, using intervals of 90 seconds or less. The patient should rest between intervals until complete recovery has occurred. Additional intervals can be added when the stretching exercises do not trigger post-exertional symptoms. Then resistance training can begin (functional capacity rating 4-5) with elastic bands or light weights. As endurance improves, short-duration interval training such as leisurely-paced walking can be added.

Higher Functioning Patients (Functional Capacity Rating 5-9; Appendix C in the original guideline document)

Interval training can begin with leisurely paced walking, swimming, or pedaling on an exercise cycle. The initial duration may vary from 5-15 minutes a day, depending on how much the patient can do without provoking symptom flares. These higher functioning patients may also benefit from adaptive yoga and Tai Chi.

Medications for Fatigue and Post-exertional Symptoms (Table 7 in the original guideline document)

Due to prescribing difficulties, cost, and limited effectiveness, medications for fatigue may need to be reserved for functional assistance at special, but potentially exhausting, events in the patient's life (e.g., a wedding or a concert). If the medication is effective, patients should still avoid exceeding their individual activity limit, as this is likely to provoke symptom-worsening. Thus, careful monitoring of activity is recommended.
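The interval scheme described above for severely ill patients (activity bouts of 90 seconds or less, with rest until full recovery between bouts) can be sketched as a toy scheduler. The function name and the fixed bout cap are illustrative assumptions, not part of the guideline; real rest durations are set by the patient's recovery, not by a timer.

```python
# Toy sketch of the interval-pacing idea described above (illustrative only).
# Activity bouts are capped at 90 seconds to stay within the window where
# anaerobic metabolism predominates; rest follows each bout until recovery.

def interval_plan(total_activity_s: int, bout_s: int = 90) -> list[tuple[str, int]]:
    """Split a desired amount of activity into bouts of at most `bout_s`
    seconds, inserting a rest marker (duration set by recovery) after each."""
    if bout_s > 90:
        raise ValueError("bouts should not exceed 90 seconds in this scheme")
    plan = []
    remaining = total_activity_s
    while remaining > 0:
        bout = min(bout_s, remaining)
        plan.append(("activity", bout))
        plan.append(("rest-until-recovered", 0))  # duration decided by the patient
        remaining -= bout
    return plan

# 300 s of total activity -> four bouts of 90 + 90 + 90 + 30 seconds
print(interval_plan(300))
```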
Cognitive Problems

The patient's cognitive difficulties can be managed to some extent with the following suggestions:
- Using a "memory book" to write things down in one place (and attempting not to lose the book)
- Developing habits, such as always leaving keys or glasses in the same place or always parking in the same spot
- When possible, avoiding situations involving multisensory bombardment and fast-paced activity
- Limiting the duration and intensity of cognitive efforts (a form of pacing)
- Limiting or stopping cognitive efforts when cognitive symptoms flare up

Medications for Cognitive Problems (Table 8 in the original guideline document)

Stimulants seem to work best when the patient describes excessive "sleepiness" during the day, as opposed to "tiredness." Sleepiness is suggested by a score of >10 on the Epworth sleepiness scale, which may warrant a workup for primary sleep disorders.

Depression, Anxiety and Distress

The prevalence of clinical depression and/or anxiety in patients with ME/CFS is about 40%, similar to the rates of psychiatric symptoms in other chronic conditions such as arthritis. Patients may develop depression, anxiety, or stress reactions secondary to the illness, or may have a history of depression/anxiety prior to illness onset. The practitioner should conduct a suicide evaluation for all patients who appear to be clinically depressed or highly stressed.

Managing Depression, Anxiety and Distress: Support, Coping Skills and Pleasant Experiences

These types of interventions may be helpful:
- Educating family members about the illness so that they can provide useful assistance and support
- Identifying and scheduling pleasurable low-effort activities (music, recorded relaxation, observing nature), which can generate well-being; reduce symptoms of anxiety, depression, and distress; and lessen fatigue as well
- Developing coping skills, such as cognitive strategies to reduce anger, worry, and catastrophizing, as well as skills to improve tolerance of this difficult illness.
Good resources are available to guide ME/CFS patients in developing effective coping skills.
- Referral, if needed, to supportive counseling, preferably with a professional familiar with ME/CFS
- Referral to a ME/CFS support group or volunteer services. Successful support groups have effective leadership and positive programming that avoids simply exchanging complaints

Medications for Depression

For patients who are clinically depressed, medication can sometimes improve mood and reduce fatigue. Medications should be started at a low dose, and improvement may take several weeks. Possible side effects of antidepressants, notably sedation and orthostatic hypotension, may worsen fatigue and autonomic lability in some patients. Drug choice is often based on the side-effect profile and the patient's response.

Cognitive Behavioral Therapy (CBT)

CBT is a much publicized and debated psychotherapeutic intervention for ME/CFS that addresses the interactions between thinking, feeling, and behavior. It focuses on current problems and follows a structured style of intervention that usually includes a graded activity program. CBT may improve coping strategies and/or assist in rehabilitation, but the premise that cognitive therapy (e.g., changing "illness beliefs") and graded activity can "reverse" or cure the illness is not supported by post-intervention outcome data. In routine medical practice, CBT has not yielded clinically significant outcomes for patients with ME/CFS. Furthermore, the scarcity of CBT providers who specialize in this illness (psychologists, social workers, or nurses) means that CBT may not be an option for many patients with ME/CFS. More detailed information on CBT protocols and the controversy surrounding their application in ME/CFS is presented elsewhere.
Management of Related Conditions

See section 5:8 of the original guideline for information on managing the following related conditions:
- Orthostatic intolerance (OI) and cardiovascular symptoms
- Gastrointestinal problems
- Urinary problems
- Multiple chemical sensitivity (MCS)
- Infections and immunological factors

Diet

Although no evidence-based special diet is available for ME/CFS, dietary programs are popular with many patients. Good nutrition with a sensible, balanced diet is advisable. Excesses of specific foods, as well as rich, fatty foods, sugars, and caffeine, are best avoided. Eating small meals with snacking in between can be helpful. To help counteract the risk of osteoporosis, dairy products, a dietary source of calcium and vitamin D, should be incorporated in the diet unless lactose intolerance or an allergy to milk and milk products is present. In addition, because alcohol intolerance (causing sedation) may be reported, alcohol use should be minimized or avoided.

Some individuals who attribute their ME/CFS to food intolerances will carefully avoid certain foods. Gluten and/or lactose intolerances, not uncommon in ME/CFS, require a gluten- or lactose-free diet. Provided that these intolerances have been excluded, a rotational approach, rather than absolute avoidance, may lessen possible negative reactions to food. Although there is no evidence that patients with ME/CFS suffer from systemic candidiasis, diets intended to combat candidiasis and allergies are quite popular, and many patients believe that they are helpful. Finally, some patients with gastrointestinal symptoms have reported benefit from a "leaky gut diet" in combination with L-glutamine or butyrate.

Vitamins and Minerals

Patients with ME/CFS need to ensure that they obtain at least the recommended daily allowance (RDA) of vitamins and minerals. This is not always possible using dietary sources.
A suitable multivitamin and a separate multi-mineral preparation will ensure that at least the RDA of vitamins and minerals is obtained in the correct proportions. Because vitamin D deficiency is often found in ME/CFS, additional vitamin D may be necessary to achieve an optimal level, which may reduce the risk of osteoporosis, cancer, heart disease, stroke, and other illnesses.

Vitamin B12 and B-Complex

Given that cerebrospinal fluid levels of vitamin B12 may be depleted in some patients with ME/CFS, a trial of a weekly injection of hydroxocobalamin 1000 μg for six weeks (or perhaps longer) may be helpful. There are no reports of serious risk or side effects, despite the high blood levels achieved. A supplement of B-complex will avoid concurrent B vitamin deficiency.

Essential Fatty Acids

In some studies, essential fatty acid supplementation in ME/CFS has yielded symptom improvement and greater shifts towards normal cell fatty acid concentrations in treated patients. Eicosapentaenoic acid, an essential fatty acid, is a major component of omega-3 fish oil and has been beneficial in reducing symptoms for some patients. Additional vitamin and mineral cofactors, including biotin, niacin, folic acid, vitamin B6, vitamin B12, vitamin C, selenium, zinc, and magnesium, may be supportive in conjunction with essential fatty acid supplementation. Inadequate zinc intake may contribute to decreased function of natural killer cells and cell-mediated immune dysfunction. A multi-mineral preparation may ensure the correct balance between zinc and copper.

Herbal/Natural Remedies

Patient use of herbal/natural remedies should be identified to reveal likely side effects and avoid potential conflicts with prescribed medications. Patients may not know that "natural" does not necessarily mean "better" or "safe." As with medication, small doses should be used initially, with warnings about adverse reactions.
Some herbs with pharmacological effects have been traditionally incorporated in the diet, e.g., herbal teas of peppermint, ginger, or chamomile for gastrointestinal symptoms or for improving sleep. Warnings are appropriate for several largely unregulated products. Glyconutrients, olive leaf, and pycnogenol (pine bark) have been touted as potential cures for ME/CFS, but neither clinical observation nor published evidence supports their use. Products claiming to be immune system boosters have not been shown, in the medical literature, to reduce symptoms in ME/CFS patients. Many of the so-called adrenal support concoctions contain steroids, which can have adverse effects in those who do not need them, especially when stopped suddenly. Steroids should only be prescribed by a physician.

Alternative and Complementary Approaches

Patients with ME/CFS often try costly alternative treatments in search of a cure. A review of a number of studies revealed generally poor methodologies and little evidence for more than modest effects. Equivocal evidence was found for homeopathy and biofeedback. Acupuncture, massage, and chiropractic are relatively established treatments for pain, and thus are covered in the pain section. More detailed information may be found in recent reviews.

Follow-up

Patients with ME/CFS require regular reassessment and follow-up to manage their most disabling symptoms and to reconfirm or change the diagnosis. Although patients may assume that new symptoms are part of ME/CFS, other illnesses with symptoms not characteristic of ME/CFS can develop and should be investigated. Any patient who experiences a worsening of symptoms or the onset of new and/or additional symptoms should be encouraged to return to the physician's office. Additionally, an annual follow-up should be undertaken that includes a review of symptoms, a physical exam, a functional capacity evaluation, routine screening (Table 1, above), and a review of the patient's management/treatment plan.
Related Clinical Concerns

Low Functioning Patients: Special Considerations

Perhaps one in four patients is so disabled as to be confined to a bed or chair, rarely leaving home. These individuals are unable to attend regular office visits. Assessments also reveal greater symptom severity, more comorbidities, limited mental activity, and very low levels of physical activity. A small minority of these patients may be totally bedbound and report constant pain as well as an inability to tolerate movement, light, noise, or certain scents or chemicals (including prescribed drugs). Home-based caregivers are essential to support patients with severe ME/CFS and to participate in their ongoing management plan. Caregivers can also be subject to considerable stress in serving the needs of the patient.

These suggestions may be helpful for this severely ill group:
- Recommend a very quiet environment.
- Limit mental activity (such as reading, writing, computing, or concentrating), because mental exertion is as exhausting as physical activity in many of these patients.
- Minimize medications and supplements to those absolutely necessary.
- Prescribe medication in very low doses and titrate slowly, as tolerated.
- Proceed very slowly with any activity, perhaps starting with range-of-motion exercises lying down, followed by range of motion with light resistance, and then very light aerobic activity.

In addition, low functioning patients may require more services and support with respect to:
- Follow-up (perhaps via home visits, telephone contacts, or online communication)
- Social support, including home health services and aides
- Stress management and grief/loss counseling
- Modest expectations for themselves and from others
- Balanced nutrition and healthy foods (provided and prepared by caretakers)

Pregnancy

Most mothers with ME/CFS have an uneventful pregnancy and deliver a normal child.
During pregnancy, ME/CFS symptoms may improve for some patients, remain the same for others, and worsen for still others. In many patients, symptoms return to pre-pregnancy levels within weeks of delivery. Pregnancy is not recommended in the early stages of ME/CFS, because the patient may be very ill and the diagnosis uncertain. Some medications for ME/CFS can damage a growing fetus, especially in the early stages of pregnancy, and the effects of most herbal preparations on the fetus are unknown. Healthcare providers should advise which ongoing medications, given their risks to the fetus, should be stopped before a planned pregnancy. The patient can then determine whether she can cope with possibly worsened ME/CFS symptoms without the medications. Some essential medications may need to be continued in smaller doses.

Obstetric problems, which may be more prevalent in women with ME/CFS, include lowered fertility, miscarriage, severe vomiting in pregnancy, exhaustion in labor, delayed post-partum recovery, and post-partum depression. If labor is prolonged, surgical delivery of the child is recommended. Lactation is not contraindicated. The advantages and disadvantages of breast-feeding should be discussed with the mother. Milk can be expressed for night feedings to allow the mother adequate rest. Child-rearing is the biggest challenge for mothers with ME/CFS, and many require a good support network.

The offspring of mothers with ME/CFS may have a higher risk of developing ME/CFS than the general population. One study showed a 5% risk of developing ME/CFS in childhood or early adult life. Another small study suggests that the offspring may also have an increased risk of developmental delays and learning difficulties.

Gynecological Considerations

ME/CFS and some common gynecological conditions, such as pre-menstrual syndrome and menopause, show a significant overlap of symptoms. These conditions also frequently exacerbate symptoms of ME/CFS, and vice versa.
A small number of scientific studies suggest that several gynecological conditions occur more frequently in women with ME/CFS, some of which may pre-date the onset of the illness. These disorders include: premenstrual syndrome; anovulatory and oligo-ovulatory cycles; low estrogen levels leading to a multitude of central nervous system (CNS) symptoms, loss of libido, and, in later years, osteoporosis; dysmenorrhea; pelvic pain; endometriosis; interstitial cystitis; dyspareunia and vulvodynia; and a history of hysterectomy (for fibroids or ovarian cysts). The investigation and treatment of these conditions should follow standard gynecological practice.

Many peri-menopausal and postmenopausal patients with ME/CFS may benefit from hormone replacement therapy (HRT), and pre-menopausal patients with low estrogen levels may also be helped by it. Estrogen may improve cerebral circulation, benefit cognition, and provide significant relief from hot flashes, insomnia, and fatigue. HRT also reduces the risk of osteoporosis. Some women may be more responsive to a progesterone-only regimen, such as a progesterone-only pill or a progesterone-impregnated intra-uterine device. These approaches also address contraception, which may be vital for women with ME/CFS. Oral contraceptives may help patients who suffer from menstrual pain, particularly if bleeding is heavy. Hormonal therapy should be limited in duration because of the increased risk of breast, ovarian, and uterine cancer associated with HRT. Some women prefer to take "natural" hormones (e.g., phytoestrogens and wild yam products), but it should be pointed out that prospective randomized studies of their clinical effects and potential side effects have not been done.

Children and Adolescents

ME/CFS can occur at any age but is difficult to diagnose under the age of ten. Pediatric management can be especially challenging, as children and adolescents sometimes do not report symptoms and assume tiredness is normal.
In addition, they are often dismissed as lazy or misdiagnosed as having behavioral disorders, school phobia, attention deficit hyperactivity disorder (ADHD), or factitious disorder by proxy. The diagnosis of ME/CFS is often overlooked or delayed, but it can be established using a specific pediatric case definition (Appendix B in the original guideline document), which is based on the Canadian case definition. The diagnosis in children can be made after 3 months of illness. The prevalence of ME/CFS in children and adolescents varies greatly in different studies but, overall, rates appear to be lower than in adults.

Management and treatment of children with ME/CFS is similar to that described above for adults. Any medications should be prescribed with great caution; as with adults, many pediatric patients with ME/CFS respond to much lower than standard doses.

Many children with ME/CFS experience worsening of their school performance. In the USA, children and adolescents with cognitive deficits and physical limitations may qualify for special services under the Individuals with Disabilities Education Act (IDEA), because they are "health impaired." With physician documentation, eligible students can receive an individualized educational plan (IEP). Tutoring at home, correspondence schooling, or home schooling allows students who are debilitated by ME/CFS to pace themselves and reduce symptom flares. When appropriate, a graduated schedule of return to school can be successful in conjunction with school personnel who are willing to work with the child and family. This might involve the child initially attending a single class on a daily basis and gradually increasing the number of classes attended over several weeks or months. To enhance the chances of recovery, competitive sports are best avoided. If the patient is subject to stress-related symptom flare-ups, it may be desirable to limit academic examinations to those deemed essential.
Family counseling may be recommended if family conflicts related to the child's illness are evident. The prognosis for children with ME/CFS is considerably better than for adults, although they may initially be severely ill.

Immunizations

Patients with ME/CFS should consider avoiding all but essential immunizations, particularly with live vaccines, as post-vaccination relapse has been known to occur. Usual medical practice is not to vaccinate a normally healthy person when unwell. However, during a flu epidemic, patients should balance the health hazards of becoming ill against the possibility of symptom-worsening due to immunization.

Blood and Tissue Donation

The American Red Cross requires that blood donors "be healthy," i.e., feel well and be able to perform normal activities. Since people with ME/CFS are not healthy by this definition, they should not donate blood. Furthermore, based on the possible link between ME/CFS and xenotropic murine leukemia virus-related virus (XMRV), a number of national blood banks adopted measures to discourage or prohibit individuals diagnosed with ME/CFS from donating blood.

Recommendations Prior to Surgery

For individuals with ME/CFS approaching surgery, discussion with the surgeon and anesthesiologist/anaesthetist about this illness is important. Issues such as depleted blood volume, orthostatic intolerance, pain control, and sensitivity to anesthetic medications should be addressed. Further recommendations for persons with ME/CFS who are anticipating surgery are given in Appendix E in the original guideline document.
Background: Dementia is a large and growing problem but is often not diagnosed in its earlier stages. Screening and earlier treatment could reduce the burden of suffering of this syndrome.

Purpose: To review the evidence of benefits and harms of screening for and earlier treatment of dementia.

Data Sources: MEDLINE, PsycINFO, EMBASE, the Cochrane Library, experts, and bibliographies of reviews.

Study Selection: The authors developed eight key questions representing a logical chain between screening and improved health outcomes, along with eligibility criteria for admissible evidence for each question. Admissible evidence was obtained by searching the data sources.

Data Extraction: Two reviewers abstracted relevant information using standardized abstraction forms and graded article quality according to U.S. Preventive Services Task Force criteria.

Data Synthesis: No randomized, controlled trial of screening for dementia has been completed. Brief screening tools can detect some persons with early dementia (positive predictive value 50%). Six to 12 months of treatment with cholinesterase inhibitors modestly slows the decline of cognitive and global clinical change scores in some patients with mild to moderate Alzheimer disease. Function is minimally affected, and fewer than 20% of patients stop taking cholinesterase inhibitors because of side effects. Only limited evidence indicates that any other pharmacologic or nonpharmacologic intervention slows decline in persons with early dementia. Although intensive multicomponent caregiver interventions may delay nursing home placement of patients who have caregivers, the relevance of this finding for persons who do not yet have caregivers is uncertain. Other potential benefits and harms of screening have not been studied.

Conclusions: Screening tests can detect undiagnosed dementia.
In persons with mild to moderate clinically detected Alzheimer disease, cholinesterase inhibitors are somewhat effective in slowing cognitive decline. The effect of cholinesterase inhibitors or other treatments on persons with dementia detected by screening is uncertain.