The name Indigenous Design Studio reflects the four core values of the firm: quality, cost-effectiveness, sustainability, and commitment to Native American communities. Indigenous Design Studio was created to provide unique, sustainable, and innovative designs for Native American tribes throughout the country: designs that draw on each community's natural surroundings while preserving its history and culture. Architecture has been a journey through every part of my life; it has brought me to where I am today. I believe a designer should not just be passionate about their designs but should also serve the needs of the people through their history, culture, and environment. My passion for providing great service to Native American communities throughout the country inspired me to create a business that supports the needs of the people. Tamarah R. Begay, Associate AIA, AICAE, LEED AP BD+C Tamarah is a member of the Navajo Nation and was raised near Gallup, New Mexico. She received her Master's degree in Architecture from the University of New Mexico in December 2004. Upon graduation she also completed the National Council of Architectural Registration Boards (NCARB) requirements for the Intern Development Program (IDP) in order to begin her seven-part Architectural Registration Exams (ARE). Tamarah has over 10 years of experience working with Native American tribes on schools, housing, office buildings, cultural centers, and multi-purpose buildings. Her experience includes project management, marketing, proposal writing, specification writing, programming, master planning, design (all phases), interior design, construction documents, and construction administration. Tamarah is one of the founding members of the UNM Student Chapter of the American Indian Council of Architects and Engineers (SC-AICAE). As a professional member of the AICAE, she also serves as a board member.
Tamarah is also an associate member of the American Institute of Architects (AIA), a Certified Construction Document Technician (CDT), and a LEED Accredited Professional. Education: Master of Architecture, University of New Mexico, Albuquerque, New Mexico, 2004. Navajo clans: Tamarah Begay yinishe’ Naakaii Dine’ nish li (Mexican clan) Kinyaa a’ anii basischi (Towering House clan) A’shiini’ da shi cheí (Salt People clan) Hona’gha’ahnii da shi nali (One Walks Around clan). Organizations and Affiliations: American Indian Council of Architects and Engineers (AICAE), LEED Accredited Professional, Associate Member of the American Institute of Architects (AIA), Certified Construction Document Technician (CDT), USGBC-New Mexico Member, AICAE Board Member, 2012 AIA Diversity Council. Recent projects include:
http://www.swbdc.com/about-us/indigenous
We have tried to make our fees as fair and as straightforward as possible. If you have any questions, please get in touch.

Stage 1: This is the first meeting, where we will discuss your plans and objectives. We will discuss different aspects of your goals, whether they concern retirement, protection, inheritance planning, or investments. We will also use this time to explore any questions you may have about us and our advice process. We will cover the cost of this meeting.

Stage 2: Creating a financial plan. We will take the information from Stage 1, along with some base assumptions, and look into the future to see if your goals are likely to be met. We will then sit down with you and go through the report and any action points that may arise. This can often provoke a conversation about what changes can be made and what your priorities are. The cost of this stage is £750, but this can be reduced to £195 if you are willing to complete some forms yourself.

Stage 3: If any of the action points lead to us providing specific advice, there is an advice and implementation fee that depends on the complexity of the advice:

Simple: This would typically involve setting up a single investment or pension policy with either a lump sum or a regular monthly investment. £1,000

Intermediate: Typically, this would involve moving an existing investment/pension to an alternative plan. You may be approaching retirement and want advice on how to take any pension income. £1,500 (plus 1% of any plans that need transferring)

Complex: You may have multiple investments and pension plans that need to be reviewed, or tax implications, lifetime allowance issues, or inheritance issues. £2,500 (plus 1% of any plans that need transferring)

Stage 4: This is our review service. We will review your circumstances, plans, and goals every year to make sure we are still on track and make any necessary changes.
We will also update you with valuations, market updates, investment reviews and check your risk approach remains appropriate. The cost of this service is tiered based on the value of your plans:
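The Stage 3 schedule above reduces to simple arithmetic. As a minimal sketch (only the fee figures come from the page; the function name and structure are illustrative assumptions, not a tool the firm provides):

```python
def advice_fee(complexity: str, transfer_value: float = 0.0) -> float:
    """Stage 3 advice and implementation fee in GBP, per the published tiers."""
    base = {"simple": 1000.0, "intermediate": 1500.0, "complex": 2500.0}[complexity]
    # Intermediate and complex cases add 1% of any plans that need transferring.
    if complexity in ("intermediate", "complex"):
        base += 0.01 * transfer_value
    return base

# Example: moving an existing £100,000 pension to an alternative plan.
print(advice_fee("intermediate", 100_000))  # 2500.0
```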
https://www.dixonfinancial.co.uk/our-fees/
A rich theme running through Mary Shelley’s Frankenstein is responsibility. In a straightforward—even didactic—way, the novel chronicles the devastating consequences for an inventor and those he loves of his utter failure to anticipate the harm that can result from raw, unchecked scientific curiosity. The novel not only explores the responsibility that Victor Frankenstein has for the destruction caused by his creation but also examines the responsibility he owes to him. The creature is a new being, with emotions and desires and dreams that he quickly learns cannot be satisfied by humans, who are repulsed by his appearance and terrified of his brute strength. So the creature comes to Victor, pleading—and then demanding—that he create a female companion with whom he can experience peace and love. While Victor is grappling intellectually and practically with the implications of being responsible both for and to the creature, he is also experiencing responsibility as a devastating physical and emotional state. In this way, Mary Shelley raises a third aspect of responsibility—its impact on the self. The word responsibility is a noun defined as either a duty to take care of something or someone or the state of being the cause of an outcome. The word is familiar to everyone. Indeed, we order our daily lives based on our ideas about responsibility, whether we are referring to the duties we have to care for others—for instance, children—or our understandings about who or what has caused there to be food on our plates or a drought in California. The concept is especially important to students of philosophy and law. In philosophy, special attention is paid to the concept of “moral responsibility,” which refers not to a cause-and-effect relationship nor to the duties that come with occupying particular roles in society but to the determination that someone deserves praise or blame for an outcome or state of affairs. 
Humans’ ability to be held morally responsible is closely tied to ideas about the nature of persons—specifically, that persons have the capacity to be morally responsible agents. In Frankenstein, Mary raises questions about who is and is not capable of moral responsibility. At the beginning of the book, she introduces a protagonist who appears capable of being held morally responsible for his actions and an antagonist (the creature) who does not. But as the story develops, she raises questions about which of the two is the truly rational actor—Victor, who is addled by ambition, fever, and guilt, or the creature, who acquires emotion, language, and an intellect. In law, responsibility is generally attributed in a two-step process. Judges and juries are first asked to determine whether the person caused the outcome in question—Did the accused pull the trigger on the gun that fired the bullet that killed the victim? They must then decide whether the person did so with the requisite intent, called mens rea. A killer who intended to kill the victim could be guilty of first-degree murder, but the legal responsibility assigned to someone who shot the victim accidentally might be manslaughter or another less-serious offense. A number of factors can interfere with legal responsibility, such as age (children are generally excused), compulsion (if someone has a gun to your head, you might not be held responsible for the actions they instruct you to perform), and mental defect (e.g., insanity). As with the determination of moral responsibility, an attempt to attribute legal responsibility in Frankenstein quickly becomes complex.
Although it might initially seem that Victor should be the one held legally responsible not just for the existence of the creature but for the havoc he wreaks, we also must consider that the creature quickly develops the capacity for rational thought, raising the possibility that he may qualify as an actor capable of both causing harm and forming the intention to do so. Given the sophisticated way the creature develops, by the end of the book he alone might be held legally responsible for the deaths he causes. Victor experiences the two basic meanings of the word responsibility. He creates the creature (he causes it to exist), and therefore he has at least some responsibility for what the creature goes on to do. As the creature’s maker, Victor also has both a duty to others to keep them safe from his creation and, Mary seems to be saying, a duty to his creation to ensure that his existence is worthwhile. We will turn to these two ideas now—responsibility for and responsibility to. In a very straightforward way, Victor causes the monster to exist. He builds him, freely and with the hope, indeed the intention, that he will come to life. This creation is no accident. Although many factors can arguably interfere with attributions of responsibility—including compulsion and delusion—there is no suggestion that Victor does not intend to make the creature, despite the frenzied way he goes about it. Indeed, Victor anticipates his future responsibility for the existence of the creature with pleasure and excitement—even triumph: “A new species would bless me as its creator and source; many happy and excellent natures would owe their being to me. No father could claim the gratitude of his child so completely as I should deserve their’s” (p. 37). Victor’s error is failing to think harder about the potential repercussions of his work. Although he says that he hesitated for a long time about how to use the “astonishing” (p. 35) power to “bestow animation upon lifeless matter” (p. 
37), this hesitation is due to the many technical hurdles that he needs to overcome rather than to any concern for the questionable results of success. He considers the good that might come from his discovery—it might lead to development of a method for bringing the dead back to life—but he fails to consider the future of his initial experimental creation. Although he is aware that the single-minded pursuit of his scientific goals is throwing his life out of balance, he utterly fails to consider the possibility that the form he has stitched together and will soon animate may go on to cause harm to anyone, including Victor himself. We might compare Victor to some modern scientists who have stopped their work to consider its potential for harm, such as those who gathered at Asilomar in the mid-1970s to consider the implications of research on recombinant DNA or those who recently called for a moratorium on germline gene editing. Victor’s failure to thoroughly anticipate responsibility—to consider that there might be both upsides and downsides to his technical achievement—is his downfall. As soon as the creature opens his “dull yellow eye” (p. 41), Victor is filled with “breathless horror and disgust” (p. 42). He flees, initially so agitated he is unable to stand still, eventually falling into a nightmare-filled sleep in which he sees his fiancée, Elizabeth, first “in the bloom of health” (p. 43) and then as a rotting corpse. Victor is woken by the creature but “escape[s]” again (p. 43). He is unable to face his creation and is unprepared for the creature’s independent existence. As the story progresses, Victor’s initial emotional reactions to seeing the creature come to life—disgust and horror—are substantiated by the creature’s actions. Victor learns that the creature has killed his young brother William, whose death is then blamed on a family friend, Justine. But Victor knows the truth. 
He understands that he would be implicated in her execution if she is convicted as well as in the murder of his brother—“the result of my curiosity and lawless devices would cause the death of two of my fellow-beings” (p. 62). He suffers greatly under this guilt—“the tortures of the accused did not equal mine; she was sustained by innocence, but the fangs of remorse tore my bosom, and would not forego their hold” (p. 65). But he does nothing to intervene. The girl is unjustly convicted. “I, not in deed, but in effect, was the true murderer” (p. 75). Victor continues to hold himself responsible for both the existence of the horrifying creature and the creature’s deadly deeds. He spends his remaining days on earth chasing the creature across the Arctic, intending to kill him. But in this understanding of his responsibility, he is alone—no one else in the novel sees Victor as anything but a casualty of unspeakable misfortune. Although he is at one time accused of murdering his friend Henry Clerval—who is killed by the creature—that charge is eventually dropped (ironically, as Victor leaves the prison, an observer remarks, “He may be innocent of the murder, but he has certainly a bad conscience” [p. 153]). Even Robert Walton, the explorer who encounters Victor on the ice and to whom Victor narrates his entire story, judges him to be noble, gentle, and wise. It is left to Victor’s own conscience—and to the reader—to assess the extent to which he should be held responsible for the creature’s deeds. On this question, Victor is resolved. Although he allows that he did not intend to create a creature capable of such evil, he continues to hold himself responsible for the creature’s existence and for the deaths the creature causes, and he dies believing himself duty bound toward his fellow creatures to destroy his creation. 
On his deathbed, Victor also acknowledges that he is not just responsible for the creature but also responsible to him: “I … was bound towards him, to assure, as far as was in my power, his happiness and well-being” (p. 181). The creature himself makes this argument forcefully when he confronts Victor in the mountains overlooking the Chamonix Valley. The creature relates all that has transpired since Victor abandoned him. He has learned to find food and shelter. By closely observing a human family, he has learned about emotion and relationships as well as how to speak and read. By finding a collection of books, he learns the rudiments of human society and history. Yet on each attempt to engage with humans, the creature is disastrously rejected—sometimes even attacked. He learns that humans are repulsed by him. Concluding that humans will never accept him into their moral community, he comes to see humans as the enemy. He now lays his pain and loneliness at Victor’s feet: “Unfeeling, heartless creator! you had endowed me with perceptions and passions, and then cast me abroad an object for the scorn and horror of mankind. But on you only had I any claim for pity and redress, and from you I determined to seek that justice which I vainly attempted to gain from any other being that wore the human form” (p. 116). To assuage his loneliness, rage, and pain, the creature demands that Victor “create a female for me, with whom I can live in the interchange of those sympathies necessary for my being” (p. 120). The creature tries to reason with Victor: “Oh! my creator, make me happy; let me feel gratitude towards you for one benefit! Let me see that I excite the sympathy of some existing thing; do not deny me my request!” (p. 121). Although Victor’s sympathies are stirred by the creature’s story and his plea for companionship, Victor immediately refuses out of a sense of responsibility to protect the world from “wickedness” (p. 139). 
By having her inventor create a sentient being—in particular one whose intellect and emotions rival or surpass those of her supposed protagonist—Mary sharpens the point about the responsibility that we might owe to our creations. Parents understand this point (and in many ways Victor is placed in the role of a parent—albeit one who rejects and abandons his child). And so must scientists working to create new or modified life-forms carry a responsibility to their creations. We can take the point even further: a sense of responsibility can be experienced by anyone who pours time and energy into a project, even if that project does not result in a new life form. We can legitimately speak about feeling an obligation to our work—including to our results, our ideas, or our findings—that it deserves to be published or further developed or recognized as valuable not only because it can benefit others or result in glory for ourselves but because of the intrinsic value of new knowledge. One of the most striking aspects of Mary’s treatment of responsibility is her depiction of its emotional and physical toll. Before Victor gains any insight into the deadly consequences of his scientific work or the onerous duties he has thereby acquired, he experiences responsibility as an emotional and physical state. At the very moment he animates his creation, “the beauty of the dream vanished, and breathless horror and disgust filled my heart” (p. 42). He runs from the room, paces back and forth, “unable to compose my mind to sleep” (p. 42), falls into a sleep filled with nightmares portending the death of his fiancée, and wakes in a cold sweat with his limbs convulsing. He goes outside and by chance meets his friend Henry Clerval, who notices his agitated mood and then spends several months nursing Victor through a “nervous fever” during which “the form of the monster on whom I had bestowed existence was for ever before my eyes, and I raved incessantly concerning him” (p. 46). 
Victor recovers from this first episode, but his recovery is short-lived. As the creature kills his family and friends, Victor grapples with the realization that he is responsible for the existence of the creature and to a certain extent is therefore responsible for the creature’s deeds. His grief at the deaths of little William and then of Henry is compounded and tainted by his guilt at the role he has played in their deaths. He cannot sleep, and his physical health declines. His concerned father implores him to move beyond his grief and reenter the world, “for excessive sorrow prevents improvement or enjoyment, or even the discharge of daily usefulness, without which no man is fit for society.” But Victor is unable to respond: “I should have been the first to hide my grief, and console my friends, if remorse had not mingled its bitterness with my other sensations” (p. 72). As the story progresses, Victor continues to suffer emotionally and physically. His family and friends are alarmed and try to help him, but Victor cannot be reached. He withdraws from their company, floating aimlessly on a boat on the lake, unable to find peace. He hikes in the mountains during a rainstorm. He travels to England, ostensibly to see the world before settling down in marriage but in reality to build another creature. He describes the time as “two years of exile” (p. 130), and he bemoans his inability to enjoy the journey or the people he meets on his way. He describes a visit to Oxford, noting that he “enjoyed this scene; and yet my enjoyment was embittered both by the memory of the past, and the anticipation of the future. … I am a blasted tree; the bolt has entered my soul; and I felt then that I should survive to exhibit, what I shall soon cease to be—a miserable spectacle of wrecked humanity, pitiable to others, and abhorrent to myself” (p. 135). As the book concludes, Victor lies dying in Walton’s boat.
The explorer and the reader are left in no doubt about what has killed him. When the creature boards the boat and sees the newly dead Victor, he claims responsibility for his death—“That is also my victim!” the creature exclaims. “I, who irretrievably destroyed thee by destroying all thou lovedst” (p. 183). Yet it is not only the loss of his family and friends that destroys Victor but also the guilt and remorse that came with being the one who so naively created the creature and gave him life. In Frankenstein, Mary Shelley explores at least three aspects of responsibility: Victor’s responsibility for the deadly actions committed by his creation and the threat the creature’s existence poses to his family, friends, and, Victor fears, the entire world; Victor’s responsibility to his creation for the creature’s welfare and well-being; and the consequences of this weighty responsibility for Victor both physically and emotionally. The novel is a gothic horror—the plot is fantastical, the scenery dramatic, and the hero doomed. But it is also a cautionary tale, with a serious message about scientists’ and engineers’ social responsibility. Mary conveys a concern that unchecked scientific enthusiasm can cause unanticipated harm. For Victor, scientific curiosity threatens the integrity of his family and disrupts his ability to engage with nature and enter into relationships. By supplying a protagonist who suffers so greatly as a result of failing to anticipate the consequences of his work, Mary urges upon her readers the virtues of humility and restraint. In her development of a creature who suffers so greatly because he is despised and rejected by an intolerant human society, she asks us to consider our obligations to our creations before we bring them into being. The reader is left to wonder whether the story could have unfolded differently if Victor were to have behaved more responsibly. 
Might he have anticipated the brute strength of his creation and decided not to create it, or might he have altered his plan so that the creature would be less powerful and less terrifying? Rather than abandoning the creature, might he have stepped into his parental role and worked to ensure the creature’s happy existence? Mary does not tell us what Victor should have done differently—that is the reflective work that we readers must do as we consider our own responsibility to and for our modern-day creations. The novel portrays an extreme case of scientific responsibility, but all of us are implicated in situations where we are responsible to moral standards, to particular ideas, and to other people. What kinds of responsibility do you have as a scientist, a citizen, a creator, a human being? How do you define these responsibilities? And what does it mean to “feel” them? Johnston argues that Victor experiences two forms of responsibility: responsibility for and responsibility to. Are there other kinds of responsibility, in particular forms of shared or collective responsibility?
https://www.frankenbook.org/pub/traumatic-responsibility/release/3
Growing corn (Zea mays) can be a money-saving, rewarding experience. This warm-season vegetable can thrive in gardens within U.S. Department of Agriculture plant hardiness zones 8 through 10, where it's ideally started during the months of April through July, when the soil has reached a temperature of at least 50 degrees Fahrenheit. Neglecting to prep your soil before starting your seeds can affect growth and result in a disappointing harvest. To avoid this, provide your corn with well-drained, rich soil, and about 65 to 95 days after planting, you'll be rewarded with a plentiful harvest.

1. Perform a soil test to determine the pH of the soil in your garden. Test the soil about one year before planting, because soil amendments take time to blend with the existing soil. A pH range of 6.0 to 6.5 is ideal for growing corn. Amend the soil based on the test results: to raise the pH, work limestone into the top 7 inches of soil, and to lower it, till sulfur into the soil. To determine the exact amendment amount, use the information provided with the soil pH test kit or consult the Cooperative Extension Service that serves your area.

2. Cultivate the soil in a sunny part of the garden. Use a spade for a smaller garden and a rototiller for a large plot. Loosen the soil to a depth of 6 inches, pulverize clumps, and remove weeds and rocks during the process.

3. Incorporate a 2- to 4-inch layer of compost into the soil to improve soil moisture retention and nutrients.

4. Work fertilizer into the soil according to the results of your soil test. Alternatively, incorporate a 12-12-12 fertilizer at the rate of 4 pounds per 100 square feet. Late in the growing season, when the plants are about 2 feet tall, side-dress with a high-nitrogen fertilizer.

5. Rake the soil to level it before sowing your seeds.
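The fertilizer rate in step 4 scales linearly with plot area. A quick sketch of that arithmetic (the 4 lb per 100 sq ft rate is from the text; the function itself is an illustrative assumption):

```python
def fertilizer_needed(plot_sq_ft: float, rate_lb_per_100: float = 4.0) -> float:
    """Pounds of 12-12-12 fertilizer for a plot of the given area,
    at the stated rate of 4 lb per 100 square feet."""
    return plot_sq_ft / 100.0 * rate_lb_per_100

# A 20 ft x 30 ft corn plot needs:
print(fertilizer_needed(20 * 30))  # 24.0 pounds
```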
https://homeguides.sfgate.com/prep-soil-planting-corn-69986.html
On August 20th the town of North Hampton, NH is celebrating the anniversary of the poet's birth with two special events. The Pontine Theatre group will stage a production of "Home is Heaven: POEMS BY OGDEN NASH" at 7 p.m. at North Hampton School, 201 Atlantic Ave. Pontine's co-artistic directors Marguerite Mathews and Greg Gathers will bring Nash's poems on family and summer to life with toy theater figures and puppets. Prior to the production there will be a tour of Ogden Nash's former seaside home from 4:30 to 6:30 p.m., compliments of Bob and Sherry Lauter, the current owners. For more information contact the North Hampton Public Library at 603-964-6326. Nash was born on August 19, 1902.
http://blog.ogdennash.org/2009/08/ogden-nashs-108th-birthday-party.html
The invention relates to a method and a device for the path-dependent control of the force generated by a first piston, which piston travels in a first cylinder which is divided by the first piston into a first and a second chamber, a pressure fluid being introduced into the first chamber. A second cylinder is also provided, in which a second piston travels; this cylinder is likewise divided by the second piston into a first and a second chamber. A similarly designed system is known from DE-A-44 41 098 and functions as an actuation device boosted by external force. The fluid pressure is generated by a master cylinder and transmitted to the first chamber of a slave cylinder and the first chamber of the cylinder of a transmission unit. The system works hydraulically, and the slave cylinder and the master cylinder of the transmission unit are stepped, the second chambers each having the greater diameter. As long as the external force exists, the transmission unit is held at rest position because the pressure generated by the master cylinder is not sufficient for moving the piston of the transmission unit against the pressure of the external force supply acting in the second chamber. When the external force supply is lacking, that counterforce is absent and the piston is moved, whereby the second chamber is reduced. The pressure built up at the feed point of the external force is transmitted to the second chamber of the slave cylinder and thereby boosts it. The objective of the invention is to control the force generated by a piston in the simplest possible manner with dependence on the path of travel of the piston. This objective is achieved according to the invention by the two pistons being coupled so that the second piston is pulled along by the first piston, and by the second chamber of the first cylinder being connected to the first chamber of the second cylinder so that the same pressure prevails in both chambers.
When the first piston performs an extension movement, the fluid is conducted out of the second chamber of the first cylinder into the first chamber of the second cylinder. As the two pistons are coupled, they move synchronously so that in the event of an extension movement, the reduction of the second chamber of the first cylinder is accompanied by an enlargement of the first chamber of the second cylinder. If the diameter of the second cylinder is made greater than that of the first cylinder, the overall volume of the second chamber of the first cylinder and the first chamber of the second cylinder increases when the first piston performs an extension movement, so that the counterpressure falls and the force generated by the first cylinder increases continuously in a path-dependent manner when the first piston extends. If, however, the second cylinder has a smaller diameter, the force generated by the first cylinder falls continuously when there is an extension movement. In each case, the force generated by the first piston is a largely steady and linear function of its extension path. The pressure in the second chamber of the second cylinder is preferably regulated so that it is always equal to the counterpressure, that is, the pressure in the first chamber of the second cylinder and in the second chamber of the first cylinder. The two cylinder-piston units can be standard cylinders including a piston and the two chambers of each cylinder can have the same diameter. The coupling of the two pistons is appropriately mechanical and positive. The initial value of the counterpressure of the first cylinder is preferably adjustable, to which end its second chamber can be connected to the source for pressure fluid, via a regulator at which the initial counterpressure value can be set. The pressure fluid is preferably a compressed gas, especially compressed air. 
The source of the pressure fluid, that is, the compressed gas or compressed air, is the sole power source of the system. The invention can be used, for example, in a device for dispensing viscous compositions contained in aluminium cartridges having a corrugated surface. The viscous composition may, for example, be an adhesive. The cartridge is compressed to dispense the composition. The force required to do this increases according to the degree to which the cartridge is already compressed. Therefore, to dispense the viscous composition at a constant rate and volume, it is necessary to control the force exerted on the cartridge, specifically to increase it, in dependence on the remaining size of the cartridge. To do this, the cartridge can be inserted into a dispensing device in which it can be acted upon by the first piston of the device according to the invention.
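The path-dependent behaviour described above can be illustrated numerically. The following is a minimal sketch, not the patent's own calculation: it assumes isothermal compression of the trapped gas (pV = constant) and simple single-area pistons, and all numbers are made up. It shows that when the second cylinder's area exceeds the first's, the connected chambers' combined volume grows on extension, so the counterpressure falls and the net force on the first piston rises along its path.

```python
def counterpressure(x, p0, v0, a1, a2):
    """Gas pressure in the connected chambers after the first piston extends by x.

    p0: initial pressure, v0: initial combined volume of chamber 1.2 and
    chamber 2.1, a1: piston area of the first cylinder (chamber being reduced),
    a2: piston area of the second cylinder (chamber being enlarged).
    """
    v = v0 + (a2 - a1) * x  # net volume change of the two connected chambers
    return p0 * v0 / v      # isothermal gas: pressure falls as volume grows

# Second cylinder larger than the first (a2 > a1): counterpressure falls
# steadily as the piston extends, so the generated force increases.
for x in (0.0, 0.05, 0.10):
    print(counterpressure(x, p0=2.0e5, v0=1.0e-3, a1=2.0e-3, a2=3.0e-3))
```

Swapping the areas (a2 < a1) makes the same function return a rising counterpressure, matching the text's falling-force case.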
The infamous 'man-eating lions of Tsavo' – which killed at least 35 people in Kenya about a century ago – chose to prey on humans because a severe tooth disease made hunting down other animals difficult. The study debunks the theory which suggests that prey shortages may have driven the lions to man-eating. At the time, the Tsavo region was in the midst of a two-year drought and a rinderpest epidemic that had ravaged the local wildlife. Researchers, including those from Vanderbilt University in the US, carried out an analysis of the microscopic wear on the teeth of the 'man-eating lions of Tsavo'. They employed state-of-the-art dental microwear analysis on the teeth of three man-eating lions from the Field Museum's collection: the two Tsavo lions and a lion from Mfuwe, Zambia, which consumed at least six people in 1991. If the lions were desperate for food and scavenging carcasses, the man-eating lions should have dental microwear similar to hyenas, which routinely chew and digest the bones of their prey, researchers said. “Despite contemporary reports of the sound of the lion's crunching on the bones of their victims at the edge of the camp, the Tsavo lion's teeth do not show wear patterns consistent with eating bones,” said Larisa DeSantis, assistant professor at Vanderbilt University. “In fact, the wear patterns on their teeth are strikingly similar to those of zoo lions that are typically provisioned with soft foods like beef and horse meat,” said DeSantis. The study provides new support for the proposition that dental disease and injury may play a determining role in turning individual lions into habitual man-eaters. The Tsavo lion which did the most man-eating, as established through chemical analysis of the lions' bones and fur in a previous study, had severe dental disease, researchers said. It had a root-tip abscess in one of its canines – a painful infection at the root of the tooth that would have made normal hunting impossible.
“Lions normally use their jaws to grab prey like zebras and buffaloes and suffocate them,” said Bruce Patterson from The Field Museum of Natural History in the US. “This lion would have been challenged to subdue and kill large struggling prey, humans are so much easier to catch,” Patterson said. The diseased lion's partner, on the other hand, had less pronounced injuries to its teeth and jaw – injuries that are fairly common in lions which are not man eaters. According to the same chemical analysis, it consumed a lot more zebras and buffaloes, and far fewer people, than its hunting companion, researchers said. “Our results suggest that preying on people was not the lions' last resort, rather, it was simply the easiest solution to a problem that they confronted,” said DeSantis.
https://www.thestatesman.com/technology/science/tooth-decay-made-tsavo-lions-switch-to-soft-human-meat-1492601580.html
As you read through the Privacy and Security Rules for HIPAA, you’ll see a pattern that shouldn’t be taken for granted. Nearly all the implementation specifications require some form of policy and procedure documentation. This involves more than the reasoning and justification for how you choose to implement the specifications (though that must be documented as well). These are the policies and procedures that HIPAA expects your business to follow every day.

Organizational Standards

Besides the administrative, physical, and technical safeguards which make up the majority of the Security Rule, there is a lesser-known section of safeguards called organizational standards that deals largely with the paperwork required by HIPAA concerning protected health information (PHI) in any form. This section is often overlooked because many of its requirements are addressed in greater detail throughout the Privacy and Security Rules. The four standards in this section are:

- Business Associate Contracts
- Requirements for Group Health Plans
- Policies and Procedures
- Documentation

This article focuses on the last two standards: Policies and Procedures and Documentation, both of which lay the groundwork for HIPAA compliance. The other two standards shouldn’t be ignored, but they concern only those who a) are or use a business associate, or b) sponsor a group health plan that provides data beyond enrollment and summary information.

Note: If you are, or work with, a business associate that handles ePHI and your contract has not been updated since the HITECH Act in 2009 or the Final Omnibus HIPAA Rule in 2013, you will want to review and update all contracts to ensure they meet the current standards regarding business associates.

Standard 164.316(a): Policies and Procedures

Why have an entire standard dedicated to something addressed in nearly every single implementation standard?
This standard explains what HIPAA expects from the policies and procedures that a business creates. Specifically, it references the Security Standards’ General Rule of Flexibility of Approach, which is discussed in Part 2 of this series. It also allows for policies and procedures to be changed at any time to adjust to new demands or technologies, as long as all changes are documented and implemented accordingly.

Standard 164.316(b)(1): Documentation

This standard identifies how documentation required by HIPAA is to be maintained. According to this standard and its subsequent implementation standards, all documentation required throughout the Security Rule’s standards, including but not limited to

- policies and procedures,
- job responsibilities and duties,
- risk assessments, and
- action plans

must be recorded (physically or electronically) and retained for a minimum of six years from the date of creation or the date it was last in use, whichever is later. All documentation must be available to anyone who uses those procedures, and documentation should be consistently reviewed and updated as necessary.

Note: The six-year retention rule only satisfies HIPAA standards. State law may require some documentation to be retained for longer. Always verify which state laws apply to your business, as HIPAA does not supersede many state requirements.

Bringing Your Policies into Compliance

It’s possible your business already has clear policies and procedures in place, but that doesn’t immediately make you HIPAA compliant. You still need to go through each one to ensure it satisfies the implementation specifications it pertains to. If not, policies may need to be updated or new ones added. HIPAA gives businesses a great deal of leeway in how policies and procedures are written, so both updating existing documentation and creating all new materials is acceptable.

What should the policies and procedures say?

HIPAA doesn’t dictate the exact wording of any policy or procedure.
It’s up to the business, taking into consideration the Flexibility of Approach guidelines, to determine what policy needs to be implemented. Generally, a policy explains a business’s approach to the subject it relates to. If the policy concerns removing access from those who no longer work for the company, it could read something like:

    At the end of an employee’s last day of employment with [company name], security and/or IT staff will remove that employee’s access to company systems and restricted locations and document the change of access. The employee’s supervisor will verify that all access has been revoked within twenty-four hours.

This offers clear guidance about what the company intends to do to remove access from someone who is no longer allowed to work with PHI. It also provides an implementation timeline, who should implement the policy, and how the company will ensure it gets implemented properly. The procedure that accompanies the policy would then offer easy-to-follow directions on how those responsible are to implement the policy. A sample procedure may look like this:

Regarding Policy for Removing Access of Former Employees

Duty of IT Staff or Managed Services Provider

- Go to [directory] and locate the list of all programs and devices the employee had access to according to job title. Check this list against their user account to ensure no programs are missed.
- Starting at the top of the list, go through each program and device and remove employee access. For procedures regarding specific programs, see [directory of procedures].
- Go to Active Directory and find the employee’s information.
- Back up emails and save them to [directory] to be stored for a period of one year before deletion.
- Back up any information relating to patient care in the appropriate directories. See [directory list] for proper placement.
- Disable the user’s Active Directory account and change their password.
- Document the time, date, and your name in the Employee Termination log to indicate all access is removed.
- Inform the former employee’s supervisor when access removal is completed for verification.

Procedures should be as detailed as possible so that there is no ambiguity or confusion in what needs to be done. This allows newer employees to accomplish tasks they may not have performed before. There may also be multiple procedures related to the same policy, depending on the duties of each person. Margret Amatayakul wrote an excellent guide to creating policies and procedures for the Journal of AHIMA (American Health Information Management Association).

Note: Both the Security Rule and the Privacy Rule require policies and procedures to be created. A company can combine relevant Security and Privacy standards into a single policy or create entirely separate policies for the Security and Privacy Rules. Each business should determine what is best for its employees.

The Importance of Employee Training in HIPAA

Once you have your policies and procedures written and accessible, the next vital step is to train employees on them. HIPAA requires all employees to be trained in the policies and procedures related to their job. This includes everyone from the maintenance staff to the CEO. Each time a policy or procedure is updated, retired, or replaced, the affected staff must be informed and, if needed, retrained. Of course, maintenance personnel and CEOs won’t need the same kind of HIPAA training, just as IT support doesn’t need the same training as a nurse. HIPAA doesn’t dictate the way training happens, only that it happens. This means big companies can invest in professional training materials, while smaller companies may hold informational meetings, allowing each to train in the way that is most effective and makes the most sense for them.
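To make the documentation requirements concrete, here is a minimal sketch (in Python, chosen purely for illustration) of the termination-log step described above, combined with the six-year retention rule from Standard 164.316(b)(1): retention runs from the later of a record's creation date or its last-use date. All function and field names here are hypothetical, not part of any HIPAA tooling or standard.

```python
from datetime import date, datetime

RETENTION_YEARS = 6  # HIPAA minimum; state law may require longer


def retention_expiry(created: date, last_in_use: date) -> date:
    """Retention runs from the LATER of creation or last use (164.316(b))."""
    start = max(created, last_in_use)
    try:
        return start.replace(year=start.year + RETENTION_YEARS)
    except ValueError:
        # A Feb 29 start date landing in a non-leap year: fall back to Feb 28.
        return start.replace(year=start.year + RETENTION_YEARS, day=28)


def log_access_removal(log, employee, removed_by, when=None):
    """Append an Employee Termination log entry documenting access removal."""
    when = when or datetime.now()
    entry = {
        "employee": employee,
        "removed_by": removed_by,
        "timestamp": when.isoformat(timespec="seconds"),
        "retain_until": retention_expiry(when.date(), when.date()).isoformat(),
    }
    log.append(entry)
    return entry


termination_log = []
entry = log_access_removal(termination_log, "J. Doe", "IT staff",
                           when=datetime(2023, 5, 1, 17, 0))
print(entry["retain_until"])  # 2029-05-01
```

A real system would of course persist the log durably and restrict who can write to it; the point of the sketch is only that the retention date is computed from the later of the two dates, not from creation alone.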
Suggestions for Employee Training

- Go through your employees’ job descriptions and separate employees by the level of access they have to PHI.
- Create training programs for each level of access and/or the duties required in the job description so each employee gets the training suited to their job.
- Don’t overload employees with policies and procedures that don’t relate to their job.
- Ensure all training includes how to access the company’s policies and procedures in case employees need to revisit or reference them.
- Ensure all employees know who to contact if they have any questions.

Sanctions

Along with training employees, HIPAA also requires you to have clear consequences for not following the written policies and procedures. The types of offenses should be clearly defined, along with the disciplinary action to be enacted for each infraction. One way a company might set levels of disciplinary action is to distinguish whether a break in policy or HIPAA standard was accidental, made through negligence, or malicious in intent. This allows different consequences for the same infraction without being inconsistent. For example, an employee might a) leave a workstation unlocked because an emergency demanded an immediate response, b) consistently forget to lock their workstation even after being warned about it, or c) intentionally leave a workstation unlocked to allow someone without access to view ePHI. While the infraction is technically the same, these cases don’t all deserve the same consequences. As with everything else, all infractions and disciplinary actions need to be documented and retained for six years. In 2018, the Health and Human Services Office for Civil Rights reported 279 breaches of PHI, each affecting at least 500 individuals, and often many more.
Policies and procedures may feel tedious to write, but they provide employees with the information necessary to do their job in a HIPAA compliant manner and could prevent a breach of PHI. For help with developing clear and secure policies for your company’s software and devices, contact Anderson Technologies at 314.394.3001 or by email at [email protected].
https://andersontech.com/hipaa-documentation/
Seoul Guidelines on the Cooperation of NHRIs for the Promotion and Protection of Human Rights of Migrants in Asia, Seoul, Korea (2008).

International Conference on Human Rights of Migrants and Multicultural Society: Dignity and Justice for All Migrants
Seoul, Korea, 10–12 November 2008

Seoul Guidelines on the Cooperation of NHRIs for the Promotion and Protection of Human Rights of Migrants in Asia

Preamble

The International Conference on Human Rights of Migrants and Multicultural Society: Dignity and Justice for All Migrants, held in Seoul, Korea on 10–12 November 2008,

Reaffirming the Universal Declaration of Human Rights, which proclaims that all human beings are born free and equal in dignity and rights and that everyone is entitled to all the rights and freedoms set out therein, without distinction of any kind, in particular as to race, colour or national origin,

Recalling the universal instruments agreed upon by States to safeguard human rights and fundamental freedoms, including the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), the International Covenant on Economic, Social, and Cultural Rights (ICESCR), the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), the Convention on the Rights of the Child (CRC), the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment (CAT), the International Convention on the Protection of the Rights of All Migrant Workers and Members of their Families (ICRMW), the International Convention on the Rights of Persons with Disabilities (ICRPD), relevant International Labour Organization conventions, and regional instruments,

Welcoming the entry into force of the International Convention on the Protection of the Rights of All Migrant Workers and Members of their Families (1 July 2003), reaffirming its importance as a
baseline for migrant workers’ rights, and recognizing the important work of the Committee on Migrant Workers,

Welcoming the UNESCO Convention on the Protection and Promotion of the Diversity of Cultural Expressions,

Recognizing that these instruments establish a framework for the protection of the rights and fundamental freedoms of all human beings,

Recognizing the important role played by the human rights organs of the United Nations, including the guidance and jurisprudence of the treaty bodies, the Human Rights Council, and special procedures, notably the Special Rapporteur for the promotion and protection of the human rights of migrants and his visits to Asian countries such as Indonesia, South Korea, and the Philippines,

Reaffirming the Durban Declaration and Programme of Action (DDPA), adopted at the World Conference against Racism, Racial Discrimination, Xenophobia and Related Intolerance (WCAR) in Durban, South Africa, in September 2001, as a landmark document in global efforts to eradicate racism, racial discrimination, xenophobia and related intolerance,

Welcoming the convening of the Durban Review Conference (DRC), which is to take place in Geneva on 20–24 April 2009, and the establishment of the International Coordinating Committee (ICC) of National Institutions for the Promotion and Protection of Human Rights Working Group on the DRC at the 9th International Conference of National Institutions (ICNI) in Nairobi, Kenya, in October 2008,

Recognizing the importance of a human rights-based approach to migration, as well as the full participation of NHRIs, in the Global Forum on Migration and Development process,

Welcoming the timely adoption on 5 November 2008 of General Recommendation No.
26 of the Committee on the Elimination of Discrimination Against Women (CEDAW) on Women Migrant Workers, who may be at risk of abuse and discrimination,

Noting that migration can be a positive social force, as migrants make valuable contributions to economic growth and development in both home and host countries, including poverty reduction, and as migrants contribute to the vitality of a diverse society and to more enlightened relations among peoples,

Noting also that the situation of migrant workers and their families has become a critical contemporary human rights issue worldwide, particularly in relation to exploitation by traffickers, people smugglers, recruitment agents, and corrupt officials; deaths and injury in transit; discrimination and xenophobia; various forms of exploitation, including sexual abuse; subjection to forced labour, slavery, and practices akin to slavery; intolerable working conditions; and inhumane treatment in cases of arrest, detention and deportation,

Recognizing the unique role played by NHRIs in applying international human rights standards at the national level, thereby ensuring their independence and effectiveness in accordance with the Paris Principles, which enables them to contribute to the promotion and protection of migrant rights through dialogue between public authorities and civil society groups at the national level,

Urging therefore the continued enhancement of the role and participation of NHRIs in international human rights mechanisms, such as the Human Rights Council (Universal Periodic Review and Special Procedures) and the Human Rights Treaty Bodies, as well as in regional human rights initiatives,

Reaffirming that NHRIs in the Asia-Pacific region should continuously play an active role in protecting and promoting human rights in the region, with special efforts to advocate for a human rights approach to migration and migration management, and to promote the establishment of NHRIs in countries where they are not yet
established,

Welcoming the efforts and progress made by the Asia Pacific Forum of National Human Rights Institutions (APF) concerning migration issues, in particular the trafficking of women and children,

Welcoming the efforts made by the ASEAN NHRI Forum to contribute to the development and establishment of an intergovernmental human rights body in accordance with the ASEAN Charter, and the contributions of the Civil Society Task-Force on ASEAN Migrant Workers to the ASEAN Declaration on the Protection and Promotion of the Rights of Migrant Workers (Cebu 2007),

Recalling the key concerns and issues identified by the Jakarta Process Review related to existing legal, institutional, and policy frameworks in the countries studied, which are considered detrimental to the human rights of migrants in an irregular situation and migrant domestic workers,

Expressing solidarity with the Jakarta Process Review’s Appeal to the Asia Pacific Forum in its Call for Regional Standard-setting on the Human Rights of Migrants in an Irregular Situation and Migrant Domestic Workers (Kuala Lumpur 2008),

Noting the importance of inter- and intra-regional relationships among NHRIs, given the nature of migration and the capacity to share information and support when dealing with migrants and specific migration issues,

Reaffirming the need for increased cooperation and sharing of information and best practices, including the development of specific joint programs and mechanisms, among NHRIs at regional and international levels,

Noting with great interest similar calls for cooperation among NHRIs in other regions, including the creation of mechanisms for communication and coordination between human rights institutions, a call for NHRIs to engage in transnational cooperation and to make use of their networks to communicate on migration issues, and to make recommendations to strengthen cooperation between NHRIs to ensure the promotion and protection of all human rights of migrants,

Welcoming the outcome
of the Seoul Conference on Human Rights of Migrants and Multicultural Society (Seoul, 10–12 Nov. 2008), which recognizes the urgent need to develop strategies and action-oriented guidelines to strengthen and promote cooperation among NHRIs in Asia in addressing challenges identified during the Conference,

Recalling the Seoul Commitment to “promote, where relevant, regional cooperation among NHRIs” in order to implement the Seoul Declaration of the 7th International Conference of National Institutions for the Promotion and Protection of Human Rights, held in Seoul on 14–17 September 2004,

Welcoming the establishment of the Seoul Process as a framework for cooperation among NHRIs and other stakeholders,

adopts the following guidelines on the cooperation of NHRIs for the promotion and protection of the human rights of migrants in Asia.

Section I
Principal Areas of Action

NHRIs in Asia are encouraged to take action in the following areas for the purpose of promoting and protecting the human rights of migrants:

International Human Rights Mechanisms and Processes

1. Standard-setting on women migrant workers at the international and regional level,
2. Promoting universal ratification of the International Convention on the Protection of the Rights of All Migrant Workers and Members of their Families, particularly among destination countries in Asia,
3. Promoting universal ratification and implementation of all other international UN human rights treaties and ILO conventions relevant to migrant issues,
4. Promoting ratification of the 2nd Palermo Protocol to the UN Convention against Transnational Organized Crime,
5. Ensuring regular reporting on and implementation of the concluding observations and recommendations associated with the human rights treaties above,
7. Strengthening of cooperation with the Special Rapporteur on the human rights of migrants and other Special Procedures established by the Human Rights Council (HRC),
8. Participating in the Universal Periodic Review (UPR) mechanism and ensuring implementation of its recommendations,
9. Participating in the Global Forum on Migration and Development,
10. Enhancing cooperation with the Office of the High Commissioner for Human Rights (OHCHR), particularly with the National Institutions Unit (NIU) and the Asia and the Pacific Unit,
11. Enhancing cooperation with international organizations, in particular the International Labour Organization (ILO), the UN High Commissioner for Refugees (UNHCR) and the International Organization for Migration (IOM),
12. Institution-building related to the ASEAN Declaration on the Protection and Promotion of the Rights of Migrant Workers (January 2007),

National Implementation of International Human Rights Standards

13. Encouraging and supporting the establishment of independent NHRIs in conformity with the Principles Relating to the Status and Functioning of National Institutions for Protection and Promotion of Human Rights (Paris Principles),
14. Strengthening of NHRI mandates with regard to the human rights of migrants,
15. Developing and implementing National Action Plans (NAP) that include the human rights of migrants, and ensuring the implementation of such NAPs,
16. Harmonizing national legislation and policies in conformity with international human rights standards,
17. Improving policy coordination among government agencies in addressing issues of migration based on human rights principles,
18. Enhancing cooperation and collaboration with relevant government agencies,
19. Enhancing cooperation with stakeholders such as NGOs, academia, media and other civil society actors,
20. Ensuring participation of migrants in the policy decision-making process and policy implementation,
23. Promoting and ensuring equal access to education, medical, social security, judicial and legal services for migrants and their family members,

Education, Training, and Awareness-building

24. Developing human rights education and training modules and materials in all appropriate languages,
25. Campaigning for raising public awareness of the human rights of migrants,
26. Educating migrants on their rights at the time of pre-departure in the country of origin and at post-arrival in the country of destination,
27. Educating and training government officers on human rights related to migrants, particularly immigration officers, the police and correctional officers,
28. Promoting a culture of human rights, meaning the promotion of tolerance, respect for cultural diversity, and inter-cultural understanding in order to combat racism, racial discrimination, xenophobia and related intolerance,
29. Carrying out collaborative studies, surveys and research on issues related to migrants,

Migrant Workers

30. Improving national policies on employment of foreign laborers and personnel, including company recruitment activities and the activities of recruitment agencies, in conformity with international human rights standards,
31. Establishing a set of minimum standards on working conditions and workplace policies, including safety and health, overtime and irregular hours, fair and adequate pay, clear information regarding work duties, the reduction of language barriers, respect for cultural and religious beliefs in the assignment of work duties and schedules, job termination and forceful dismissal,
32. Taking legislative initiatives aimed at greatly increasing the penalty for a violation of national labor and employment laws, or recruitment policies,
33. Establishing a set of minimum standards for the living conditions associated with employer-supplied housing for migrant workers, and their families, where appropriate, including requirements for the provision of basic amenities, such as shelter, running water, heat, and lighting,
34. Taking legislative and administrative initiatives aimed at securing the application of domestic labor and employment laws to migrant workers in a manner that is equal to that of the national labor force, including the provision of medical services, participation in the national pension system, worker’s accident and disability compensation, the right to join and form unions, and the right to legal remedies for unpaid wages,
35. Enhancing the right to change employer, especially in cases of exploitative or otherwise unjust working conditions,
36. Conducting joint research, development, and publication of model contracts for migrant workers which are industry specific and take into account relevant national contract laws,
37. Monitoring the human rights situation of irregular migrant workers during periods of intensified government enforcement of national immigration laws and increased detention and deportation of irregular workers, including amnesty and repatriation actions,
38. Enhancing the right of asylum seekers to support themselves through temporary employment or other adequate means of livelihood while awaiting determination of their status,
39. Ensuring decriminalization of the victims of smuggling and trafficking,

Migrant Women

40. Securing the safety, security and dignity of women migrant workers in their intended workplace before departure from the country of origin, while in transit, and after arrival in the country of destination,
41. Setting minimum standards applicable to the employment and treatment of women domestic workers, including a minimum entitlement to one day of rest per week,
42. Improving national policies regarding international marriage brokerage activities, including specific policies aimed at preventing, identifying, and, where appropriate, prosecuting activities that mislead women into marriage or violate the human dignity of women by inhuman and degrading treatment,

Children of Migrants and Child Migrants

43. Securing the right to education regardless of the immigration status of the children themselves or their parents,
44. Preventing discrimination and prejudice against the children of migrants and international marriages, and child migrants, in schools and in the classroom,
45. Promoting cultural and social integration regarding the children of nationals abroad, and social and educational reintegration of the children of returning migrants,
46. Encouraging birth registration and granting of the appropriate nationality under the laws of both the country of origin and the country of destination, in particular the registration of newborn children of irregular migrants without fear of arrest or detention,
47. Enlarging social service programs that grant financial assistance for child care and medical services regardless of immigration status,
48. Protecting the human rights of children of migrants in detention facilities.

Section II
Working Structure

Seoul Process

The Seoul Process, which is a framework for cooperation among NHRIs and other stakeholders with the purpose of implementing the Plan of Action set forth in Section III of these Guidelines, is hereby established in accordance with the following:

49. The National Human Rights Commission of Korea (NHRCK) is appointed as the convener of the Seoul Process,
50. The convener is requested to organize, in cooperation with the APF, the next meeting of the Seoul Process to be held in 2009 (Seoul Process 2009),
51. The convener shall cooperate closely with the Jakarta Process, which focuses on the human rights of migrants in an irregular situation and migrant domestic workers,
52. The APF is requested to provide necessary assistance and support, including financial support, for the Seoul Process in relation to the implementation of these Guidelines,
53. The UN Special Rapporteur on the human rights of migrants shall be invited to join the Seoul Process 2009,
54. A focal point within each NHRI shall be created to serve as the primary channel for all cooperative efforts related to the implementation of these Guidelines,
55. Interested NHRIs are encouraged to enter into MOUs on issues of mutual concern regarding the promotion and protection of the human rights of migrants,
56. Interested NHRIs are encouraged to develop staff exchange programs to address issues of mutual concern in relation to the implementation of these Guidelines,
57. A proposal shall be made to the APF Councilors for the creation of a Working Group on Migration, as decided at the 8th International Conference of National Institutions for the Promotion and Protection of Human Rights (Santa Cruz, Bolivia, 24–26 Oct. 2006),
58. A proposal shall be made to the APF Councilors to consider taking up the issue of migration as the Advisory Council of Jurists (ACJ) theme of study and recommendation for the year 2009/10.

Section III
Plan of Action

NHRIs in Asia are encouraged to undertake the following actions in coordination with the Seoul Process for the purpose of promoting and protecting the human rights of migrants:

59. Development of mid-term action plans for the implementation of these Guidelines at the regional level,
60. Development of concrete action plans in line with these Guidelines as an integral part of each NHRI’s annual work plan from 2009 onwards,
61. Monitoring of the human rights situation of migrants in each country,
63. Taking of joint action, where appropriate, to address issues of mutual concern that require an internationally coordinated response,
64. Production of an annual report on the implementation of these Guidelines,
65. Establishment of joint research projects among NHRIs in Asia on the causes, processes and consequences of international migration,
66. Initiation of an international campaign for the universal ratification of the International Convention on the Protection of the Rights of All Migrant Workers and Members of their Families, and other related human rights treaties,
67. Working towards the inclusion of migration initiatives in the National Action Plans (NAP) of the NHRIs’ respective governments,
68. Development of training modules and materials related to the human rights of migrants,
69. Initiation and implementation of public human rights campaigns on migrant issues aimed at awareness building,
70. Initiation and implementation of human rights education and training programs for migrants at the time of pre-departure from the country of origin and at post-arrival in the country of destination,
71. Initiation and implementation of human rights training programs for government officers, in particular law enforcement agencies, including immigration, police and detention facilities,
72. Monitoring and participation in the regional standard-setting and institution-building processes related to the human rights of migrants.
The Electronics and Communication Engineering program was started in 2009 to produce graduates with strong fundamentals in the electronics and communication domain. The course follows an outcome-based structure designed to help students achieve these goals. The department has kept pace with cutting-edge technology through well-equipped laboratories and tie-ups with industries and research groups. It has good infrastructure and full-fledged laboratories. Its dedicated and qualified faculty members ensure a high standard of teaching, learning and evaluation processes. The faculty members have experience in both teaching and industry. The department has made a name for itself among the top colleges at the university level through its performance in examinations, placements, seminars, conferences and industry interactions. The department not only focuses on the technical education of its engineers but also engages and supports them through technical and non-technical association activities. The department believes that supporting students through career and life related talks, and empowering them with the latest technologies, will nurture their growth and make them market-ready.

Vision

To impart quality and responsive education in electronics and communication engineering for the overall development of students to meet global challenges.

Mission

M1: Adopt a transformative teaching-learning pedagogy to empower our students with domain knowledge and practical skills in resonance with technological developments.
M2: Impart multi-disciplinary knowledge, and train our students to develop the relevant professional competency skills.
M3: Create a cogent ambiance to comprehend technical and management principles, and the efficacy of lifelong learning.
Courses offered by the Department: Bachelor of Engineering (B.E) - Electronics and Communication Engineering HOD’S MESSAGE Welcome to the Department of Electronics and Communication Engineering at Angadi Institute of Technology and Management, Belagavi. The department was established in 2009, with an intake of 60 students, with the aim of providing leadership in the field of Electronics & Communication Engineering. The department is located in a sprawling environment with state-of-the-art facilities and highly qualified faculty, and works with the objective of addressing critical challenges faced by industry, society and academia. Perhaps even more important is our unceasing commitment to our students, helping them learn, grow, develop, and achieve their goals in their pursuit of excellence in their professional careers. The department faculty work with excellent team spirit in different technical teams such as Communication, Signal Processing, VLSI, Embedded Systems, Wireless Sensor Networks and many other areas. The department strives to provide a conducive environment for students to develop analytical and practical skills and apply them to real practical problems. To motivate students, the department organizes regular training in state-of-the-art software and hardware. Electronics and communication engineering is a dynamic and exciting area that provides excellent career opportunities in various areas of technology. The department faculty are committed to teaching our students the fundamental concepts and the latest trends via a smart teaching and learning process. Students are also taught critical thinking and problem-solving skills so that they can face their future with confidence. In addition to classroom teaching, students are guided and motivated to practically implement the principles learnt in classrooms through experimentation in the laboratories and through innovation centers on Robotics and the Internet of Things. 
Students can also join clubs of different types, such as Circuit Design, Music, Arts, and Sports. We welcome you to the Electronics and Communication Engineering Department as an undergraduate student, and we hope to be part of your success. Prof. Raviraj Chougala Program Educational Objectives (PEOs) PEO 1: To provide our graduates with core knowledge of electronics, programming, and scientific and engineering fundamentals necessary to formulate, analyze and solve electronics and IT related engineering problems, and to prepare them for industry-related technologies. PEO 2: To develop an ability to analyze the requirements and technical specifications of hardware, software and firmware to articulate novel engineering solutions for an efficient product design. PEO 3: To provide exposure to emerging technologies, and to provide relevant training to work in teams on multidisciplinary projects, which in turn may develop communication skills and leadership qualities. PEO 4: To prepare our graduates to pursue a professional career adopting work values with a social concern to bridge the digital divide, while meeting the requirements of Indian and multinational companies. PEO 5: To make our graduates aware of the efficacy of life-long learning, professional ethics and practices, so that they may become global leaders. Program Outcomes (POs) PO 1: Engineering Knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems. PO 2: Problem Analysis: Identify, formulate, review research literature, and analyze complex engineering problems, reaching substantiated conclusions using first principles of mathematics, natural sciences and engineering sciences. 
PO 3: Design/Development of Solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for public health and safety, and cultural, societal, and environmental conditions. PO 4: Conduct Investigations of Complex Problems: Use research-based knowledge and research methods, including design of experiments, analysis and interpretation of data, and synthesis of information, to provide valid conclusions. PO 5: Modern Tool Usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools, including prediction and modeling, to complex engineering activities with an understanding of the limitations. PO 6: The Engineer and Society: Apply reasoning informed by contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to professional engineering practice. PO 7: Environment and Sustainability: Understand the impact of professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development. PO 8: Ethics: Apply ethical principles and commit to professional ethics and responsibilities and the norms of engineering practice. PO 9: Individual and Team Work: Function effectively as an individual, and as a member or leader in diverse teams and in multidisciplinary settings. PO 10: Communication: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions. 
PO 11: Project Management and Finance: Demonstrate knowledge and understanding of engineering and management principles and apply these to one’s own work, as a member and leader in a team, to manage projects in multidisciplinary environments. PO 12: Lifelong Learning: Recognize the need for, and have the preparation and ability to engage in, independent and life-long learning in the broadest context of technological change. Program Specific Outcomes (PSOs) PSO 1: Graduates trained with core knowledge of Electronics and Communication Engineering will be ready to apply state-of-the-art technology to solve engineering and socially relevant problems. PSO 2: Engineers who can engage in team work with professional ethics, technical knowledge and effective communication skills to address practical issues in industry and society. PSO 3: Technologists who can analyze and design innovative projects through research and sustainable technology.
https://www.ece.aitmbgm.ac.in/
This application is a National Stage completion of PCT/EP2008/053634 filed Mar. 27, 2008, which claims priority from German patent application serial no. 10 2007 016 761.1 filed Apr. 7, 2007. The present invention concerns a method and a system for controlling and/or regulating an automatic transmission. From automotive technology it is known that with automatic transmissions having stepped transmission ratios a driving situation can arise in which the maximum permitted motor speed is reached but too little power is available in the engaged gear. Owing to the lack of power, the motor cannot generate the required traction force or the desired acceleration capacity needed for the next-higher gear step. The driving situation described above is as a rule recognized by the driving strategy of an automatic transmission, and as a result an upshift is prevented in this driving situation. The frequency with which this driving situation occurs increases as the ratio intervals between gears become larger, i.e. the driving situation occurs more frequently in transmissions with fewer gears, for example with six gears, than in transmissions with a larger number of gears, for example with 16 gears. Due to the prevention of upshifts in such driving situations, the problem arises that the vehicle's motor runs for a longer time at the maximum motor speed. When the motor is running at full-load speed, its efficiency is lower; consequently, fuel consumption rises considerably. Moreover, the environment is affected adversely by larger quantities of exhaust gas emissions and by greater noise production. In addition, driver acceptance is reduced, since in the driving situation described the driver is expecting an upshift. For example, from DE 197 40 648 A1 a device for controlling gearshifts in an automatic transmission is known. In this device a means for preventing upshifts is provided, which only permits or carries out the upshift after a given time. 
When the current motor speed corresponds to the maximum motor speed, the upshift is prevented. However, as soon as the current speed tends to exceed the maximum speed, the upshift is initiated so that the motor speed is reduced again. Thus, with the known device, exceeding the maximum motor speed is prevented because an upshift takes place. Accordingly, the problem outlined above, namely that of insufficient power in the next-higher gear, is not solved at all by the known device, since with that device an upshift to the next-higher gear leaves the motor unable to provide the necessary traction force and the necessary acceleration capacity. Moreover, since the known device permits operation at maximum motor speed, fuel consumption also increases. The purpose of the present invention is therefore to propose a method and a system of the type described at the start with which, even when an upshift is prevented owing to insufficient power, a motor can be operated in as optimal a range as possible in relation to its efficiency. According to the claims, the objective of the invention is achieved by a method for controlling and/or regulating a multi-step automatic transmission of a vehicle, in which an upshift is prevented on the grounds that the power limit of the vehicle's motor has been reached, such that once a driving situation in which the power limit has been reached is recognized, the motor speed is restricted to a predetermined, consumption-friendly limit value. By limiting the motor speed, in effect a more consumption-friendly full-load speed is set as the limit value upon recognition of the driving situation, so that not only is a reduction of consumption obtained due to better motor efficiency at the established full-load speed, but the driver's acceptance is also increased. 
The driver acceptance is improved by the fact that owing to the limited motor speed, the driver is made aware of the power deficiency of the motor in the current gear, and will therefore not expect an upshift in this driving situation. Furthermore, by virtue of the method according to the invention the impact on the environment is reduced considerably because the quantity of exhaust emissions is smaller and the output of noise is lower. In the context of a possible embodiment variant of the invention it can be provided that with the method according to the invention, the limit value is determined as a function of specified vehicle and/or vehicle status parameters. It has been found advantageous to determine the limit value, for example, as a function of the current transmission ratio step and/or the speed of the vehicle and/or the duration of the driving situation. Other vehicle or vehicle status parameters can also be taken into account for the determination of the limit value. For example, if in a particular driving situation the motor has already exceeded the motor speed set as the limit value, the current speed can be maintained and thus determined as the limit value. It is also possible, however, for the speed to be reduced to the established limit value so as to keep to the limit value originally defined. Preferably, the limit value is set below the maximum motor speed so as to ensure more economical operation of the vehicle's motor. As soon as the driving situation again permits an upshift, it can be provided in accordance with a further development of the invention that the limit value is cancelled and the motor speed is adjusted to a predetermined level, for example to the appropriate speed for shifting, so as thereby to enable the desired shift operation to be carried out. In this situation other motor speed strategies are also possible, in particular when certain driving programs are activated in the automatic transmission. 
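The speed-limiting behavior described above can be sketched as follows. This is an illustrative reconstruction, not code from the patent: the function name and parameter names are assumptions, and the rpm figures are taken from the worked example later in the text.

```python
N_MAX = 2300            # maximum (full-load) motor speed in rpm (example value)
N_LIMIT_DEFAULT = 2100  # consumption-friendly limit value in rpm (example value)

def speed_ceiling(current_rpm, upshift_blocked, kickdown_active):
    """Return the motor-speed ceiling for the current driving situation."""
    if not upshift_blocked or kickdown_active:
        # No limit imposed: the driving situation permits an upshift again,
        # or a driving program such as kick-down suppresses the limit.
        return N_MAX
    if current_rpm > N_LIMIT_DEFAULT:
        # Variant described in the text: if the motor has already exceeded
        # the limit value, the current speed may be held as the limit.
        return min(current_rpm, N_MAX)
    return N_LIMIT_DEFAULT
```

In the alternative variant mentioned in the text, the speed could instead be reduced back to `N_LIMIT_DEFAULT`; the limit could also be computed from the current gear step, vehicle speed and duration of the driving situation rather than held constant.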
According to a related feature of the invention it can be provided that no motor speed limit value is set in the case of predetermined driving programs or driving conditions. For example, in the case of a manual shift operation or certain other selected driving programs of the automatic transmission, it is possible not to set a motor speed limit. Among these driving programs the so-termed kick-down operation can also be included. However, when the kick-down driving program is active it is also possible for a motor speed limit value to be set when the power limit is reached. In such a case, however, the parameters for setting the limit value should be adapted appropriately and thus changed. Independently of the driving situations outlined at the beginning, in which an upshift is prevented, the method according to the invention can also provide that the full-load speed is limited when the highest gear is reached and/or when in reverse gear. The nominal highest gear step can also be reached, for example, if a particular driving program restricts the driving range to gears 1 to 3 in the automatic transmission. Then, reaching the third gear is tantamount to reaching the highest permitted gear. The objective of the invention is also achieved by a system for controlling and/or regulating a multi-step automatic transmission of a vehicle, comprising a device for controlling shift processes which prevents an upshift when the power limit of the vehicle's motor has been reached. According to the invention, the system provides that on recognizing that the power limit has been reached, the device sets a predetermined limit value upon the motor speed. In this way the advantages already mentioned in the method context are obtained. Preferably, the device proposed according to the invention is used to implement the method described earlier. However, other fields of use are also conceivable. FIG. 1 shows, as an example, the power P, the torque M and the fuel consumption V plotted in a diagram as functions of the motor speed n. The diagram shown in FIG. 2 represents various motor speed variations. The motor speed variation indexed A occurs if the motor speed n is not limited when an upshift is prevented. The variation A, in which the full-load speed of the motor is not reduced, is shown as a continuous line. Index B denotes another speed variation, in which a limit value n_Limit is set for the motor speed n. Variation B is shown as a line with points along it. Finally, a variation C, represented as a broken line, is also shown in FIG. 2; this line indicates when a shift is possible. Speed variation A shows that because of the prevented upshift, the motor speed n increases steeply until the maximum or full-load speed n_max of 2300 revolutions per minute has been reached. As soon as an upshift to the next-higher gear step becomes possible again, which is indicated as a rise of variation C to the value 1, the motor speed n falls suddenly and steeply until it finally reaches the value of 1800 revolutions per minute. Variation A shows that the motor speed n remains at 2300 revolutions per minute for quite a long time. At this speed the fuel consumption is about 225 g/kWh. Applying the method according to the invention, when an upshift to the next-higher gear step is not possible because of insufficient power, the motor speed n is set at a predetermined limit value n_Limit. This speed management in accordance with the proposed method is represented by the speed variation B, shown as a line with points along it. In the case of variation B there is at first a speed increase similar to that of variation A. However, when the set limit value n_Limit of the motor speed, which in the example shown is established at 2100 revolutions per minute, has been reached, the speed remains constant. 
Only when an upshift is permitted again and the limit value n_Limit is cancelled can the motor speed n be increased for a short time to a predetermined shift speed in order to carry out the shift operation. In the next-higher gear the speed falls to approximately 1750 revolutions per minute. At the limit value n_Limit of 2100 revolutions per minute the fuel consumption is approximately 220 g/kWh. A comparison of the two variations A and B makes it clear that by using the limit value n_Limit for the full-load speed, as in the method according to the invention, the speed is kept lower for a longer time. Consequently, when the method and system according to the invention are used, fuel consumption is reduced by approximately 2.5%. Indexes: A Speed variation with no imposed limit; B Speed variation with an imposed limit; C Shift; P Power; M Torque; V Fuel consumption; n Motor speed; n_Limit Limit value; n_max Maximum motor speed. BRIEF DESCRIPTION OF THE DRAWINGS Below, the present invention is described in more detail with reference to the drawings, which show: FIG. 1: Representation of an example motor performance diagram, in which power, torque and fuel consumption are shown as functions of the motor speed; and FIG. 2: Diagram showing examples of speed variations with and without limitation of the motor speed.
Richard Epstein: Big Obstacles in Trumpcare’s Path – Newsweek As the proposed American Health Care Act (AHCA) moves to the Senate, the Republican agenda to “repeal and replace” Obamacare faces major obstacles. Two of the knottiest provisions in the GOP’s complex legislative package deal with how insurance carriers may set premiums for their customers. As a general matter, the cost of health care rises with age: older people cost more to insure than younger people, often a lot more. Are insurance carriers entitled to take those differences into account in setting rates? The second question involves the supply of insurance for persons who have preexisting conditions—a trait that makes them more expensive to insure. Everyone agrees that the Affordable Care Act is under stress as major insurers continue to exit the individual market because of an inability to cover their costs. But there is widespread disagreement about what, if anything, to do next. As is so often the case, it is easy for legislation to impose new regulations on the insurance markets. It is a lot harder to figure out how, if at all, to undo the mess. The House bill faces rough sledding in the Senate, and the prognosis for sensible reform is bleak. To see why, it is necessary to go back to first principles. Under a competitive market system for individual insurance, all individuals have to pay the full freight to get coverage for their potential risks, because cross-subsidies between different classes of customers can never survive. In order to give some individuals lower rates, other individuals have to be charged amounts that exceed the accurate estimated costs of their conditions. Since other alternative insurers are by definition available, these overcharged individuals will migrate to another insurer that does not impose the subsidy surcharge. Any insurer that persists in undercharging their high-risk patients will therefore be on the rapid road to bankruptcy. 
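The unraveling argument above can be made concrete with a back-of-the-envelope calculation. All of the figures below are invented for illustration; nothing in the article specifies them.

```python
# Hypothetical pool: actuarially fair annual costs and headcounts.
low_risk_cost, high_risk_cost = 2000, 8000
n_low, n_high = 900, 100

# An insurer charging one community rate must cover total expected cost:
community_rate = (n_low * low_risk_cost + n_high * high_risk_cost) / (n_low + n_high)
# Each low-risk customer overpays (community_rate - low_risk_cost) to
# subsidize the high-risk group.
overcharge = community_rate - low_risk_cost

# A competitor can offer low-risk customers their true cost (2000), so they
# migrate; the first insurer is left with only high-risk customers and must
# either charge 8000 or lose money -- the cross-subsidy cannot survive.
```

With these numbers the community rate works out to 2600, a 600 overcharge per low-risk customer, which is exactly the margin a competing insurer can undercut.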
Even without these cross-subsidies, the overall system remains in equilibrium because all potential insureds, whether high- or low-risk, only enter into contracts from which they are net winners: even high-risk customers, paying high premiums, get the primary advantage of insurance by being able to average out their income over future uncertain states of the world. The great but unappreciated advantage of this market system is that it is perfectly steady. No one has to worry about who else is in their insurance pool, because their rates are stable in the absence of cross-subsidies. In this type of system, individuals have an incentive from an early age to enter into long-term coverage arrangements that will protect them, through guaranteed renewal at specified rates, in the event that their health takes a turn for the worse. But now that the Affordable Care Act has embedded these cross-subsidies, trying to undo the rock-solid political consensus against a pure market solution, on both sides of the political aisle, is tilting at windmills. Even the Republican plans preserve major subsidies. But these are not large enough to satisfy the critics. Thus the American Medical Association, whose members benefit from a continued influx of public money into the system, hit both points hard: The bill passed by the House today will result in millions of Americans losing access to quality, affordable health insurance and those with pre-existing health conditions face the possibility of going back to the time when insurers could charge them premiums that made access to coverage out of the question. There is indeed little doubt that the legislation will reduce the total subsidies now moving from rich to poor people. 
During his campaign, then-candidate Trump wrote that he “does not believe health insurance carriers should be able to refuse coverage to individuals due to pre-existing conditions.” That leaves open the question of how much more, if anything, high-risk patients can be charged relative to others. The key provision of the AHCA fudges this particular question by saying that the legislation does not allow health care insurers “to limit access to health care coverage for persons with preexisting provisions.” But that provision would not prevent them from increasing the cost of coverage so as to reduce or eliminate the amount of the implicit subsidy. In addition, there is also fierce resistance to those provisions in the AHCA that would allow the states to raise the community rating differential, so that insurers could increase from three-fold to five-fold the rate differential between their youngest and oldest customers, in order to reduce that cross-subsidy. Such intense opposition to these key AHCA provisions is indicative of the huge difficulty of managing cross-subsidies. If these are kept too large, as they are under the current Affordable Care Act, then insurance in the individual market will implode because young people will flee from plans that offer them a raw financial deal. But if the level of the subsidy is reduced, then the cost of existing coverage for older insureds will necessarily skyrocket, so that they will be forced to leave their plans. One possible way to control this problem is for the federal government to cover the costs of the needed subsidies from general revenues, which would make their cost explicit by putting it on the budget, allowing for a political debate about the size of the subsidy. But elected officials are reluctant to raise taxes, and even when they do, there is a real question of whether the funds set aside are sufficient to cover any shortfall that might exist. 
The simplest way to attack the size of the subsidy is to reduce the set of benefits that are included in the health plan. On this score, the rich set of essential medical benefits under the ACA is far more extensive than those provided in any voluntary market, which is a good sign that they should be pared back in ways that make coverage more affordable. Private insurance companies in an unregulated market can alter their product mix in response to changes in cost and demand. But government programs face huge rigidities in this regard, because every type of current service supplier will lobby furiously to make sure that its benefits survive the financial axe. The new bill does not attack this problem directly, but allows for states to gain waivers from the essential benefits, inviting a massive political battle as to which particular benefits will be cut and why. At the same time, it is hard to see what progress can be made in dealing with preexisting conditions. Thus under the House version of the AHCA, states may seek waivers that allow insurers to charge more for preexisting conditions, but only if they set aside sufficient funds to help those hurt by the rise in market rates. The AHCA contains $138 billion to deal with the issue, to be divvied up among 50 states to help them reach their goal. But, as with essential minimum benefits, this provision raises at least as many questions as it answers. It is never clear whether these funds are sufficient to cover the shortfall, and, if so, how they are to be allocated across the states. Nor is it clear just how much funding any state must commit to the program in order to make the waiver good. The AHCA does not set up a competitive market in which each firm makes its own pricing system. What it does is propose an alternative system with a different set of coverage formulas and cross-subsidies that no one can figure out how to price in advance, which accounts for some of the intense opposition to the legislation. 
A better approach toward this issue is to control the opportunism that exists right now in dealing with these conditions. The ACA gives individuals wide latitude in the periods over which they can sign up. People who learn that they have preexisting conditions can thus swoop down on the system in order to receive some needed surgery or expensive drug treatment, only to opt out once their course of treatment is over. The net result is a raid on the treasury that makes it harder for others to remain long-term participants within the system. The current legislation has a proposal that permits a 30 percent price increase lasting one year if the gap in coverage is 63 days or longer. But again, there is a serious Goldilocks problem. No one knows whether the 30 percent increase is too large, too small, or just right. Nor is there any way to figure this out in advance, before the legislation becomes law. It is discouraging, to say the least, that the Republican project is so flawed. Obamacare’s dismantling of the voluntary market is not easily repaired, given all that has happened since 2010. The overall lesson is that politicians should be aware that the evident pitfalls of existing markets might well be far smaller than those of the untried legislative systems that replace them.
To ensure that a quality product is delivered on schedule, every project at Frontware undergoes the following steps: The statement of work, supplied by the client, typically comprises a description of the nature of the problem, an estimate of its magnitude and a tentative required-by date. When the statement of work is received, it is evaluated by Frontware, and a "Requirement Specification" is generated from it. This essentially reflects Frontware's understanding of the problem, and the deliverables are clearly established. Once the requirement specification is prepared, the Project Manager / Project Leader does a feasibility analysis of the project using an automated tool, which typically employs an estimation method such as Function Point Analysis. This analysis decides the acceptance or otherwise of the project by Frontware, and is reviewed by the concerned Technical Manager. The work order is an administrative document which officially "kicks off" a project. It has references to the initial pre-project phase documents, and officially associates the Project Manager / Project Leader with the project. Subsequently, every reference to the project uses the internal work order number or the project ID assigned in the work order. The project plan is prepared by the Project Manager / Project Leader. The plan clearly establishes the purpose of the project, the scope of the project, and the interfaces with the customer, among other things. Based on the estimates made during the feasibility analysis phase, a project schedule is prepared with a projected time frame for resource utilization. Work breakdown is performed, and individual tasks are identified, as are any milestones. The deliverables identified during the requirement specification phase are broken down into concrete units, and the acceptance criteria for each of them are established. 
The configuration management method is established, as is any required risk management. The mechanism of change management is established and communicated to the customer. The project plan is reviewed by a peer Project Manager / Project Leader, approved by the Technical Manager, and then sent to the customer for approval. Based on the project plan, the required resources are allotted to the project from the resource pool. This allocation can be partial, in that a particular resource might not be allocated 100% to a single project, or it can be complete. The project execution phase typically involves three major activities: minor design/design changes, construction and unit testing. During execution, a suitable project management tool is employed to translate the work breakdown into tasks, allocate them to the team members, and monitor the progress of the project. Any new design, and all design changes, are reviewed by the Project Manager. Also, at the end of each unit of construction, a review / desk debugging is performed by a peer to ensure the quality of the output.
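As a rough illustration of the kind of estimate the feasibility tool mentioned above might produce, here is a minimal unadjusted Function Point count. The weights are the standard IFPUG average-complexity weights; the component counts and the function name are hypothetical, not taken from Frontware's process.

```python
# Standard IFPUG average-complexity weights per component type:
# EI = external input, EO = external output, EQ = external inquiry,
# ILF = internal logical file, EIF = external interface file.
AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts):
    """Sum each component count times its average weight."""
    return sum(AVG_WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical project: 10 inputs, 6 outputs, 4 inquiries, 3 internal
# files, 2 interface files.
fp = unadjusted_fp({"EI": 10, "EO": 6, "EQ": 4, "ILF": 3, "EIF": 2})
```

The raw count would then be scaled by a value-adjustment factor and multiplied by a team-specific productivity rate (hours per function point) to yield the schedule estimate used in the project plan.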
https://www.frontware.com/Statement-of-work
The invention relates to a method and device, based on an intelligent voice robot, for identifying and governing false-propaganda (scam) calls. The method comprises the steps of: carrying out voice analysis of accessed signaling and media data to obtain a real-time media stream and/or text data; performing semantic comprehension and intention judgment by the intelligent voice robot according to the real-time media stream and the text data, generating a corresponding reply statement and corresponding call media data, and sending the call media data to the calling party; and recording the false-call file information of each call, which serves as a corresponding application scenario during subsequent call identification. The beneficial effects of the method are that the intelligent voice robot is introduced for voice interaction, so the call time of a false-propaganda source can be occupied effectively and the number of times victims are reached can be reduced indirectly; the false-propaganda source is interfered with, and the fraud success rate is reduced.
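The interaction loop described in the abstract might be sketched as follows. Every name here is a hypothetical placeholder; the patent specifies no API, and the speech-recognition, intent, and synthesis components are passed in as stand-in callables.

```python
def generate_reply(intent):
    # Stall-style responses keyed by intent; entirely hypothetical examples.
    canned = {
        "ask_payment": "Can you repeat the account number?",
        "greeting": "Hello, who is calling?",
    }
    return canned.get(intent, "Sorry, could you say that again?")

def handle_call(media_stream, asr, nlu, tts, send):
    """Engage the caller turn by turn and record the exchange."""
    transcript = []
    for audio_chunk in media_stream:
        text = asr(audio_chunk)        # speech -> text
        intent = nlu(text)             # semantic comprehension / intent judgment
        reply = generate_reply(intent) # pick a time-consuming reply statement
        send(tts(reply))               # text -> call media data, back to caller
        transcript.append((text, intent, reply))
    return transcript                  # recorded as a "false call file"
```

The returned transcript corresponds to the recorded false-call file information that the abstract says is reused as an application scenario for identifying later calls.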
PROBLEM TO BE SOLVED: To provide a pachinko game machine capable of further enhancing the fun of a game through a performance method that executes a notice performance. SOLUTION: A pachinko game machine is configured to allow an information output part to display a false continuation performance, namely a performance executed in conjunction with the execution of a false variation, and to display a predetermined image as the reserved read-ahead performance display content. When the predetermined image is displayed in the false variation executed during the variation display of identification information for main games relating to a reservation for which predetermined conditions are satisfied, the machine either permits display of a false continuation performance whose display contents differ from those of the false continuation performance displayable when the predetermined image is not displayed, or relatively reduces the execution frequency of the false continuation performance compared with when the predetermined image is not displayed. COPYRIGHT: (C)2015, JPO&INPIT
The Children and Family Research Center (CFRC, the Center), a unit within the School of Social Work at the University of Illinois at Urbana-Champaign, is seeking applications for a Research Specialist/Senior Research Specialist. The Center was established in 1996 in collaboration with the Illinois Department of Children and Family Services to identify and monitor child welfare service outcomes, conduct program evaluations of key interventions and complete research on critical issues that impact the success of these efforts. The Center conducts child welfare program evaluations, sponsors original research and promotes evidence-based practices that enhance the safety, permanence and well-being of children in or at risk of substitute care. Duties and Responsibilities: Support current research initiatives, seek and obtain funding for new research initiatives, and provide consultation to child welfare agency and Center staff on research methodology and statistics. Work within a team environment on research projects that fulfill the Center’s mission to 1) maintain a research program that is responsive to the Illinois Department of Children and Family Services responsibilities under statutes and court orders and 2) contribute to scientific knowledge about child safety, family permanence and child and family well-being. In addition to collaborative work on existing Center projects, the hire will develop and lead his/her own program of research related to one or more of these topics. Disseminate research findings through academic journal publications, research reports, briefs targeted toward child welfare administrators and professionals, and presentations at local, state and national conferences. The Senior Research Specialist position will have additional expectations, such as obtaining external funding and mentoring/supervising junior researchers and graduate students affiliated with the Center. 
Minimum Education: A Master’s degree is required for Research Specialist; a PhD is required for Senior Research Specialist. Degrees need to be in Psychology, Sociology, Social Work or a related field. Minimum Experience: At least 4 years of professional experience for a Research Specialist; at least 10 years of professional experience for a Senior Research Specialist. Knowledge Requirements: A strong background in quantitative or qualitative research methodologies, or both. Knowledge of and contribution to scholarly literature and research in child welfare, and the ability to work collaboratively in a team-oriented workplace. This is a full-time (100%), twelve-month non-tenure track academic professional position. The position is grant-funded and renewal is contingent upon availability of funds. The selected applicant will work in the Urbana, Illinois office and report to the Center Director. Salary is competitive and commensurate with qualifications. The proposed start date is as soon as possible after the closing date. Please create your candidate profile at https://jobs.illinois.edu/ and upload your resume, a cover letter specifying whether you are applying for Research Specialist or Senior Research Specialist, a writing sample and the names and email addresses of three professional references by 03/18/2014. All requested information must be submitted for your application to be considered. For further information regarding application procedures, you may contact Jaime Waymouth at [email protected]. You may also visit http://www.CFRC.illinois.edu for additional information. The University of Illinois is an Affirmative Action/Equal Opportunity Employer. The administration, faculty and staff embrace diversity and are committed to attracting qualified candidates who also embrace and value diversity and inclusivity (www.inclusiveillinois.illinois.edu).
https://socialwork.illinois.edu/research-specialistsenior-research-specialist/
Located in the beautiful setting of Imerovigli in the northwest of the island, high above the Caldera, the Grace Santorini is the perfect vantage point from which to view the famed Santorini sunsets that envelop the Aegean Sea and the Cyclades Islands. The five-star luxury hotel features two conventional pools, one infinity pool, and black sand beaches. The hotel uses a white, monochromatic color scheme to match the rest of Santorini, and each room comes with a private balcony. The hotel rests near the small town of Imerovigli, which gives guests a sense of seclusion and privacy.
http://www.inspirefirst.com/2012/08/23/breathtaking-grace-hotel-santorini-islands/
How to Make a Glazed Ham Steak (with Pictures) Combine the glaze ingredients – sugar, maple syrup, mustard, paprika, and vinegar – in a large mixing bowl. Melt the butter in a cast iron skillet or other heavy frying pan over medium heat. Cook the ham for 2-3 minutes per side, or until it begins to brown around the edges and is cooked through. How to cook ham steak with brown sugar glaze? Everyone will be pleased with this Skillet Brown Sugar Glazed Ham Steak dish when you serve it at your next dinner party! It’s quick and simple to prepare, and it tastes fantastic. In a large pan, melt the butter and add the ham. Cook until both sides are golden brown. (Each side should take around 3-4 minutes.) Remove the ham steak from the skillet and set it aside to keep it warm. How to cook a ham steak in a pan? - Served with mustard, this ham steak is glazed with a sweet glaze and is a quick and easy dish to prepare! - In a large frying pan, cook the ham steaks and butter over medium heat until golden brown. - Cook for approximately 2-3 minutes per side. - Brush the steaks with the remaining ingredients after mixing them together. Cook for a further 2-3 minutes per side, or until the ham is cooked through and the glaze is sticky. How to Glaze A Ham in a pan? - In a medium-sized mixing bowl, combine the honey, 1 tablespoon pineapple juice, brown sugar, mustard, and smoked paprika until well combined. - Continue whisking until everything is fully blended. - If you are using the oven technique, brush the glaze ingredients over top of the ham that has been placed in a baking dish coated with aluminum foil before placing it in the oven. - Reduce the heat to a low setting. How to cook a ham steak on a charcoal grill? - Begin by preparing the ham glaze, which should be heated in a small sauce pan until bubbling. - Turn off the heat and put the saucepan aside. - Place the ham steak on the grill and cook it for 2-3 minutes on each side until it is browned. 
- Cook for another 1-2 minutes on the other side after flipping. Place the ham steak on a dish, sprinkle with the glaze, and top with the pineapple chunks. Serve immediately. How do you glaze a ham? Spoon the glaze over the whole ham or use a basting brush to distribute it over the ham. If you have a large ham, you may need to glaze it many times. Honey or maple syrup can be used to make a glossier ham by brushing it on top of the glaze after it has been baked. Place the ham back in the oven and bake it until it is done. Do you glaze ham before or after? The glaze can be applied to the ham around 30 to 60 minutes before it has finished cooking, due to the low temperature of the environment. How do you glaze a ham without drying it out? Make sure to keep the ham covered until it’s time to eat it, and to keep the leftovers covered so they don’t dry out. After you take the ham out of the oven, baste it several times with the juices from the ham. This will aid in increasing the moisture content of your ham. Just before taking the slices from the pan to place them on the serving dish, baste them again. Does ham need a glaze? You will most likely want to glaze the ham during the last 15 to 20 minutes of baking time. If you glaze it too early, the sugar in the glaze may burn the surface of the ham. Each ham weighing between 5 and 10 pounds will require at least 1 cup glaze, if not more. How do you make a glaze? How to use a kiln to glaze ceramic pots - Preserve as much cleanliness as possible in your bisque-fired piece. Before you begin, use a clean sponge or a moderately moist cloth to remove any dust from the surface. - Make sure to thoroughly combine your glazes. - Make a decision on how you will apply your glaze. - Fire the glaze according to the manufacturer’s directions. How do you use the glaze packet that comes with the ham? DIRECTIONS - Preheat the oven to 350 degrees Fahrenheit. - Place the ham in a shallow roasting pan and set aside. 
- In a large saucepan, combine the contents of the glaze packet with 2 cups water and the brown sugar. - Meanwhile, in a small mixing bowl, whisk together 1 tablespoon water and cornstarch until smooth. - Bake the ham according to the package directions, basting it with the glaze every 15 minutes until it is done. Can you glaze ham after its cooked? Is it possible to glaze a fully cooked ham while it is still warm, shortly before it is served? Yes, however the best practice is to glaze a ham just before serving. This is done in order for the heat to dissolve the glaze and allow the meat to tenderize. If the glaze is left on the ham for an excessive amount of time, the flesh will become tough. Can I glaze ham the night before? Make the following ahead of time and bake it on the day of service: Make the glaze up to 5 days ahead of time – even a week or more ahead of time should be OK; Remove the skin from the ham and score it, then place it in the refrigerator until needed. On the day of baking, simply baste and bake! Why do you glaze a ham? Cooking: Because the ham has already been cooked, the glaze’s goal is to enhance the flavor of the ham by incorporating your own flavor notes and caramelizing the fat. To compensate for differences in size, you can reduce or increase the cooking time until you get the desired amount of caramelization. How do you keep ham moist? While the ham is cooking, instead of pre-bathing or basting it throughout the process, add a half cup of chicken stock, wine, water, or a combination thereof to the bottom of the pan. This will help to keep the meat tender and moist throughout the baking process. Should I wrap my ham in foil? Wrap and cover the ham tightly with aluminum foil to ensure that none of the juices escape. Placing the ham in a baking pan and cooking it for roughly 20 to 25 minutes per pound, or according to the package guidelines for cooking times, is recommended. 
When the internal temperature of a completely cooked ham hits 130 degrees F to 140 degrees F, it is considered done. Are all hams precooked? Briefly stated, ham that has been cooked by curing, smoking or baking is considered "pre-cooked," and so does not need to be cooked further. In this case, the ham that was purchased at the deli qualifies. The majority of ham offered to customers is already cured, smoked, or baked at the time of purchase. What is ham glaze made of? GLAZE FOR HAM Honey, brown sugar, or maple syrup are the ideal ingredients to use as a glaze on a baked ham. A glaze made with any of these components would be rather exceptional, since the saltiness of the ham and the sweetness of the glaze complement each other so beautifully. Do you cover ham after glazing? Cook the ham gently in a skillet with at least 1/2 cup of water, wine, or stock, and cover it with aluminum foil to prevent it from drying out (until you’ve applied the glaze, at which point the foil comes off). Give your ham a little homemade tender loving care! How do you cook ham so it’s tender? Use foil to cover it, but don’t press it down too tightly; this will prevent it from becoming too crusty on the exterior while still retaining a little texture within. Preheat the oven to 275 degrees Fahrenheit. Other than that, don’t touch it for a minimum of eight hours. Remove it from the oven and set it aside to cool while you prepare the biscuits (you have to have biscuits).
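The rules of thumb scattered through the answers above – roughly 20 to 25 minutes per pound, at least 1 cup of glaze per 5 to 10 pound ham, and a finished internal temperature of 130 to 140 degrees F – can be collected into one small planning sketch. The function below is purely illustrative; its name and structure are not from the article, only the numbers are.

```python
# Rough planning helper based on the article's rules of thumb:
#   - bake roughly 20-25 minutes per pound
#   - allow at least 1 cup of glaze (more for hams over ~10 lb)
#   - a fully cooked ham is done at an internal temperature of 130-140 F

def ham_plan(weight_lb: float) -> dict:
    """Estimate bake time, glaze quantity, and target temperature for a ham."""
    return {
        "bake_minutes_min": round(weight_lb * 20),
        "bake_minutes_max": round(weight_lb * 25),
        "glaze_cups_min": max(1, round(weight_lb / 10)),  # never less than 1 cup
        "done_temp_f": (130, 140),
    }

print(ham_plan(8))  # an 8 lb ham: roughly 160-200 minutes, 1+ cup of glaze
```

For an 8 lb ham this gives a 160 to 200 minute bake window, which matches the "20 to 25 minutes per pound" guidance directly.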
https://theurbanbaker.com/snacks/how-to-glaze-ham-steak.html
INERTIAL AND GRAVITATIONAL MASSES: (i) Inertial mass :– It is a measure of the ability of a body to oppose the production of acceleration in it by an external force. It also measures the inertia of a body. Let F be the applied force on a body which produces an acceleration a. Then, F = ma, so m = F/a (i) Gravity has no effect on the inertial mass of the body. (ii) Inertial mass does not depend upon the size, shape and state of the body. (iii) Inertial mass of a body does not depend upon the presence or absence of other bodies near it. (iv) Inertial mass of a body is directly proportional to the quantity of matter contained in the body. (v) Inertial mass of a body increases with increase in velocity: m = m0 / (1 - v^2/c^2)^(1/2) where m0 is the mass of the body at rest, v is the velocity of the body, and c is the velocity of light in vacuum. (ii) Gravitational mass :– It is defined as the mass of a body which determines the magnitude of the gravitational pull between the body and the earth. Let F be the gravitational force on a body of mass m due to the earth. Then F = GMm/R^2, so m = FR^2/GM, where R is the radius of the earth and M is the mass of the earth. The mass of a body determined in this way is the gravitational mass of the body. Gravitational mass is the same as inertial mass in all respects, except in the method of measurement.
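The two definitions can be checked numerically. The sketch below is illustrative only: the constants are standard textbook values and the 10 kg example body is made up. It computes inertial mass from F = ma, gravitational mass from F = GMm/R^2, and the relativistic increase of inertial mass with speed; for the same body both definitions give the same mass, as the notes state.

```python
import math

G = 6.674e-11        # gravitational constant, N m^2 / kg^2
M_EARTH = 5.972e24   # mass of the earth, kg
M_EARTH = 5.972e24
R_EARTH = 6.371e6    # radius of the earth, m
C = 2.998e8          # speed of light in vacuum, m/s

def inertial_mass(force_n: float, accel_ms2: float) -> float:
    """m = F / a : mass inferred from the body's response to an applied force."""
    return force_n / accel_ms2

def gravitational_mass(weight_n: float) -> float:
    """m = F R^2 / (G M) : mass inferred from the earth's gravitational pull."""
    return weight_n * R_EARTH**2 / (G * M_EARTH)

def relativistic_mass(rest_mass_kg: float, v_ms: float) -> float:
    """m = m0 / sqrt(1 - v^2/c^2) : inertial mass grows with velocity."""
    return rest_mass_kg / math.sqrt(1 - (v_ms / C)**2)

# A 10 kg body: a 20 N push produces a = 2 m/s^2, and its weight is about 98.2 N.
print(inertial_mass(20.0, 2.0))                 # inertial mass: 10.0 kg
print(round(gravitational_mass(98.2), 2))       # gravitational mass: ~10 kg
print(relativistic_mass(10.0, 0.6 * C))         # ~12.5 kg at 60% of light speed
```

The agreement between the first two printed values is the point of the closing sentence above: inertial and gravitational mass coincide, differing only in how they are measured.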
http://www.quantumstudy.com/inertial-and-gravitational-masses/
It is hard to believe that Daniel Gilbert’s “Immune to Reality” could have been published in 2012. In it, he shows how our minds can distort reality, causing us to see things that are not really there. For example, he cites the case of a woman who was sure she saw a snake in her garden, when in fact it was just a piece of rope. Our minds tend to interpret ambiguous stimuli in ways that confirm our existing beliefs. This can lead to all sorts of problems, from seeing ghosts where there are none, to believing in false memories. It is important to be aware of this tendency so that we can try to correct for it. Gilbert also discusses the role of emotions in our ability to perceive reality. He argues that our emotions can bias our judgments, and lead us to misperceive what is happening around us. For instance, people who are feeling happy tend to see the world as more benevolent and friendly than it really is. On the other hand, people who are feeling sad may see the world as more hostile and difficult than it really is. Self-esteem is a person’s opinion of himself or herself; it permeates all aspects of life. People who have a high degree of self-esteem are more confident in what they do and are less prone to the pains of rejection or failure. The psychological immune system works in tandem with self-esteem by assisting individuals in coping with negative reactions or outcomes. It allows people to make mistakes without being embarrassed about them, which gives others the ability to accept their errors as well. Daniel Gilbert’s Immune to Reality explores how the psychological immune system affects people’s ability to process information and make decisions. The psychological immune system is a mental tool that allows people to protect their self-esteem. It helps them rationalize their choices, find meaning in their experiences, and interpret events in a way that preserves their self-esteem. 
The psychological immune system is activated when people are faced with rejection, failure, or criticism. It allows people to cope with these negative experiences by distorting reality. For example, when people are rejected, they may blame the other person for being unworthy of them. When people fail, they may blame the situation for being unfair. And when people are criticized, they may attack the criticizer. However, overindulging children with positive reinforcement might result in developing a false sense of self. When children are given awards for simply participating and not for winning, it gives them the idea that they don’t have to try their best to receive recognition. In some cases, when children grow up with a false sense of self, they tend to become narcissistic adults. Narcissism is defined as “an inflated sense of self-importance and an extreme preoccupation with themselves” (Gilbert, Immune To Reality). People who are narcissistic often seek constant admiration and have a strong sense of entitlement. They need to be the center of attention and feel like they are better than others. A study was conducted with children who were raised in homes where they were overindulged. The study found that those children were more likely to become narcissistic adults. In Gilbert’s essay, he states that “narcissists are immune to reality” (Gilbert, Immune To Reality). This means that narcissists are not able to see the world clearly and tend to live in their own bubble. Narcissism has become more prevalent in today’s society due to social media. Social media platforms such as Instagram, Facebook, and Snapchat have given people a way to share their lives with others and seek validation. People are constantly posting pictures of themselves and their accomplishments in order to receive likes and comments. The validation people receive from social media can be addicting and can lead to narcissism. 
In Twenge’s essay, she states that “narcissism is epidemic in the United States” (Twenge, An Army of One: Me). She attributes the rise in narcissism to individualistic culture and social media. While it is good to have high self-esteem, people need to be aware of the dangers of becoming too narcissistic. Narcissism can lead to a false sense of self and an inability to see reality clearly. It is important for people to learn how to live in reality and accept themselves for who they are. A youngster’s self-esteem is a lot like their bank account: it only has so much money in it, and if you don’t spend it fast enough, then one day you might wake up with nothing. During this time, when a child’s self-esteem is being reinforced, he or she should be naive and accepting. However, the curriculum should have a limit. Student growth and the use of self-esteem programs should move in opposing directions: as children get older, the application of this curriculum should be reduced. If the program is correctly applied, those kids will be able to grow and confront life’s problems. Encouragement results in inspiration and motivation. A student that is not afraid to face the reality of life is a student that will be successful. It has been said that children are like sponges; they absorb everything around them. If this is true, then it would make sense to have a curriculum in schools that reinforces high self-esteem. After all, if children are going to be absorbing the messages that we send them, then why not send positive ones? However, there is such a thing as too much of a good thing, and this is where the problem with self-esteem programs lies. There needs to be a balance between teaching children to love themselves and preparing them for the realities of life. Self-esteem programs should not be completely eradicated from schools, but they should be used in moderation. If these programs are used correctly then they can be beneficial to students. 
However, if they are overused then they can do more harm than good. It is important that children have a healthy self-esteem, but it is also important that they are prepared for the realities of life. Encouragement comes from the aid of an external force, normally from someone else. It provides a morale-boosting message to those who are down and out. When people are on the verge of giving up or are placed into a tough position, encouragement is usually given. Certainly, self-esteem is a key factor in a youngster’s ability to succeed not just in school but also in their future lives. Immune to reality is a phrase that is often used to describe people who are living in their own world, and are not affected by what is happening in the real world. Daniel Gilbert is a professor of psychology at Harvard University, and he has written a book called Immune to Reality: How Denial Makes Us Dumb, Sick, and Poor and Prevents Progress. In his book, Gilbert argues that people are often immune to reality because they cannot handle the truth, or they do not want to believe it. He says that this immunity can make us sick, poor, and prevent us from making progress. Gilbert argues that people have a natural tendency to deny reality when it is inconvenient or uncomfortable. For example, people who smoke are often in denial about the health risks of smoking, because they do not want to believe that their habit is harmful. Similarly, people who are overweight may deny the health risks of being overweight, because they do not want to change their lifestyle. Gilbert says that this denial can be very harmful, because it prevents us from taking action to improve our situation. Being immune to reality can also make us poor. Gilbert argues that people who are in poverty often deny the reality of their situation, because they do not want to believe that they are poor. This denial prevents them from taking action to improve their situation, and as a result, they remain poor.
https://answerpoint.org/daniel-gilbert-immune-to-reality/
Liverpool are beginning to look like the real deal following an incredible year of progress in 2018, but there remains one weak link in the side restricting their progress, namely Jordan Henderson. The captain of any football team should be a symbol of exemplary determination, a figure who commands unconditional respect among the fanbase and within the squad, a player who the side depend on and require to reach their supreme best. Jordan Henderson is the player who dons the prestigious captain’s armband at Liverpool, when he is selected in the starting eleven that is, but he seldom manages to encapsulate the aforementioned qualities. Sir Alex Ferguson’s damning indictment of Henderson’s running style, which he revealed in his 2013 autobiography, holds a crushing weight of validity more than five years on. Henderson, for all his quality, is far from a top player, and with a figurehead of Virgil van Dijk’s reputation residing within the squad his credentials to retain the captaincy are rapidly diminishing at Anfield. And when you consider the breadth of midfield options Klopp has at his disposal, he may be wise to offer Henderson a way out of the club in the summer transfer window. Naturally the return of Alex Oxlade-Chamberlain is a factor worth considering for Liverpool, but the progress Marko Grujic made in 2018 with both Cardiff City and Hertha Berlin suggests the Serbia international could slot into Henderson’s position in a progressive-minded decision from Klopp. Grujic has made just seven appearances at Hertha Berlin this season, with injury problems limiting his involvement thus far, but his brief time on the pitch has squeezed a sensational superlative out of head coach Pal Dardai (via Bundesliga.com). “I’ve been at Hertha for 22 years,” said the 42-year-old at his press conference after the Frankfurt win. 
“This isn’t meant as an insult to anyone else, but Marko is by far the best midfielder I’ve seen in my time at the club.” As far as compliments go they don’t get much more emphatic, and it’s one which hints it would be premature to write Grujic off as a future mainstay in the heart of Liverpool’s midfield. Like Henderson, Grujic is naturally suited to a box-to-box midfield role, where he is allowed the freedom to bully opponents with his physical prowess and stride forwards in an elusive effort to make a decisive impact in and around the 18-yard-box. Sure, Grujic lacks Premier League experience and he is far from the complete package, but with an array of talented midfielders to learn from in the shape of Naby Keita, Fabinho, Georginio Wijnaldum and Oxlade-Chamberlain he is bound to improve at an exponential rate. If Klopp fails to make a brave decision by promoting the 6’3″ talent to the first-team next season, potentially at Henderson’s expense, then Liverpool could let a rare gem slip through the net when his contract expires in June. Liverpool fans – thoughts? Let us know below!
https://www.footballtransfertavern.com/premier-league/opinion-marko-grujic-could-be-a-shock-replacement-for-jordan-henderson/
If one were to look the word ‘family’ up in the dictionary or on the internet, there exists no one definition. With single-parent households on the rise, fewer children being born, and a return to multiple generations residing in one residence, the ever-changing family makeup has led to a modern family that defies classification. Flux, adaptation and reconstruction are some of the only words that are truly consistent when one tries to define families throughout America’s history, and this article will explore three of these familial constructs, along with some of the complications and benefits associated with them. One of the more contemporary familial constructs today is created when an individual, who is not necessarily a blood relative, forms an extremely close relationship with a child and essentially assumes the role of parent. This type of family formation, otherwise known as a psychological parent relationship, occurs when an individual provides necessary support to a child while ensuring that their best interests are met. 
While the New Jersey Supreme Court has long held that there exists a presumption in favor of a “natural parent over a third party seeking custody of a child,”1 the courts have also recognized that a parent’s rights to his or her child are not absolute rights; there exists the right of the state, under its parens patriae authority, to protect a child’s wellbeing.2 Thus, while a parent is presumed to come before all others when exploring issues concerning the care and custody of a child, there are certain circumstances where a parent’s rights can be limited and/or nullified and a family court can be called upon to “decide issues of custody, visitation, child support and myriad other aspects of domestic relations.”3 While a parent’s showing of unfitness, abandonment, or gross misconduct would obviously meet the criteria for the state to employ its parens patriae authority, a fourth basis, ‘exceptional circumstances,’ does not even require a showing that a legal parent is unfit. The exceptional circumstances standard is established on the possibility of harm to the child. Even if a legal parent is deemed by a court to be a ‘fit parent,’ a showing of exceptional circumstances can rebut the above-noted presumption in a custody dispute with a third party, and the potential for serious psychological harm to the child, not the parent’s unfitness, could deprive a legal parent of custody. Within the category of exceptional circumstance is the legal construct known as psychological parent. When there exists a “custody dispute between two fit parents, the best interest of the child standard controls because both parents are presumed to be equally entitled to custody. The child’s best interest rebuts the presumption in favor of one of the fit parents.”4 Conversely, if a third party is seeking custody of a minor child, as is the case in a psychological parent dispute, the same legal standards do not apply. 
In a custody dispute between a fit parent and a third party, a two-step analysis is the controlling legal standard. “The first step requires application of the parental termination standard or a finding of ‘exceptional circumstances.’”5 The parental termination standard mandates a showing, by clear and convincing evidence, of “gross misconduct, abandonment, unfitness, or the existence of ‘exceptional circumstances,’ but never by a simple application of the best interests test.”6 The courts have also explicitly acknowledged “that even if parental rights cannot be terminated on statutory grounds, ‘exceptional circumstances’ based on the probability of serious psychological harm to the child may deprive a parent of custody.”7 The courts have held that four prongs must be fulfilled to determine whether there exists a psychological parent relationship between a third party and a child. Only after all four prongs have been satisfied under the exceptional circumstances test can/will the best interests standard be applied as step two, in an effort to determine what is in the best interests of the child and what role the psychological parent will play in the child’s life. According to the 2010 U.S. Census, New Jersey had approximately 160,000 multigenerational households, in which three or more generations of a family share a home. That number, which accounts for five percent of all households in New Jersey, rose about 10.5 percent from the 2000 U.S. Census.8 Given this statistic, possible psychological parent relationships are becoming easier to create when compared with prior generations. 
At first blush, the idea of sharing custody with a third party may seem like a ‘fit’ parent’s worst nightmare: it could create severe infighting between two fit legal custodial parents, while also creating confusion for the child as to who is in the primary parental role and how to establish boundaries. In certain situations, however, the presence of a third-party psychological parent can be a favorable arrangement for both the biological parent and the children. For example, many psychological parent cases stem from the loss of one of the child’s biological parents (natural death, overdose, abandonment, etc.). While the initial shock of the loss of a biological parent could be problematic for the child, the presence of a third party and the bond they have with the child could mean the difference between recovery from the loss of the biological parent and the complete breakdown of the family. It is during these times that third parties can offer their financial support (i.e., moving the impacted family in with them or moving in with the impacted family, providing daycare for the children to help defray the exorbitant cost borne by the biological parent, etc.). Moreover, not only can these third parties provide financial support, they can also furnish a level of emotional support through the grieving process and in connection with unforeseen family complications that may arise as a result of the modification of the familial relationship. 
While the ‘traditional’ family model may be one of the easiest to recall for some, studies show that “only 22 percent of households in the Garden State are married couples with children under 18 years of age.”9 Statistics also show that many of the nation’s families have shifted away from the biologically bonded family, and it may be surprising to some to learn that approximately 1,300 new stepfamilies are forming every day.10 While this form of blended family includes children of a previous marriage, it is interesting to note that Americans get married, get divorced, and choose to cohabit more than any other Western society, which accounts for the above staggering numbers.11 The most recognizable benefit when comparing the ‘traditional family’ to a blended one via a stepparent formation would be possibly superior financial stability in both the custodial and non-custodial parents’ households. As many family law practitioners know, after a divorce and/or separation, trying to maintain the status quo of one household while supporting a second household on the same income is next to impossible. However, once a stepparent is added into the equation, the income of the new third party can transform a struggling custodial parent, who is finding it difficult to keep the roof over their child’s head, into an individual who now has the means to meet their child’s needs. Moreover, once a stepparent has been added into the equation, the non-custodial parent who is struggling to keep up with support payments, while also trying to support themselves, could experience a significant reduction in stress (financial and/or emotional) when the new spouse’s income is being utilized to cover household expenses. While the above benefits may not apply to all families developed through stepparent formation, the idea of the supplemental income of a stepparent greatly increasing the quality of life for all involved can be transformative. 
Although the idea of a blended family may sound like a dream come true for some single-family households, a new stepparent, especially one with no biological ties to the children, could possibly create a multitude of expected and/or unanticipated problems within the original family unit. One of the most common, both before and after the formation of the new stepparent family, could be the sense of animosity towards the stepparent from the child’s biological non-custodial parent. Although the animosity of a non-custodial parent towards a stepparent can be a considerable barrier to a healthy familial relationship, animosity is not always unwarranted. For example, many times the negative feelings one parent has towards a stepparent are not out of jealousy stemming from the new and prospering stepparent-child relationship, but from the child’s stepparent overstepping their particular family role. Often, a family law practitioner will hear a client say: “He/She is not their parent…I AM THEIR PARENT!” while dealing with a contested custody or parenting time matter. These statements are further amplified by the non-custodial parent when they start to see the stepparent, whom they vehemently loathe, begin to take on the responsibilities they think they should be handling (i.e., taking the child for a sick visit when the primary parent is at work, attending parent/teacher conferences, or assisting in recreational sports the child may be involved in). When situations like these arise, it may be wise to tell clients that their child could benefit from having another parental figure in their lives, especially one who wishes to take part in his or her daily activities. 
Often, however, when situations like these arise, the issues are far less benign than the typical claim that “it’s my responsibility to go to the parent-teacher conferences,” and instead morph into “my child was suspended from school and only my ex and their spouse knew” or “my child misbehaved and their stepparent spanked them without my consent.” When issues such as these arise, it can often be difficult to redraw the boundaries of the blended family. It may be wise to initially exclude any third parties from the negotiation and begin with only the two parents, so the biological caregivers can independently re-structure the boundaries needed for their significant others. Once this step has been completed, both parties can inform their spouses of the boundaries they have decided upon, usually with some form of consequential language (typically under Rule 5:3-7) to hinder such actions from occurring again. While this particular solution is not always successful, it is one of many that practitioners have needed to contrive in order to help mend the fractured relationships sometimes caused by the overstepping of a non-biological parent. Despite the potential for pitfalls, overall the stepparent blended family structure seems to do far more good than harm for both a child and their biological parents. While there may be some animosity toward the stepparent as a result of the positive role they may play in their stepchild’s life, more often than not this animosity fades over time, as the parent who feels neglected eventually develops a deeper relationship through parenting time and discussions with their child(ren). In addition, a stepparent who has overstepped his or her bounds will eventually begin to conform to the biological parent’s wishes and operate as part of a more cohesive family.
In the event that this conformity does not happen, in this author’s experience, the blended stepfamily will more likely than not fail. While the first two family structures seem appealing to many due to the supplemental support given to the biological parents, there is one type of family that many people generally dismiss or overlook when they think of ways in which a family can be constructed. The single-parent household, one which is created subsequent to a divorce or the death of a biological parent, is wrought with many potential complications and obvious disadvantages, since “children living with two married adults (biological or adoptive parents) have, in general, better health, greater access to health care, and fewer emotional or behavioral problems than children living in other types of families.”12 Nevertheless, there are several positive aspects of this familial construct that could be of great benefit to the children and their primary parental unit. For example, many people tend to overlook in their analysis of the single-parent family construct the close bond forged between the parent and child(ren), as well as the freedom of the single parent to raise their child(ren) in the manner that they see fit, without any other parental intrusions. More importantly, in single-parent households where the other parent has minimal involvement or is deceased, since the primary parent is the singular role model for their child(ren), the manner in which the parent acts and conducts themselves on a daily basis is more likely than not going to influence how that child conducts themselves and whose behavior they emulate as they progress through life. 
For example, when a parent begins to inculcate their child with their beliefs, traditions and morals, and that child has no other adult counterpart to provide the converse perspective, the child will more than likely emulate the behavior their main role model is displaying. In many nuclear families, one of the disputes that arises between two fit parents is that of converse religious beliefs. This difference in theological views can undoubtedly cause confusion among the children, and in some cases dissuade children from truly practicing a religion at all. However, in single-parent households the individual caregiver is more often than not the one who dictates what religion will be practiced on a daily basis. Given this role, it stands to reason that a parent who has the ability to independently inculcate their child(ren) with their theological views would be in a much better position to convey their beliefs to their children. While the ability to immerse one’s child in one’s core parenting beliefs could be a benefit of the single-parent construct, this construct could also have some obvious pitfalls. The first of these drawbacks would be the financial strain that a single-parent household usually faces just to provide the necessities for everyone living under that roof. Many times, on top of the stress of trying to cover a home’s carrying costs, a single parent can find him or herself consistently being slighted by the non-custodial parent regarding support payments. Moreover, without the assistance of either a stepparent/live-in partner or an extended family member such as the above-noted psychological parent to help lift the burden, many single parents constantly find themselves filing enforcement applications just to get the mandated support to meet their family’s needs. Furthermore, especially when there is a support issue in play, a single parent forced to choose between going to work and watching their child can find him or herself in a quandary. 
While the above financial burdens may not be present in all single-parent households, the above-noted pitfalls are a major reason individuals are unable to escape the economic constraints associated with this familial construct. The boundaries of future family formations, and their yet-to-be-revealed compositions, are presently unknown and ever-changing. Today’s innumerable modern family arrangements could not have been imagined 50 years ago. With science progressing at breakneck speed, and the rates of marriage and births on the decline, the possibilities for future family scenarios are limitless. This article was originally published in the December 2018 issue of New Jersey Lawyer magazine, a publication of the New Jersey State Bar Association, and is reprinted here with permission.
https://ericbhannumlaw.com/family-formations-the-ever-changing-construct/
Why Do Companies Need the Creative Brief More Than Before? In order to have a clear vision of the new product development process, companies today have a greater need to write a creative brief. This is due to the nature of global markets in terms of infrastructure, teams, and competition. Companies are now able to connect through new communication methods that allow them to reach clients on the other side of the globe while their team works from remote locations. Additionally, increasing competitiveness requires companies to clearly identify client requirements, document these requirements, and ensure that the whole team is geared toward developing the product to the specification agreed on by both the company and the client. In a previous article, we discussed the creative brief definition and How to Create a Professional Creative Brief. A creative brief is a document that includes the necessary information about the project. It is shared between the stakeholders involved in the creative process. Its purpose is to ensure that all the required information about the project and client strategy is clearly defined and understood by all parties. The length of the creative brief and the content it should include have proved to be controversial topics. While some companies write a long document with all the details, others prefer a minimal creative brief that can consist of a few short sentences defining the project strategy, with other details addressed during team meetings. However, both groups agree on the importance and the purpose of the creative brief. Download our Creative Brief Template for free personal and commercial usage. Why Do We Need a Creative Brief? As mentioned earlier, new management models, including outsourcing and offshore companies, make it even more challenging for companies to build a clear vision of projects, especially when the company, teams, and clients are located in different places. 
Therefore, regardless of its length, the creative brief should achieve the following targets in order to be considered a successful and viable document to share with all the stakeholders: 1. Define The first purpose of the creative brief is to define information about the project, including the project goals, strategy, and characteristics. While some companies jump directly to defining the project characteristics and specifications, it is important to define both the client’s intended goal and the strategy in order to give the team a broader vision of the project before getting into the details. Defining the goal of the project includes answering questions such as why this project is needed and what its expected output is. These goals should be supported by a clear strategy in order to give the team a clear vision of what they are doing and why they are doing it in this particular way. 2. Connect The second purpose of writing the creative brief is to build a connection between all the stakeholders involved in the project, including the client, departments, teams, and freelancers. The connection is based on three main areas: the team, the ideas, and the efforts. The creative brief should provide a unified document that the different teams can use to guide them through the product development process. Hence, the document shouldn’t be written to target any particular department. It should use language that can be interpreted clearly by different teams and for different objectives, including the design, marketing, and management sectors in the company. Another level of connection is the ideas. While these are not defined in the creative brief, the goals and specifications in the brief give the teams a shared sense of which ideas to adopt and how to implement them. 
The shared ideas come as a result of meetings held by the heads of departments on the one hand and the different teams on the other. The third connection that can be achieved through the creative brief is the efforts, because the teams will have an idea of who is doing what. This knowledge increases the potential for collaboration and for connecting activities to achieve more efficient integration between the teams. 3. Eliminate The third group of goals of the creative brief is eliminating conflict, wasted time, and risk. As a result of the connection and integration between departments and the clear vision within each department, the chances of conflict between departments are decreased. With a clear brief about the project specifications, conflict that could be caused by an unclear vision of each team’s role is eliminated, which provides a better chance for strong integration between departments. This integration helps to reduce the time wasted when different roles overlap, and prevents the need for teams to hold numerous meetings to follow up on each other’s progress and activities. Although the teams will still need to meet occasionally, the time spent making each team understand its role and how to implement the requirements is reduced to a minimal level. Obviously, risk can’t be eliminated completely even when a creative brief defines all the required information. However, the brief can help eliminate the part of the risk that arises from misunderstanding. For example, if the team builds a prototype that doesn’t meet the client’s requirements, this failure is a waste of time, cost, and effort. 
Creative Brief Recommended Books: - How To Write An Inspired Creative Brief: 2nd edition - An A-Z of Visual Ideas: How to Solve Any Creative Brief While there is agreement among companies on the importance of the creative brief, there is an increasing need to build clear and helpful creative briefs. This is because of increasing competition and the adoption of modern management models such as outsourcing and offshore business. A creative brief helps define not only the project specifications but also the project strategy and the goals that need to be accomplished. The brief also helps to create connections between teams, ideas, and efforts by providing a clear vision of each department’s role based on the brief’s specifications for the project. Additionally, the brief helps to eliminate conflict between stakeholders and the time wasted through unclear information about the project, and to reduce the risk of moving in the wrong direction.
https://www.designorate.com/why-do-companies-need-a-creative-brief/
Our first choice in the category of rare old illustrated books does not owe its reputation to its content as much as to the simple fact that no one knows what it really is about. Ever since its existence became known to the general public, the Voynich Manuscript has been a subject of controversy in scientific circles, puzzling those who have tried to decipher it and inspiring various theories about its origin and purpose. As the story goes, a Polish antique book dealer, after whom the manuscript would subsequently be named, acquired it in 1912 during one of his quests for old and rare books, a quest that eventually took him to Villa Mondragone, a Jesuit college near Rome. Wilfrid Voynich immediately knew he had run into something special in a dusty chest full of old books; he was also the first in a series of researchers who tried to find some meaning behind rows of unknown text and crudely drawn images, mostly showing scenes that have little to do with anything one might hope to find in a Jesuit monastery. However, the monks did not reveal to Voynich how the manuscript had ended up in their hands. The fact that the manuscript’s text does not seem to be written in any known language has led to a generally accepted assumption that this is one of the first cases of text encryption in European history. The vellum parchment of the manuscript has been carbon-dated to the 15th century, the time of the emergence of cryptography in Europe. Be that as it may, the fact remains that some of the world’s most renowned cryptanalysts have tried their hand at deciphering this mysterious text – only to find frustration and dead ends. No known encryption pattern seems to apply to the Voynich Manuscript: some “words” appear simply too often, some are found only a few times in the entire text, and there are even cases of the same word repeating three times in a row! 
After decades of futile deciphering attempts, opinions have emerged that the whole thing is just an elaborate and meaningless hoax, thought up either by Voynich himself or by some Renaissance scholar. However, it appears that in recent years some progress in code breaking has finally been made; we will return to that topic further below. Having been unable to decipher the text, scientists studied the drawings in the Voynich Manuscript in an attempt to define its structure and purpose. Almost every one of its 240 pages has an accompanying illustration, and although few of them can be positively identified, there is an arrangement pattern which has enabled researchers to divide the book into sections dedicated to the sciences. Hence there are Herbal, Astronomical, Biological, Cosmological and Pharmaceutical sections, as well as a section with continuous text and only star-like flowers in the margins, which presumably contains recipes. Aside from this partition, however, the illustrations remain a mystery: most plant and animal species from their respective sections have not been identified, and the same holds true for diagrams and symbols from the Astronomical and Cosmological sections. Perhaps their meaning will become clearer once the text is decoded, provided that anyone succeeds in doing something that has left armies of cryptanalysts discouraged and dumbfounded. With all the mystery surrounding the Voynich Manuscript, it is no wonder that its author is also unknown. Upon purchasing it, Voynich found a letter inside which vaguely suggested that the authorship should be ascribed to the 13th-century polymath and Franciscan friar Roger Bacon. However, further investigations have failed to establish any connection between Bacon and the book. 
Other medieval and Renaissance scholars have also been proposed as the author – among them, most notably and most recently, the 15th-century North Italian architect Antonio di Pietro Averlino – but to date there has been no conclusive evidence corroborating any of these hypotheses. In 2014, hopes were raised that the enigmatic language of the Voynich Manuscript would finally be decoded. Stephen Bax, a professor of applied linguistics at the University of Bedfordshire, claims to have deciphered 14 characters of the manuscript’s alphabet and to be able to read several words of the text, namely those denoting juniper, coriander and hellebore, as well as the zodiac sign Taurus. His approach involved using medieval Arabic manuscripts as a starting point for comparisons, which, as he said, “led to some exciting results”. Still, Bax’s work might take years, if not decades, to complete. The purpose of the Voynich Manuscript remains shrouded in secrecy. The most common assumption is that it is some sort of treatise on nature, bearing in mind that in the past the sciences were much more intertwined and dependent on one another. Until it is positively decoded, however, the outright oddity of the manuscript’s design will certainly continue to inspire many different interpretations.
https://theantiquarianbook.com/illustrated-books/voynich-manuscript/
---
abstract: 'We look at the Florida Lottery records of winners of prizes worth \$600 or more. Some individuals claimed large numbers of prizes. Were they lucky, or up to something? We distinguish the “plausibly lucky” from the “implausibly lucky” by solving optimization problems that take into account the particular games each gambler won, where plausibility is determined by finding the minimum expenditure so that if every Florida resident spent that much, the chance that any of them would win as often as the gambler did would still be less than one in a million. Dealing with dependent bets relies on the BKR inequality; solving the optimization problem numerically relies on the log-concavity of the regularized Beta function. Subsequent investigation by law enforcement confirmed that the gamblers we identified as “implausibly lucky” were indeed behaving illegally.'
address:
- 'Arratia: Department of Mathematics, University of Southern California, 3620 S. Vermont Ave., KAP 104, Los Angeles, CA 90089-2532'
- 'Garibaldi: Institute for Pure and Applied Mathematics, UCLA, 460 Portola Plaza, Box 957121, Los Angeles, California 90095-7121'
- 'Mower: Palm Beach Post, 2751 S Dixie Highway, West Palm Beach, FL 33405'
- 'Stark: Department of Statistics, \#3860, University of California, Berkeley, CA 94720-3860'
author:
- Richard Arratia
- Skip Garibaldi
- Lawrence Mower
- 'Philip B. Stark'
bibliography:
- 'lottery.bib'
title: Some people have all the luck
---

It is unusual to win a lottery prize worth \$600 or more. No one we know has. But ten people have each won more than 80 such prizes in the Florida Lottery. This seems fishy. Someone might get lucky and win the Mega Millions jackpot (a 1-in-259 million chance) having bought just one ticket. But it’s implausible that a gambler would win many unlikely prizes without having bet very many times. How many? 
We pose an optimization problem whose answer gives a lower bound on any sensible estimate of an alleged gambler’s spending: over all possible combinations of Florida Lottery bets, what is the minimum amount spent so that, if *every* Florida resident spent that much, the chance that *any* of them would win so many times is still less than one in a million? If that amount is implausibly large compared to that gambler’s means, we have statistical evidence that she is up to something. Solving this optimization problem in practice hinges on two math facts:

- an inequality that lets us bound the probability of winning dependent bets in some situations in which we do not know precisely which bets were made.
- log-concavity of the regularized Beta function, which lets us show that any local minimizer attains the global minimal value.

We conclude that 2 of the 10 suspicious gamblers could just be lucky. The other 8 are chiseling or spending implausibly large sums on lottery tickets. These results were used by one of us (LM) to focus on-the-ground investigations and to support an exposé of lax security in the Florida lottery [@PBP]. We describe what those investigations found, and the policy consequences in Florida and other states.

How long can a gambler gamble?
==============================

Is there a non-negligible probability that a pathological gambler of moderate means could win many \$600+ prizes? If not, we are done: our suspicion of these 10 gamblers is justified. So, suppose a gambler starts with a bankroll of $S_0$ and buys a single kind of lottery ticket over and over again. If he spends his initial bankroll and all his winnings, how much would he expect to spend in total and how many prizes would he expect to collect before going broke? Let the random variable $X$ denote the value of a ticket, payoff minus cost. We assume that $$\label{ruin.ass1} {\mathbb{E}}(X) < 0,$$ because that is the situation in the games where our suspicious winners claimed prizes. 
(It does infrequently happen that lottery tickets can have positive expectation, see [@GroteMatheson] or [@Finding].) Assumption \eqref{ruin.ass1} and the Law of Large Numbers say that a gambler with a finite bankroll eventually will run out of money, with probability 1. The question is: *how fast?* Write $c > 0$ for the cost of the ticket, so that $$\label{ruin.ass2} {\mathbb{P}}(X \ge -c) = 1 {\quad\text{and}\quad}{\mathbb{P}}(X = -c) \ne 0.$$ To illustrate our assumptions and notation, let’s look at a concrete example of a Florida game, Play 4. It is based on the *numbers* or *policy* game formerly offered by organized crime, described in [@Numbers] and [@Sellin]. Variations on it are offered in most states that have a lottery. \[play4.eg\] Our ten gamblers claimed many prizes in Florida’s Play 4 game, although in 2012 it only accounted for about 6% of the Florida Lottery’s \$4.45 billion in sales. Here are the rules, simplified in ways that don’t change the probabilities. The Lottery draws a 4-digit random number twice a day. A gambler can bet on the next drawing by paying $c = \$1$ for a ticket, picking a 4-digit number, and choosing “straight” or “box.” If the gambler bets “straight,” she wins \$5000 if her number matches the next 4-digit number exactly (which has probability $p = 10^{-4}$). She wins nothing otherwise. The expected value of a straight ticket is ${\mathbb{E}}(X) = \$5000 \times 10^{-4} - \$1 = -\$0.50$. If a gambler bets “box,” she wins if her number is a permutation of the digits in the next 4-digit number the Lottery draws. She wins nothing otherwise. The probability of winning this bet depends on the number of distinguishable permutations of the digits the gambler selects. For instance, if the gambler bets on 1112, there are 4 possible permutations, 1112, 1121, 1211, and 2111. This bet is a “4-way box.” It wins \$1198 with probability $1/2500 = 4 \times 10^{-4}$, since 4 of the 10,000 equally likely outcomes are permutations of those four digits. 
If the gambler bets on 1122, there are 6 possible permutations of the digits; this bet is called a “6-way box.” It wins \$800 with probability $6 \times 10^{-4}$. (The 6-way box is relatively unpopular, accounting for less than 1% of Play 4 tickets.) Buying such a ticket has expected value ${\mathbb{E}}(X) \approx -\$0.52$. Similarly, there are 12-way and 24-way boxes. Returning to the abstract setting, the gambler’s bankroll after $t$ bets is $$S_t := S_0 + X_1 + X_2 + \cdots + X_t,$$ where $X_1, \ldots, X_t$ are i.i.d. random variables with the same distribution as $X$, and $X_i$ is the net payoff of the $i$-th ticket. The gambler can no longer afford to keep buying tickets after the $T$th one, where $T$ is the smallest $t \ge 0$ for which $S_t < c$. \[ruin.2\] In the notation of the preceding paragraph, $$\frac{S_0 - c}{|{\mathbb{E}}(X)|} < {\mathbb{E}}(T) \le \frac{S_0}{|{\mathbb{E}}(X)|},$$ with equality on the right if $S_0$ and all possible values of $X$ are integer multiples of $c$. In most situations, $S_0$ is much larger than $c$, and the two bounds are almost identical. In expectation, the gambler spends a total of $c{\mathbb{E}}(T)$ on tickets, including all of his winnings, which amount to $c{\mathbb{E}}(T) - S_0$. By the definition of $T$ and \eqref{ruin.ass2}, $$\label{ruin.eq} 0 \le {\mathbb{E}}(S_T) < c$$ with equality on the left in case $S_0$ and $X$ are integer multiples of $c$. Now the crux is to relate ${\mathbb{E}}(T)$ to ${\mathbb{E}}(S_T)$. If $T$ were constant (instead of random), then $T = {\mathbb{E}}T$ and we could simply write $$\label{wald.eq} {\mathbb{E}}(S_T) = {\mathbb{E}}(S_0 + \sum_{i = 1}^{{\mathbb{E}}T} X_i) = S_0 + {\mathbb{E}}(T)\, {\mathbb{E}}(X)$$ and combining this with \eqref{ruin.eq} would give the claim. The key is that equation \eqref{wald.eq} holds even though $T$ is random — this is Wald’s Equation (see, e.g., [@Durrett:EOSP §5.4]). 
The essential property is that $T$ is a *stopping time*, i.e., for every $k > 0$, whether or not one places a $k$-th bet is determined just from the outcomes of the first $k - 1$ bets. You might recognize that in this discussion we are considering a version of the gambler’s ruin problem, but with an unfair bet and where the house has infinite money; for bounds on gambler’s ruin without these hypotheses, see, e.g., [@Ethier].

A ticket with just one prize {#a-ticket-with-just-one-prize .unnumbered}
----------------------------

The proposition lets us address the question from the beginning of this section. Suppose a ticket pays $j$ with probability $p$ and nothing otherwise; the expected value of the ticket, ${\mathbb{E}}(X) = pj - c$, is negative; and $j$ is an integer multiple of $c$. If a gambler starts with a bankroll of $S_0$ and spends it all on tickets, successively using the winnings to buy more tickets, then by Proposition \[ruin.2\] the gambler should expect to buy ${\mathbb{E}}(T) = S_0/(c - pj)$ tickets, which means winning $$\frac{c{\mathbb{E}}(T) - S_0}{j} = \frac{pS_0}{c-pj}$$ prizes. \[badger.eg\] How many prizes might a compulsive gambler of “ordinary” means claim? Surely some gamblers have lost houses, so let us say he starts with a bankroll worth $S_0 = \$$175,000, an amount between the median list price and the median sale price of a house in Florida [@Zillow]. If he always buys Play 4 6-way box tickets and recycles his winnings to buy more tickets, the previous paragraph shows that he can expect to win about $$pS_0/(c-pj) = 6 \times 17.5 / 0.52 \approx \text{202 times.}$$ This is big enough to put him among the top handful of winners in the history of the Florida lottery. Hence, the number of wins alone does not give evidence that a gambler cheated. We must take into account the particulars of the winning bets. 
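As a quick numerical check, here is a short Python sketch (our own illustration, not part of the paper) that recomputes the Play 4 expected values and the roughly 202 expected wins from Example \[badger.eg\]; all prizes, probabilities, and the \$175,000 bankroll are the ones given in the text.

```python
# Plain-Python check of the Play 4 figures and Example [badger.eg].
# All prizes, probabilities, and the bankroll are taken from the text.

def expected_value(prize, p, cost=1.0):
    """E(X) for a ticket costing `cost` that pays `prize` with probability p."""
    return prize * p - cost

ev_straight = expected_value(5000, 1e-4)   # straight: about -$0.50
ev_box6 = expected_value(800, 6e-4)        # 6-way box: about -$0.52

# Proposition [ruin.2] with winnings reinvested: E(T) = S0 / (c - p*j),
# so the expected number of prizes is p*S0 / (c - p*j).
S0, c, p, j = 175_000, 1.0, 6e-4, 800
expected_tickets = S0 / (c - p * j)
expected_wins = p * S0 / (c - p * j)

print(round(expected_tickets), round(expected_wins))   # 336538 tickets, 202 wins
```

Note how far the expected spending (about \$336,500, counting reinvested winnings) exceeds the initial bankroll: the gambler churns the same dollars through many tickets before going broke.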
A toy version of the problem
============================

From here on, a “win” means a win large enough to be recorded; for Florida, the threshold is \$600. Suppose for the moment that a gambler only buys one kind of lottery ticket, and that each ticket is for a different drawing, so that wins are independent. Suppose each ticket has probability $p$ of winning. A gambler who buys $n$ tickets spends $cn$ and, on average, wins $np$ times. This is intuitively obvious, and follows formally by modeling a lottery bet as a Bernoulli trial with probability $p$ of success: in $n$ trials we expect $np$ successes. We don’t know $n$, and the gambler is unlikely to tell us. But based on the calculation in the preceding paragraph, we might guess that a gambler who won $W$ times bought roughly $W/p$ tickets. Indeed, an unbiased estimate for $n$ is ${\hat{n}}:= W/p$, corresponding to the gambler spending $c{\hat{n}}$ on tickets. Since $p$ is very small, like $10^{-4}$, the number ${\hat{n}}$ is big—and so is the estimated amount spent, $c{\hat{n}}$. (Note that this estimate includes any winnings “reinvested” in more lottery tickets.) A gambler confronted with ${\hat{n}}$ might quite reasonably object that she is just very lucky, and that the true number of tickets she bought, $n$, is much smaller. Under the assumptions in this section, her tickets are i.i.d. (independent, identically distributed) Bernoulli trials, and the number of wins $W$ has a binomial distribution with parameters $n$ and $p$, which lets us check the plausibility of her claim by considering $$\label{prob.bin} D(n; w, p) :=\fbox{\parbox{1.4in}{probability of at least $w$ wins with $n$ tickets}} = \sum_{k=w}^n \binom{n}{k} p^k (1-p)^{n-k}.$$ Modeling a lottery bet as a Bernoulli trial is precisely correct in the case of games like Play 4. 
But for scratcher games, there is a very large pool from which the gambler is sampling without replacement by buying tickets; as the pool is much larger than the values of $n$ that we will consider, the difference between drawing tickets with and without replacement is negligible. \[forward.eg\] Of the 10 people who had won more than 80 prizes each in the Florida Lottery, the second most-frequent prize claimant was Louis Johnson. He claimed $W = 57$ \$5,000 prizes from straight Play 4 tickets (as well as many prizes in many other games that we ignore in this example). We estimate that he bought ${\hat{n}}= W/p = 570,000$ tickets at a cost of \$570,000. What if he claimed to only have bought $n =$ 175,000 tickets? The probability of winning at least $57$ times with 175,000 tickets is $$D(175000; 57, 10^{-4}) \approx 6.3 \times 10^{-14}.$$ For comparison, by one estimate there are about 400 billion stars in our galaxy [@Milky]. Suppose there were a list of all those stars, and two people independently pick a star at random from that list. The chance they would pick the same star is minuscule, yet it is still 40 times greater than the probability we just calculated. It is utterly implausible that a gambler wins 57 times by buying 175,000 or fewer tickets.

What this has to do with Joe DiMaggio {#DiMaggio.sec}
=====================================

The computation in Example \[forward.eg\] does not directly answer whether Louis Johnson is lucky or up to something shady. The most glaring problem is that we have calculated the probability that a *particular* innocent gambler who buys \$175,000 of Play 4 tickets would win so many times. The news media have publicized some lottery coincidences as astronomically unlikely, yet these coincidences have turned out to be relatively unsurprising given the enormous number of people playing the lottery; see, for example, [@DiaconisMosteller esp. p. 859] or [@Stefanski] and the references therein. 
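The tail probability $D(n; w, p)$ from Example \[forward.eg\] is easy to evaluate numerically. The sketch below (our own illustration, standard library only) sums the binomial terms in log space, since the individual terms, on the order of $10^{-14}$ here, would underflow if assembled from `p**k` directly:

```python
import math

def binom_tail(n, w, p, terms=400):
    """D(n; w, p): probability of at least w successes in n Bernoulli(p) trials.

    Each term is computed via lgamma in log space; terms beyond w + `terms`
    are negligible when the mean n*p is far below w, as it is here.
    """
    log_p, log_q = math.log(p), math.log1p(-p)
    total = 0.0
    for k in range(w, min(n, w + terms) + 1):
        log_term = (math.lgamma(n + 1) - math.lgamma(k + 1)
                    - math.lgamma(n - k + 1) + k * log_p + (n - k) * log_q)
        total += math.exp(log_term)
    return total

# Unbiased estimate of the number of tickets bought: n_hat = W / p.
n_hat = 57 / 1e-4                       # 570,000 tickets

# Example [forward.eg]: chance of 57+ wins from only 175,000 tickets.
d = binom_tail(175_000, 57, 1e-4)
print(d)   # about 6.3e-14, the figure quoted in the text
```

Here `scipy.stats.binom.sf(56, n, p)` would give the same tail without the hand-rolled sum; the explicit loop is shown only to keep the sketch dependency-free.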
Among other things, we need to check whether so many people are playing Play 4 so frequently that it’s reasonably likely at least one of them would win at least 57 times. If so, Louis Johnson might be that person, just as with Mega Millions: no particular ticket has a big chance of winning, but if there are enough gamblers, there is a big chance *someone* wins. We take an approach similar to how baseball probability enthusiasts attempt to answer the question, *Precisely how amazing was Joe DiMaggio?* Joe DiMaggio is famous for having the longest hitting streak in baseball: he hit in 56 consecutive games in 1941. (The modern player with the second longest hitting streak is Pete Rose, who hit in 44 consecutive games in 1978.) One way to frame the question is to consider the probability that a randomly selected player gets a hit in a game, and then estimate the probability that there is at least one hitting streak at least 56 games long in the entire history of baseball. If a streak of 56 or more games is likely, then the answer to the question is “not so amazing”; DiMaggio just happened to be the person who had the unsurprisingly long streak. If it is very unlikely that there would be such a long streak, then the answer is: DiMaggio was truly amazing. (The conclusions in DiMaggio’s case have been equivocal; see the discussion in [@ProbTales pp. 30–38].) Let’s apply this reasoning to Louis Johnson’s 57 Play 4 wins (Example \[forward.eg\]). Suppose that $N$ gamblers bought Play 4 tickets during the relevant time period, each of whom spent at most \$175,000. Then an upper bound on the probability that at least one such gambler would win at least 57 times is the chance of at least one success in $N$ Bernoulli trials, each of which has probability no larger than $p \approx 6.3 \times 10^{-14}$ of success. (Louis Johnson represents a success.)
The trials might not be independent, because different gamblers might bet on the same numbers for the same game, but the chance that at least one of the $N$ gamblers wins at least 57 times is at most $Np$ by the Bonferroni bound (for any set of events $A_1, \ldots, A_N$, ${\mathbb{P}}(\cup_{i=1}^N A_i) \le \sum_{i=1}^N {\mathbb{P}}(A_i)$). What is $N$? Suppose it’s the current population of Florida, approximately 19 million. Then the chance at least one person would win at least 57 times is no larger than $19 \times 10^{6} \times 6.3 \times 10^{-14} = 0.0000012$, just over one in a million. This estimate is crude because the estimated number of gamblers is very rough, and of course the estimate is not at all sharp (it gives a lot away in the direction of making the gambler look less suspicious) because most people spend nowhere near \$175,000 on the lottery. We are giving even more away because Louis Johnson won many other bets (his total winnings are, of course, dwarfed by the expected cash outlay). Considering all these factors, one might reasonably conclude that either Louis Johnson has a hidden source of money—perhaps he is a wealthy heir with a gambling problem—or he is up to something. \[reverse.eg\] In Example \[forward.eg\] we picked the \$175,000 spending level almost out of thin air, based on Florida house prices as in Example \[badger.eg\]. Instead of starting with a limit on spending and deducing the probability of a number of wins, let’s start with a probability, ${\varepsilon}= 5 \times 10^{-14}$, and infer the minimum spending required to have at least that probability of so many wins. If Johnson buys $n$ tickets, then he wins at least 57 times with probability $D(n; 57, 10^{-4})$. We compute $n_0$, the smallest $n$ such that $$D(n; 57, 10^{-4}) \ge {\varepsilon},$$ which gives $n_0 =$ 174,000.
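Because $D(n; w, p)$ is increasing in $n$, the threshold $n_0$ can be found by integer bisection. The sketch below (again ours, reusing the same log-space tail sum) recovers approximately the value quoted in the text:

```python
import math

def D(n, w, p):
    """P(at least w wins from n tickets): binomial upper tail, in log space."""
    total = 0.0
    for k in range(w, n + 1):
        log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1)
                   - math.lgamma(n - k + 1)
                   + k * math.log(p) + (n - k) * math.log1p(-p))
        term = math.exp(log_pmf)
        total += term
        if k > n * p and term < total * 1e-18:
            break
    return total

EPS = 5e-14
# D(n; 57, 1e-4) increases with n, so bisect for the smallest n with D >= EPS.
lo, hi = 57, 1_000_000
while lo < hi:
    mid = (lo + hi) // 2
    if D(mid, 57, 1e-4) >= EPS:
        hi = mid
    else:
        lo = mid + 1
n0 = lo  # about 174,000
```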
Using the Bonferroni bound again, we find that if *everyone* in Florida spent \$174,000 on straight Play 4 tickets, the chance that *any* of them would win 57 or more times is less than one in a million. Multiple kinds of tickets ========================= Real lottery gamblers tend to wager on a variety of games with different odds of winning and different payoffs. Suppose they place $b$ different kinds of bets. (It might feel more natural to say “games,” but a gambler could place several dependent bets on a single Play 4 drawing: straight, several boxes, etc.) Number the bets $1, 2, \ldots, b$. Bet $i$ costs $c_i$ dollars and has probability $p_i$ of winning. The gambler won more than the reporting threshold on bet $i$ a total of $w_i$ times. We don’t know $n_i$, the number of times the gambler wagered on bet $i$. If we did know the vector ${\vec{n}}= (n_1, n_2, \ldots, n_b)$, then we might be able to calculate the probability: $$\label{Pdef.1} P({\vec{n}}; \vec{w}, \vec{p}) := \left({\parbox{3in}{probability of winning at least $w_i$ times on bet $i$ with $n_i$ tickets, for all $i$}} \right).$$ As in Example \[reverse.eg\], we can find a lower bound on the amount spent to attain $w_i$ wins on bet $i$, $i = 1, \ldots, b$, by solving $$\label{premother} {\vec{c}}\cdot {{\vec{n}^*}}= \min_{{\vec{n}}} {\vec{c}}\cdot {\vec{n}}\quad \text{s.t.}\quad \ n_i \ge w_i {\quad\text{and}\quad}P({\vec{n}}; \vec{w}, \vec{p}) \ge {{\varepsilon}}.$$ For a typical gambler that we study, this lower bound ${\vec{c}}\cdot {{\vec{n}^*}}$ will be in the millions of dollars. Thinking back to the “Joe DiMaggio” justification for why this is a lower bound, it is clear that not every resident of Florida would spend so much on lottery tickets, and our gut feeling is that a more refined justification would produce a larger lower bound for the amount spent. But how can we find $P({\vec{n}}; \vec{w}, \vec{p})$?
If the different bets were on independent events (say, each bet is a different kind of scratcher ticket), then $$\label{Peq.1} P({\vec{n}}; \vec{w}, \vec{p}) = \prod_{i=1}^b \left({\parbox{2in}{probability of winning at least $w_i$ times on bet $i$ with $n_i$ tickets}} \right) = \prod_{i=1}^b D(n_i; w_i, p_i).$$ But gamblers can make dependent bets, in which case this product formula does not hold. Fortunately, it is possible to derive an upper bound for the typical case, as we now show. No dependent wins is almost as good as independent bets {#sect bkr} ======================================================= For most of the 10 gamblers, we did not observe wins on dependent bets, such as a win on a straight ticket and a win on a 4-way box ticket for the same Play 4 drawing. We seek to prove Proposition \[cond\] (below), which says that if there were no wins on dependent bets, then treating the bets as if they were independent gives an overall upper bound on the probability $P$ just defined. Abstractly, we envision a finite number $d$ of independent drawings, such as a sequence of Play 4 drawings. For each drawing $j$, $j = 1, \ldots, d$, the gambler may bet any amount on any of $b$ different bets (such as 1234 straight, 1344 6-way box, etc.), whose outcomes—for drawing $j$—may be dependent, but whose outcomes on different draws are independent. We write $p_i$ for the probability that a bet on $i$ wins in any particular drawing; $p_i$ is the same for all drawings $j$. For $i=1, \ldots, b$ and $j = 1, \ldots, d$, let $n_{ij} \in \{0,1\}$ be the indicator that the gambler wagered on bet $i$ in drawing $j$, so that the $i$th row sum, $n_i := \sum_j n_{ij}$, is the total number of bets on $i$. We call the entire system of bets $B$, represented by the $b$-by-$d$ zero-one matrix $B=[n_{ij}]$. \[cond\] Suppose that, for each $i$, a gambler wagers on bet $i$ in $n_i$ different drawings, as specified by $B$, above.
Given the bets $B$, consider the events $$W_i := (\text{gambler wins bet $i$ at least $w_i$ times with bets $B$}),$$ and the event $$I:= (\text{in each drawing $j$, the gambler wins at most one bet}).$$ Then $$\label{cond.1} {\mathbb{P}}(I \cap W_1 \cap \cdots \cap W_b) \le \prod_{i=1}^b {\mathbb{P}}(W_i).$$ In our case, ${\mathbb{P}}(W_i) = D(n_i; w_i, p_i)$, so we restate the bound as: $$\label{cond.2} {\mathbb{P}}(I \cap W_1 \cap \cdots \cap W_b) \le \prod_{i=1}^b D(n_i; w_i, p_i).$$ Proposition \[cond\] is intuitively plausible: even though the bets are not independent, the drawings are, and event $I$ guarantees that any single drawing helps at most one of the events $\{W_i\}$ to occur. We prove Proposition \[cond\] as a corollary of an extension of a celebrated result, the BKR inequality, named for van den Berg–Kesten–Reimer, conjectured in [@BK], and proved in [@Reimer] and [@BFiebig] (or see [@CPS]). The remainder of this section provides the details. The original BKR inequality is stated as Theorem \[bkr thm\]. We separate the purely set-theoretic aspects of the discussion, in Sections \[bkr.def\] and \[sect bkr set\], from the probabilistic aspects, in Section \[sect bkr prob\]. The BKR operation ${\oblong}$ {#bkr.def} ----------------------------- Let $S$ be an arbitrary set, and write $S^d$ for the Cartesian product of $d$ copies of $S$. Since our application is probability, we call an element $\omega = (\omega_1, \ldots, \omega_d) \in S^d$ an *outcome*, and we call any $A \subseteq S^d$ an *event*. For a subset $J \subseteq \{ 1, \ldots, d \}$ and an outcome $\omega \in S^d$, the *$J$-cylinder of $\omega$*, denoted $\operatorname{Cyl}(J, \omega)$, is the collection of $\omega' \in S^d$ such that $\omega'_j = \omega_j$ for all $j \in J$.
For events $A_1, A_2, \ldots, A_b$, let $A_1 {\oblong}A_2 {\oblong}\cdots {\oblong}A_b \subseteq S^d$ be the set of $\omega$ for which there exist pairwise disjoint $J_1, J_2, \ldots, J_b \subseteq \{ 1, \ldots, d \}$ such that $\operatorname{Cyl}(J_i, \omega) \subseteq A_i$ for all $i$. The case $b = 2$, where one combines just two events, is the context for the original BKR inequality as in [@BK p. 564]; the operation with $b > 2$ is new and is the main object of study in this section. Here is another definition of ${\oblong}$ that might be more transparent. Given an event $A \subseteq S^d$ and a subset $J \subseteq \{ 1, \ldots, d \}$, define the event $$[A]_J := \{ \omega \in A \mid \operatorname{Cyl}(J,\omega)\subseteq A \} = \bigcup\nolimits_{\{\omega \mid \operatorname{Cyl}(J, \omega) \subseteq A\}} \operatorname{Cyl}(J, \omega).$$ Informally, $[A]_J$ consists of the outcomes in $A$ such that, by looking only at the coordinates indexed by $J$, one can tell that $A$ must have occurred. Evidently, for $A, B \subseteq S^d$, $$\label{monotone} A \subseteq B \text{ implies } [A]_J \subseteq [B]_J \quad \text{and} \quad J \subseteq K \text{ implies } [A]_J \subseteq [A]_K.$$ The definition of ${\oblong}$ becomes: $$\label{def bkr many} {\bigbox}_{1 \le i \le b} A_i := \bigcup_{\text{pairwise disjoint $J_1,\ldots, J_b \subseteq \{ 1, \ldots, d \}$}} [A_1]_{J_1} \cap [A_2]_{J_2} \cap \cdots \cap [A_b]_{J_b}.$$ We read the above definition as “${\bigbox}_{1 \le i \le b} A_i$ is the event that all $b$ events occur, with $b$ disjoint sets of reasons to simultaneously certify the $b$ events.” Informally, the outcome $\omega$, observed only on the coordinate indices in $J_i$, supplies the “reason” that we can certify that event $A_i$ occurs.
Our notation ${\bigbox}_{1 \le i \le b} A_i \equiv A_1 {\oblong}A_2 {\oblong}\cdots {\oblong}A_b$ is intentionally analogous to the notations for set intersection, $\bigcap_{1 \le i \le b} A_i \equiv A_1 \cap A_2 \cap \cdots \cap A_b$, and set union, $\bigcup_{1 \le i \le b} A_i \equiv A_1 \cup A_2 \cup \cdots \cup A_b$. The multi-input operator ${\bigbox}$ is, like set intersection $\bigcap$ and set union $\bigcup$, fully commutative, i.e., unchanged by any re-ordering of the inputs. Unlike intersection and union, ${\oblong}$ is not associative, as we now show. \[skip example\] Take $S=\{0,1\}$, $d=3$, and $$A = (0,*,*) \cup (1,0,*), \quad B = (0, *, *) \cup (1, 1, *), \quad C = (*,0,1),$$ where we write for example $(1, 0, *) = \{ (1, 0, 0), (1, 0, 1) \}= \operatorname{Cyl}(\{1, 2 \}, (1, 0, s)) $ for $s = 0, 1$ and $(0,*,*)= \{ (0,0,0), (0,0,1), (0,1,0), (0,1,1) \}$. Note that $|A|=|B|=6$. Then $A {\oblong}B = (0,*, *)$, $(A {\oblong}B) {\oblong}C = \{ (0,0,1) \}$ — using $J_1 = \{1\}$ and $J_2 = \{2,3\}$ in the definition of ${\oblong}$ — but $B {\oblong}C = \{ (0,0,1) \}$ and $A {\oblong}(B {\oblong}C) = \emptyset$. Also, $A {\oblong}B {\oblong}C = \emptyset$. The connection between lottery drawings and ${\oblong}$ {#bkr.connect} ------------------------------------------------------- Before continuing to discuss the BKR operation ${\oblong}$ in the abstract, we consider what it means for lottery drawings. We take $S = 2^b$ to encode the results of a single draw: an element $s \in S$ answers, for each of the $b$ bets, whether that bet wins or not. The sample space for our probability model is $S^d$; the $j$-th coordinate $\omega_j$ reports the results of the $b$ bets on the $j$-th draw.
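Example \[skip example\] is small enough to verify by exhaustive search. The following Python sketch (ours) implements the definition of ${\oblong}$ literally, trying every tuple of pairwise disjoint index sets, and confirms the claimed failure of associativity:

```python
from itertools import combinations, product

d = 3
OMEGAS = list(product([0, 1], repeat=d))  # S = {0, 1}; outcomes are S^d

def cyl(J, w):
    """The J-cylinder of w: outcomes agreeing with w on coordinates in J."""
    return {v for v in OMEGAS if all(v[j] == w[j] for j in J)}

SUBSETS = [frozenset(c) for r in range(d + 1)
           for c in combinations(range(d), r)]

def bkr(*events):
    """A_1 [] A_2 [] ... [] A_b by brute force over disjoint J_1, ..., J_b."""
    result = set()
    for w in OMEGAS:
        for Js in product(SUBSETS, repeat=len(events)):
            disjoint = all(not (Js[i] & Js[j])
                           for i in range(len(Js))
                           for j in range(i + 1, len(Js)))
            if disjoint and all(cyl(Js[i], w) <= A
                                for i, A in enumerate(events)):
                result.add(w)
                break
    return result

# The three events of the example, written out coordinate-wise.
A = {w for w in OMEGAS if w[0] == 0 or (w[0] == 1 and w[1] == 0)}
B = {w for w in OMEGAS if w[0] == 0 or (w[0] == 1 and w[1] == 1)}
C = {w for w in OMEGAS if w[1] == 0 and w[2] == 1}
```

Running it confirms $(A {\oblong}B) {\oblong}C = \{(0,0,1)\}$ while $A {\oblong}(B {\oblong}C) = \emptyset$, so ${\oblong}$ is not associative.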
It is easy to see that, in the notation of Proposition \[cond\], $$\label{contained} \left( I \cap W_1 \cap \cdots \cap W_b \right) \subseteq {\bigbox}_1^b W_i.$$ Indeed, given an outcome $\omega \in I \cap W_1 \cap \cdots \cap W_b$, we can take, for $i=1$ to $b$, $J_i := \{j \mid \text{on draw $j$, bet $i$ wins \emph{and} $n_{ij}=1$} \}$. Since $\omega \in I$, the sets $J_1,\ldots,J_b$ are mutually disjoint; and since $\omega \in W_i$, $|J_i| \ge w_i$. Hence, $\operatorname{Cyl}(J_i,\omega) \subseteq W_i$, and thus $\omega \in [W_i]_{J_i}$, for $i=1$ to $b$. The left hand side of this containment can be a strict subset of the right hand side. For example, with $b=2$ bets and $d = 2$ draws, suppose that $w_1 = w_2 = 1$ and the gambler lays both bets on both draws. The outcome where both bets win on both draws is not in the left side of the containment but is in $W_1 {\oblong}W_2$. To write this example out fully, we think of the binary encoding, $S=\{0,1,2,3\}$ corresponding to $\{00,01,10,11\}$, so that, for example, $0\in S$ represents a draw where both bets lose, $1 \in S$ represents the outcome 01 where the first bet loses and the second bet wins, $2 \in S$ represents the outcome 10 where the first bet wins and the second bet loses, and $3 \in S$ represents the outcome 11 where both bets win. The event $I$ is the set of $\omega= (\omega_1, \omega_2)$ for which no coordinate $\omega_j$ is equal to 3. The event $W_1$ is the set of $\omega$ such that at least one of the coordinates is equal to 2 or 3, and the event $W_2$ is the set of $\omega$ such that at least one of the coordinates is equal to 1 or 3.
Certainly, $$I \cap W_1 \cap W_2 = \{ (1, 2), (2, 1) \},$$ yet $$W_1 {\oblong}W_2 = \{ (1, 2), (2, 1), (1, 3), (2, 3), (3, 1), (3, 2), (3, 3) \}.$$ Set theoretic considerations related to the BKR inequality {#sect bkr set} ---------------------------------------------------------- It is obvious that, for events $B_1, \ldots, B_r \subseteq S^d$ and $J \subseteq \{1,\ldots,d\}$, $$\label{capcup} \left[\bigcap\nolimits_{1 \le i \le r} B_i\right]_J = \bigcap_{1 \le i \le r} [B_i]_J \quad \text{and} \quad \left[\bigcup\nolimits_{1 \le i \le r} B_i\right]_J \supseteq \bigcup_{1 \le i \le r} [B_i]_J.$$ For unions, the containment may be strict, as in Example \[skip example\], where $A \cup B = S^d$ hence $[A \cup B]_\emptyset = S^d$, whereas $[A]_\emptyset = [B]_\emptyset = \emptyset$. \[cylcomp\] For $A \subseteq S^d$ and $J,K \subseteq \{1,\ldots,d\}$, $$[[A]_J]_K = [A]_{J \cap K}.$$ Suppose first that $\omega \in [[A]_J]_K$; that is, $\operatorname{Cyl}(K, \omega) \subseteq [A]_J$. We must show that $\omega$ is in $[A]_{J \cap K}$; i.e., that if $\omega''$ is in $\operatorname{Cyl}(J \cap K, \omega)$, then $\omega''$ is in $A$. Given $\omega'' \in \operatorname{Cyl}(J \cap K, \omega)$, pick $\omega'$ to agree with $\omega$ on $K$ and with $\omega''$ on $\{1,\ldots,d\} \setminus K$. Then $\omega' \in \operatorname{Cyl}(K, \omega) \subseteq [A]_J$, so $\operatorname{Cyl}(J, \omega') \subseteq A$. Moreover, $\omega''$ agrees with $\omega'$ on $(\{1,\ldots,d\} \setminus K) \cup (J \cap K) \supseteq J$: they agree off $K$ by construction, and on $J \cap K$ because both agree with $\omega$ there. Hence $\omega'' \in \operatorname{Cyl}(J, \omega') \subseteq A$, proving $\subseteq$. We omit the proof of the containment $\supseteq$, which is easier.
\[bkr.incl\] For $A_1, A_2, \ldots, A_b \subseteq S^d$, we have: $${\bigbox}_1^b A_i \subseteq \left( \cdots \left( (A_1 {\oblong}A_2) {\oblong}A_3 \right) \cdots \right) {\oblong}A_b.$$ By induction, using the monotonicity noted above, it suffices to prove that $$\left( {\bigbox}_1^b A_i \right) \subseteq \left( {\bigbox}_1^{b-1} A_i \right) {\oblong}A_b.$$ With unions over $K \subseteq \{ 1, \ldots, d \}$ and pairwise disjoint $J_1, J_2, \ldots$, $$\begin{aligned} \left( {\bigbox}_1^{b-1} A_i \right) {\oblong}A_b & = & \bigcup_K \left( \left[ {\bigbox}_{i=1}^{b-1} A_i \right]_{K} \cap [A_b]_{K^c} \right) \label{line 1}\\ & = & \bigcup_K \left( \left[ \bigcup_{J_1, \ldots, J_{b-1}} \bigcap_{i=1}^{b-1} [A_i]_{J_i} \right]_{K} \cap [A_b]_{K^c} \right) \label{line 2} \\ & \supseteq & \bigcup_K \left( \left( \bigcup_{J_1, \ldots, J_{b-1}} \bigcap_{i=1}^{b-1} [[A_i]_{J_i}]_K \right) \cap [A_b]_{K^c} \right) \label{line 3} \\ & = & \bigcup_K \left( \left( \bigcup_{J_1, \ldots, J_{b-1}} \bigcap_{i=1}^{b-1} [A_i]_{J_i\cap K} \right) \cap [A_b]_{K^c} \right) \label{line 4} \\ & = & \bigcup_{J_1, \ldots, J_b} \bigcap_{i=1}^b [A_i]_{J_i} = {\bigbox}_1^b A_i \label{line 5}\end{aligned}$$ The justifications are as follows. The first line is the definition of ${\oblong}$, where $K^c$ denotes the complement of $K$ in $\{1, \ldots, d\}$. The second line follows by expanding ${\bigbox}_{i=1}^{b-1} A_i$ according to its definition. The set inclusion in the third line results from applying both parts of the identity and containment for intersections and unions above. The fourth line follows by applying Lemma \[cylcomp\] on the composition of cylinder operators. The last line is just re-labeling the indices: the previous line is a union, indexed by pairwise disjoint $J_1, \ldots, J_{b-1}$ and a set $K$; for $i=1$ to $b-1$, set $K_i= J_i \cap K$, and for index $b$, take $K_b = K^c$—the set of possible indices $\alpha = (J_1 \cap K, \ldots, J_{b-1}\cap K,K^c)$ is identical to the set of $\alpha=(K_1, \ldots, K_b)$ with $i \ne j$ implying $K_i \cap K_j = \emptyset$—and then we switch notation back, from $K_i$’s to $J_i$’s.
Probability considerations related to the BKR inequality {#sect bkr prob} -------------------------------------------------------- References for the BKR inequality were given at the start of Section \[sect bkr\]. \[bkr thm\] Let $S$ be a finite set, and let ${\mathbb{P}}$ be a probability measure on $S^d$ for which the $d$ coordinates are mutually independent. (The coordinates might have different distributions.) For any events $A,B \subseteq S^d$, with the event $A {\oblong}B$ as defined in Section \[bkr.def\], $${\mathbb{P}}(A {\oblong}B) \le {\mathbb{P}}(A) {\mathbb{P}}(B).$$ \[ind.bkr\] Under the hypotheses of Theorem \[bkr thm\], for $ b = 2, 3, \ldots$ and $A_1, \ldots, A_b \subseteq S^d$, $$\label{b inequality} {\mathbb{P}}(A_1 {\oblong}A_2 {\oblong}\cdots {\oblong}A_b) \le \prod_{i=1}^b {\mathbb{P}}(A_i).$$ For $b = 2$, this is the original BKR inequality. For $b \ge 3$, we apply Proposition \[bkr.incl\] to see that $${\mathbb{P}}(A_1 {\oblong}\cdots {\oblong}A_b) \le {\mathbb{P}}\left( \left( \cdots \left( (A_1 {\oblong}A_2) {\oblong}A_3 \right) \cdots \right) {\oblong}A_b \right).$$ Applying the $b = 2$ case and induction provides the claim. We can now prove Proposition \[cond\], which from our new perspective is a simple corollary of the extended BKR inequality, Corollary \[ind.bkr\]. In view of the containment established in Section \[bkr.connect\], we have: $${\mathbb{P}}(I \cap W_1 \cap \cdots \cap W_b) \le {\mathbb{P}}\left( {\bigbox}_1^b W_i \right),$$ and by Corollary \[ind.bkr\] $${\mathbb{P}}\left( {\bigbox}_1^b W_i \right) \le \prod {\mathbb{P}}(W_i).
\qedhere$$ The optimization problem we actually solve ========================================== In order to exploit the material in the previous section, we *replace* the definition of $P$ with $$P({\vec{n}}; \vec{w}, \vec{p}) := \left(\parbox{3in}{{probability of winning at least $w_i$ times on bet $i$ with $n_i$ tickets, for all $i$,} and no wins on dependent bets} \right);$$ from Proposition \[cond\], we know that then $$\label{Peq.2} P({\vec{n}}; \vec{w}, \vec{p}) \le \prod_{i=1}^b D(n_i; w_i, p_i).$$ We will find a lower bound ${\vec{c}}\cdot {{\vec{n}^*}}$ on the amount spent by a gambler who did not win dependent bets by solving not the original problem, but rather $$\label{mother} {\vec{c}}\cdot {{\vec{n}^*}}= \min_{{\vec{n}}} {\vec{c}}\cdot {\vec{n}}\quad \text{s.t.}\quad \ n_i \ge w_i {\quad\text{and}\quad}\prod_{i=1}^b D(n_i; w_i, p_i) \ge {{\varepsilon}}.$$ We furthermore relax the requirement that the numbers of bets, the $n_i$’s, be integers, and we extend the domain of $D$ to include positive real values of $n_i$ as in [@BarlowProschan p. 945, 26.5.24]: $$\label{prob} D(n; w, p) = I_p(w, n-w+1), \quad \text{where} \quad I_x(a, b) := \frac{\int_0^x t^{a-1} (1-t)^{b-1} \, {\mathrm{d}t}}{\int_0^1 t^{a-1} (1-t)^{b-1} \, {\mathrm{d}t}}$$ is the *regularized Beta function*. The function $I_x$, or at least its numerator and denominator, are available in many scientific computing packages, including Python’s SciPy library. Extending the domain of the optimization problem to non-integral $n_i$ can only decrease the lower bound ${\vec{c}}\cdot {{\vec{n}^*}}$, and it brings two benefits, which we now describe. In our examples, $\prod D(w_i; w_i, p_i)$ is much less than ${{\varepsilon}}$, and consequently $n_i^* > w_i$ for some $i$. As $D(n; w, p)$ is monotonically increasing in $n$, we have an *equality* $\prod D(n_i^*; w_i, p_i) = {{\varepsilon}}$. This is the first benefit, and by the inequality displayed above it implies that $P({{\vec{n}^*}}; \vec{w}, \vec{p}) \le {{\varepsilon}}$.
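The identity $D(n; w, p) = I_p(w, n-w+1)$ at integer $n$, and its extension to real $n$, can be sanity-checked numerically. The sketch below is ours; it approximates the Beta integrals with Simpson's rule rather than calling SciPy's `betainc`, purely to stay dependency-free, and compares against the exact binomial tail at a small integer case:

```python
import math

def binom_tail(n, w, p):
    # exact binomial upper tail via the complement (fine for small n)
    return 1.0 - sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
                     for k in range(w))

def reg_inc_beta(x, a, b, steps=20_000):
    """Regularized incomplete Beta I_x(a, b), a, b >= 1, by Simpson's rule."""
    f = lambda t: t ** (a - 1) * (1 - t) ** (b - 1)
    def simpson(hi):
        h = hi / steps  # 'steps' must be even
        s = f(0.0) + f(hi)
        s += 4 * sum(f((2 * i - 1) * h) for i in range(1, steps // 2 + 1))
        s += 2 * sum(f(2 * i * h) for i in range(1, steps // 2))
        return s * h / 3
    return simpson(x) / simpson(1.0)

def D(n, w, p):
    # the continuous extension of D used in the relaxed optimization problem
    return reg_inc_beta(p, w, n - w + 1)
```

At integer $n$ the two computations agree, and $D$ increases monotonically through non-integer values of $n$, as the relaxation requires.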
Therefore, as in §\[DiMaggio.sec\], if all $N$ people in the gambling population spent at most ${\vec{c}}\cdot {{\vec{n}^*}}$ on tickets, the probability that one or more of the gamblers would win at least $w_i$ times on bet $i$ for all $i$ is at most $N {{\varepsilon}}$. To say it differently: *the solution ${\vec{c}}\cdot {{\vec{n}^*}}$ is an underestimate of the minimum plausible spending required to win so many times.* The second benefit of extending the domain of the optimization problem is to make the problem convex instead of combinatorial. The convexity allows us to show that any local minimum (as found by the computer) attains the global minimal value. \[global\] A local minimizer ${{\vec{n}^*}}$ for the optimization problem (relaxed to include non-integer values of $n_i$) attains the global minimal value. We shall show that the set of values of ${\vec{n}}$ over which we optimize, the *feasible set*, $$\label{global.S} \left \{ {\vec{n}}\in {\mathbb{R}}^b \mid \text{$n_i \ge w_i$ for all $i$} \right \} \cap \left \{ {\vec{n}}\in {\mathbb{R}}^b \mid \prod\nolimits_i D(n_i; w_i, p_i) \ge {{\varepsilon}}\right\},$$ is convex. As the *objective function* ${\vec{c}}\cdot {\vec{n}}$ is linear in ${\vec{n}}$, the claim follows. The first set in defines a polytope, which is clearly convex. Because the intersection of two convex sets is convex, it remains to show that the second set is also convex. The logarithm is a monotonic function, so taking the log of both sides of an inequality preserves the inequality, and we may write the second set as: $$\label{global.S2} \left \{ {\vec{n}}\in {\mathbb{R}}^b \mid \sum\nolimits_i \log D(n_i; w_i, p_i) \ge \log {{\varepsilon}}\right\}.$$ For $0 \le x \le 1$ and $\alpha, \beta$ positive, the function $$\beta \mapsto \log I_x(\alpha, \beta)$$ is concave by [@FinnerRoters Cor. 4.6(iii)]. Hence $\log D(n_i; w_i, p_i)$ is concave for $n_i \ge w_i$.
A sum of concave functions is concave, so the set above is convex, proving the claim. If we solve the optimization problem for Louis Johnson’s wins—including not only his Play 4 wins but also many of his prizes from scratcher games—we find a minimum amount spent of at least \$2 million for ${{\varepsilon}}= 5 \times 10^{-14}$. Monotonicity {#monotonicity .unnumbered} ------------ Some of the gamblers we studied for the investigative report claimed prizes in more than 50 different lottery games. In such cases it is convenient to solve for only a subset of the games to ease computation by reducing the number of variables. Since removing restrictions results in minimizing the same function over a set that includes the original set, the resulting “relaxed” optimization problem still gives a lower bound for the gambler’s minimum amount spent. The man from Hollywood {#hollywood} ====================== Louis Johnson’s astounding total of 252 prizes was beaten by a man from Hollywood, Florida, whom we refer to as “H.” During the same time period, H claimed 570 prizes, more than twice as many as Johnson did. Yet Mower’s news report [@PBP] stimulated a law enforcement action against Johnson but not against H. Why? All but one of H’s prizes are in Play 4, which is fundamentally different from scratcher games: if you buy \$100 worth of scratcher tickets for a single \$1 game, this amounts to 100 (almost) independent Bernoulli trials, each of which is like playing a single \$1 scratcher ticket. In Play 4, you can bet any multiple of \$1 on a number to win a given drawing; if you win (which happens with probability $p = 10^{-4}$), then you win 5000 times your bet. If you bet \$100 on a single Play 4 draw, your odds of winning remain $10^{-4}$, but your possible jackpot becomes \$500,000, and if you win, the Florida Lottery records this in the list of claimed prizes as if it were 100 separate wins. Clearly, these are wins on dependent bets.
So, to infer how much H had to spend on the lottery for his wins to be unsurprising, first we have to estimate how much he bet on each drawing. Unfortunately, we cannot deduce this from the list of claimed prizes, because it includes the date the prize was claimed but not the specific drawing the ticket was for. (Louis Johnson’s Play 4 prizes were all claimed on distinct dates, so it is reasonable to assume they were bets on different draws.) The Palm Beach Post paid the Florida Lottery to retrieve a sample of H’s winning tickets from their archives. We think H’s winning plays were as in Table \[hollywood.table\].

  date         number played   amount wagered
  ------------ --------------- ----------------
  12/6/2011    6251            \$52
  ??           ????            \$1
  11/11/2012   4077            \$101
  12/31/2012   1195            \$2
  2/4/2013     1951            \$212
  3/4/2013     1951            \$200

  : H’s Play 4 wins during 2011–2013[]{data-label="hollywood.table"}

To find a lower bound on the amount H spent by solving the optimization problem above, we imagine that he played several different Play 4 games, distinguished by their bet size. For simplicity, let us pretend that a player can bet \$1, \$50, \$100, or \$200, and suppose we observed H winning these bets 2, 1, 1, and 2 times, respectively. Using these as the parameters, with the same probability cutoff ${\varepsilon}= 5 \times 10^{-14}$, gives a minimum amount spent of just \$96,354. But we can find a number tied more closely to H’s circumstances. In 2011–2013, he claimed \$2.84 million in prizes. These are subject to income tax. If his tax rate was about 35%, he would have taken home about \$1.85 million. If he spent that entire sum on Play 4 tickets, what is the probability that he would have won so much?
We can find this by solving the following optimization problem with $p = 10^{-4}$, ${\vec{w}}= (2, 1, 1, 2)$, and ${\vec{c}}= (1, 50, 100, 200)$: $$\max_{{\vec{n}}} \prod_{i=1}^4 D(n_i; w_i, p) \quad \text{s.t.} \quad \ w_i \le n_i {\quad\text{and}\quad}{\vec{c}}\cdot {\vec{n}}\le 1.85 \times 10^6.$$ The solution is about $0.0016$, or one-in-625: it is plausible that H was just lucky. That’s because he made large, dependent bets, while we know from the examples above that betting a similar sum on smaller, independent bets is less likely to succeed. This illustrates a principle of casino gambling from [@Dubins p. 170] or [@Mosteller \#37]: *bold play is better than cautious play*. If you are willing to risk \$100 betting red-black on a game of roulette, and you only care about doubling your money at the end of the evening, you are better off wagering \$100 on one spin and then stopping, rather than placing 100 \$1 bets. The real world ============== How did this paper come to be? One of us, Lawrence Mower, is an investigative reporter in Palm Beach, Florida. His job is to find interesting news stories and spend 4–6 months investigating them. He wondered whether something might be going on with the Florida Lottery, so he obtained the list of prizes and contacted the other three of us to help analyze the data. Below we describe some of the non-mathematical aspects. What some people get up to {#what-some-people-get-up-to .unnumbered} -------------------------- Various schemes can result in someone claiming many prizes. Clerks at lottery retailers have been known to scratch the wax on a ticket lightly with a pin, revealing just enough of the barcode underneath to be able to scan it, as described in [@Ombudsman paragraph 75]. If they scan it and it’s not a winner, they’ll sell it to a customer, who may not notice the very faint scratches on the card. 
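The roulette comparison at the end of the paragraph above is a one-line computation. We assume an American double-zero wheel, so a red-black bet wins with probability $18/38$ (the text does not specify which wheel):

```python
p_win = 18 / 38  # even-money bet on an American double-zero roulette wheel

# Bold play: stake the whole $100 on a single spin; you double iff it wins.
bold = p_win

# Cautious play: one hundred $1 bets.  Each bet nets +$1 or -$1, so ending
# the evening with at least $200 means a net gain of +$100, which requires
# winning every single one of the 100 bets.
cautious = p_win ** 100
```

Bold play doubles your money with probability about $0.474$; cautious play does so with probability about $3.5 \times 10^{-33}$.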
Lottery operators in many states replaced the linear barcode with a 2-dimensional barcode to make this scam more difficult, but it still goes on: a California clerk was arrested for it on 9/25/14. Sometimes gamblers will ask a clerk to check whether a ticket is a winner. If it is, the clerk might say it’s a loser, or might say the ticket is worth less than it really is, then claim the prize at the lottery office—and become the recorded winner. Of course, most clerks are honest, but this scheme is popular; see, for example, [@Ombudsman paragraphs 47, 48, 80, 146]. Another angle, *ticket aggregation*, goes as follows. A gambler who wins a prize of \$600 or more may be reluctant to claim the prize at the lottery office. The office might be far away; the gambler might be an illegal alien; or the gambler might owe child support or back taxes, which the lottery is required to subtract from the winnings. In such cases, the gambler might sell the winning ticket to a third party, an *aggregator*, who claims the prize and is recorded to be the winner. The aggregator pays the gambler less than face value, to cover income tax (paid by the aggregator) and to provide the aggregator a profit. The market rate in Florida is \$500-\$600 for a \$1000 ticket. Some criminals have acted as aggregators to launder money. They pay the gambler in cash, but the lottery pays them with a check, “clean” money because it is already in the banking system. Notorious Boston mobster Whitey Bulger [@Bulger] and Spanish politician Carlos Fabra [@Fabra] are alleged to have used this dodge. When questioned by Mower, some of our suspects confessed to aggregating tickets, which is a crime in Florida (Florida statute 24.101, paragraph 2). Outcomes in Florida {#outcomes-in-florida .unnumbered} ------------------- Before Mower’s story appeared, he interviewed Florida Lottery Secretary Cynthia O’Connell about these gamblers. She answered that they could be lucky: “That’s what the lottery is all about. 
You can buy one ticket and you become a millionaire” [@PBP]. Our calculations show that for most of these 10 gamblers, this is an implausible claim. O’Connell and the Florida Lottery have since announced reforms to curb the activities highlighted here [@LotteryResponse]. They stopped lottery operations at more than 30 stores across the state and seized the lottery terminals at those stores. More news stories and outcomes in other states {#more-news-stories-and-outcomes-in-other-states .unnumbered} ---------------------------------------------- Further stories about “too frequent” winners have now appeared in California (KCBS Los Angeles 10/30/14, KPIX San Francisco 10/31/14), Georgia (Atlanta Fox 5 News 9/12/14, Atlanta Journal-Constitution 9/18/14), Indiana (ABC 6 Indianapolis, 2/19/15), Iowa (The Gazette, 1/23/15), Kentucky (WLKY, 11/20/14), Massachusetts (Boston Globe, 7/20/14), Michigan (Lansing State Journal, 11/18/14), New Jersey (Asbury Park Press, 12/5/14 & 2/18/15; USA Today, 2/19/15), and Ohio (Dayton Daily News 9/12/14). In Massachusetts, ticket aggregation is not illegal per se. In California, the lottery makes no effort to track frequent winners. In Georgia, ticket aggregation is illegal but the law had not been enforced. The practice was so widespread that elementary calculations (much simpler than those presented in this article) cast suspicion on 125 people. This gap in enforcement, in principle easy to detect, came to light as a consequence of the much more challenging investigation in Florida described here. This led to a change in policy announced by the Georgia Lottery Director, Debbie Alford, on 9/18/14: “We believe that most of these cases involved retailers agreeing to cash winning tickets on behalf of their customers — a violation of law, rules, and regulations.” Acknowledgements {#acknowledgements .unnumbered} ================ We are grateful to Don Ylvisaker, Dmitry B. Karp, and an anonymous referee for helpful comments and insight. 
The second author’s research was partially supported by NSF grant DMS-1201542.
Someone who purchased a Lotto ticket for the Aug. 6, 2016 drawing at the Wegmans on Latta Road in Greece, come on down. You just won $42,541. New York Lottery announced Monday that the winning ticket was sold at the store, 3177 Latta Road. The unidentified individual successfully matched five out of six numbers plus the bonus number. There were also 16,544 Lotto winners overall for the Aug. 6 drawing. The numbers were 13, 18, 21, 27, 58 and 59, with the bonus number 33. New York Lottery officials are encouraging players to recheck their tickets from Aug. 6. Tickets expire one year from the date of the game’s drawing. To learn how to redeem a prize, click here.
https://www.democratandchronicle.com/story/news/2016/08/08/winning-lotto-ticket-sold-greece/88396070/
The world is always in motion. And so are the people who inhabit it. Anymore, people change homes and cars and jobs as often as they change their shirts. Plaintiffs’ attorneys occasionally encounter the problem of a lost client. They often learn that the client is missing when the insurance company extends an offer to settle. After the offer comes in, the Plaintiff’s attorney reaches out to the client to discuss the offer. It is then that the injured claimant cannot be found. In an attempt to address this problem, some inventive Plaintiff’s attorney devised what seemed to be a reasonable solution. That attorney included in the retainer agreement a provision that allowed the attorney to accept a reasonable offer on behalf of the client, even without the client’s approval. The provision also gave the attorney authority to sign the release on behalf of the missing client and to hold the settlement funds in the attorney’s trust account until the claimant could be found. Having encountered a release signed by the Plaintiff’s attorney rather than the claimant, a defense attorney questioned whether such a practice was ethical. The State Bar of Nevada Standing Committee on Ethics and Professional Responsibility addressed the issue. In its Formal Opinion 35, December 11, 2006, the Committee said that the decision to settle belongs to the client and not the attorney, and that for an attorney to settle without consulting the client is a violation of Nevada Rule of Professional Conduct 1.2(a). A copy of the Opinion can be found here. The opinion cites a number of cases that support this proposition. i. In re Lansky, 678 N.E.2d 1114 (Ind. 1997). A contingent fee agreement stating that “[The Clients] hereby authorize our attorney to settle this matter for any amount he determines is reasonable without further oral or written authorization” is held by the Indiana Supreme Court to violate the comparable Indiana rule. ii. In re Grievance Proceeding, 171 F. Supp. 2d 81 (D. Conn.
2001). A fee agreement delegating all settlement authority to the attorney is held by the U.S. District Court to violate the comparable Connecticut rule. iii. In re Lewis, 463 S.E.2d (Ga. 1995). A fee agreement granting the attorney “full power and authority to settle, compromise, or take such action as he might deem proper,” and to “execute any and all instruments” and receive the settlement proceeds is held to violate the comparable Georgia rule and to merit an 18-month suspension. Thus, if the attorney signs the release instead of the client, the legal efficacy of the settlement can be questioned. In other words, if a claimant later learns that the attorney settled his case, there is no doubt that the attorney has acted inappropriately. But the claimant may well seek to undo the release and re-expose the insured to risks that a valid settlement would have avoided. If you are an adjuster, my recommendation is to postpone settlement until the signature of the claimant can be obtained. If you have questions about the efficacy of a settlement, please contact Mike Mills at 702-240-6060 x114. I hope to provide answers to your questions.
http://nevadainsurancelaw.com/accepting-release-signed-attorney-claimant-presents-risks/
Dr. David McKeown is Medical Officer of Health for the City of Toronto and Executive Officer of the Toronto Board of Health. He leads Toronto Public Health, Canada’s largest local public health agency, which provides public health programs and services for 2.7 million residents. He has worked in the public health field for more than 25 years and has served as Medical Officer of Health for East York, the City of Toronto prior to amalgamation, and the Region of Peel. He is an Assistant Professor (Status Only) in the Dalla Lana School of Public Health at the University of Toronto.

As the newly appointed Chief Planner for the City of Toronto, Jennifer is committed to creating places where people flourish. Over the past decade Jennifer has been repeatedly recognized by the Canadian Institute of Planners, OPPI, the Design Exchange, and EDRA for her innovative work in Canadian municipalities. Her planning practice is characterized by an emphasis on collaborations across sectors, and broad engagement with municipal staff, councils, developers, business leaders, NGOs and residents’ associations. Jennifer is the founder of Project Walk, which premiered its first short film in 2011 as an official selection at TIFF. In 2012 Jennifer debuted her first TED talk, Walk to School. Jennifer is a graduate of the University of Western Ontario (combined honours English and Philosophy) and has a Master in Environmental Studies (Politics and Planning) from York University.

Stephen Buckley serves as the General Manager for Transportation Services for the City of Toronto. In his role, Steve oversees approximately 1,080 staff, a $330 million operating budget, a $220 million capital budget, and roughly $10 billion in transportation infrastructure assets. Prior to joining the City of Toronto, Steve was the Director of Policy and Planning for the Deputy Mayor for Transportation and Utilities in Philadelphia.
From 2008 through 2012, Steve served as Deputy Commissioner for Transportation in the Philadelphia Streets Department. Prior to working with the City, Steve was the Planning Manager for Parsons Brinckerhoff’s Philadelphia office. His professional experience has spanned numerous aspects of the field of surface transportation – from engineering and design to planning, policy and funding. Steve received his Bachelor of Science in Civil Engineering from Syracuse University. He obtained Master’s degrees in Transportation and in City Planning from the University of California – Berkeley. Steve is an active member of the Transportation Research Board (TRB), where he sits on the TRB’s Committee on Transportation Issues in Major Cities.

Jerry Dobrovolny is Director of Transportation for the City of Vancouver. He is responsible for the 300 staff who look after all aspects of transportation at the city, including strategic planning, neighbourhood traffic calming, greenways and bikeways, parking enforcement, parking meters, street activities and festivals, traffic management, and blueways. He received his civil engineering degree from UBC and a Master of Business Administration from SFU, and has worked at the city for 25 years. Jerry also served as a City Councillor for the City of New Westminster for nine years and played football in the CFL for five years.

Timothy Papandreou has worked for public and private agencies in the transportation and land-use planning field for over 15 years, in both the US and Australia. Timothy is currently Deputy Director of Sustainable Streets, Strategic Planning & Policy for the San Francisco Municipal Transportation Agency. The SFMTA is responsible for managing the modes of transportation (including parking and street enforcement) in the City.
Timothy oversees a team of 30 staff to develop and implement the agency’s economically competitive, sustainable mobility goals through integrated, multi-modal (bicycle, walking, transit, car-sharing, parking and taxi) transportation plans, street design projects, and policies and programs to reduce private auto trips. As a sustainable mobility expert, Timothy represents the agency on several bodies, including the California Transit Association, the National Association of City Transportation Officials, the American Public Transportation Association, the Transportation Research Board, the Asia Pacific Economic Cooperation, and several international city and transport organizations. Timothy has an undergraduate degree in urban and regional planning from the Royal Melbourne Institute of Technology and a master’s in Urban Planning from the University of California, Los Angeles. Timothy leads by example and bikes the talk. He lived in Los Angeles car-free for nearly 9 years using transit, his bicycle and occasional car rentals, and is car-free in San Francisco.

Chicago Department of Transportation (CDOT) Commissioner Gabe Klein was appointed by Mayor Rahm Emanuel in May 2011. Under Klein, CDOT is a customer-focused agency that is a national leader in technology, multi-modal innovation and sustainable design that consistently makes a positive impact on quality of life for Chicago’s 2.6 million residents. Klein has previously worked in a number of leadership roles in transportation, technology, consumer services and consulting. Prior to Mayor Emanuel’s administration, he was Director of the Washington D.C. Department of Transportation. He also co-founded On The Fly, an electric vehicle vending company that was one of the first multi-unit and multi-channel street vending companies in the U.S. As Regional Vice President for Zipcar, Klein oversaw the carsharing system in the D.C. region.
Klein also held Director-level roles at ProfessionaLink, a national technology consultancy, where he led marketing and business development efforts into Fortune 1000 companies. At the bicycle retailer Bikes USA, he was responsible for daily operations in all locations, spanning seven states. Klein grew up in the cycling industry, and has ridden avidly and worked in cycling-related ventures since childhood. He holds a degree in Marketing Management from Virginia Tech.

Alyssa is an advisor, mentor, and cheerleader for all things Open Streets in her position at 8-80 Cities. She works with local governments, agencies, and organizations on the development and implementation of Open Streets programs based on best practices from Central and South America. She has facilitated Open Streets training sessions for diverse stakeholders, as well as study tours to Guadalajara, Mexico, for delegations from Canada, the United States and South Africa to explore that city’s high-calibre Open Streets program. Alyssa is a strong advocate for Open Streets programs as a multidisciplinary tool to promote healthy active lifestyles, encourage civic pride and engagement, build support for infrastructure improvements and make people of all ages, abilities and economic status feel welcome in their city.

Fiona Chapman is the Manager of Pedestrian Projects, Transportation Services with the City of Toronto. She is primarily responsible for the implementation of the City’s Walking Strategy – a set of 52 actions designed to create high quality pedestrian environments and foster a culture of walking in all of Toronto’s neighbourhoods. Since 2003, Fiona has spent a significant amount of time involved in key city-building issues and has worked on a broad range of policy and operational issues related to parks, licensing, planning, housing, social research and active transportation.
Prior to joining the city, she served as the Executive Director of a national charitable foundation, where she helped launch a nation-wide public education campaign to reduce infant mortality. She has served as a School Trustee in downtown Toronto, has a degree in political science, is an active volunteer, and also trained as a midwife.

Loy is the Director of Transportation Planning for the Transportation and Community Planning Department for the Regional Municipality of York. Loy has 25 years of experience, most of it in transportation planning, covering master plans, TDM, and demand forecasting and modelling. He has worked for York Region, the City of Toronto and the University of Toronto, and also as a freelance consultant. He is one of the founders and a key player in the development and implementation of the Smart Commute Initiative, a TDM project covering the Greater Toronto and Hamilton Area.

Emma is a recent graduate of the University of Toronto with a Master of Science in Urban Planning. With a socio-spatial background, she fosters a strong interest in active transportation, city building at the strategic and local level, and the importance of community engagement in the planning process. She has both academic and professional experience in projects related to cycling for transportation, open streets (ciclovía) programs, the creative and temporary use of vacant city space, and participatory design workshops. Most recently, as a Researcher with the Toronto Cycling Think & Do Tank, Emma completed an extensive literature review compiling key methods, experiences and studies, with a focus on identifying barriers and fostering cycle-friendly behaviour change. From here, Emma developed a framework for an integrated suite of tools to increase cycle use in daily transport.
Daniel Egan, Manager of Cycling Infrastructure and Programs for Toronto Transportation Services, is responsible for planning and implementing cycling infrastructure and programs, including bicycle lanes and trails, bicycle racks and bike stations, Bike Week, and other cycling promotion activities. When Daniel was also responsible for the City’s pedestrian program, he managed the development of Toronto’s first Walking Strategy, which recently won the Federation of Canadian Municipalities Award of Excellence for leadership and innovation in the transportation category.

Jayne Engle-Warnick is an urban planner with an international background working in public, private, and not-for-profit sector settings in the fields of urban revitalization policy and programs, community and economic development, and sustainability planning. She has lived and worked in Canada, the United States, and Eastern and Western Europe. After 20 years in practice, Jayne has recently returned to PhD studies at the McGill University School of Urban Planning in order to focus her research on planning resilient communities, with fieldwork in post-earthquake Haiti. Jayne holds master’s degrees in urban and regional planning and an MBA in real estate and urban land studies. She is a member of the American Institute of Certified Planners, the Canadian Institute of Planners, and the Ordre des urbanistes du Québec. She works at the Montréal Urban Ecology Centre as Senior Planner & Project Lead for the project “Active Neighbourhoods Canada / vers un réseau de Quartiers verts, actifs et en santé”.

Trevor Haché is the Policy Coordinator at Ecology Ottawa, and was amongst the founding Board Members of this grassroots, non-profit, municipally-focused environmental organization, which is working to make Ottawa the green capital of Canada. He and his family live in the suburban community of Kanata and he is working to bring Complete Streets to the nation’s capital.

Kate has an M.Sc.
in planning and 20 years’ experience working in community development as both a professional and a leadership volunteer. Her work focuses on creating healthy, active communities through active transportation planning. As a consultant with the Canada Walks department of Green Communities Canada, Kate is leading the WALK Friendly Ontario project, a province-wide designation to recognize municipalities for their efforts to create and improve the conditions for walking.

Chris Hardwicke is a senior associate of Sweeny Sterling Finlayson &Co Architects, a Registered Professional Planner and an urban designer with over 14 years of experience. He is a fellow of the Urban Design Institute in New York, a Recognized Practitioner in Urban Design in the UK, a member of the Council for Canadian Urbanism, and a founding member of the Project for Public Spaces Leadership Council. Chris was the design advisor for the Complete Streets by Design report published by the Toronto Centre for Active Transportation, and was the project lead for the City of Saskatoon’s Public Space, Activity & Urban Form Strategic Framework, which won the Premier’s Award for Excellence in Community Planning, the Canadian Institute of Planners New and Emerging Planning Initiatives Award, and the International Downtown Association’s Award of Merit. Chris is a member of the TCAT Steering Committee.

Anne Harris is an epidemiologist and assistant professor at Ryerson University’s School of Occupational and Public Health. She earned her doctorate at the University of British Columbia’s School of Population and Public Health in Vancouver and moved to Toronto in 2011 for a postdoctoral fellowship at the Occupational Cancer Research Centre associated with Cancer Care Ontario. She joined Ryerson University in the fall of 2012.
Anne is a co-investigator on the Bicyclists’ Injuries and the Cycling Environment (BICE) study, a large team project designed to assess associations between infrastructural characteristics and risk of injury in Vancouver and Toronto. Anne is particularly interested in questions of study design and analysis for environmental and occupational influences on health. She lives and works in Toronto with cycling as her main mode of transportation.

Jacquelyn is the Manager of the Cycling Office at the City of Mississauga, responsible for overseeing the implementation of cycling infrastructure, policy, programs and promotion. Jacquelyn has over ten years of experience in the field of sustainable transportation and Transportation Demand Management and holds a Bachelor’s Degree in Environmental Studies from York University. Jacquelyn is a member of the Association for Commuter Transportation Canada and the Association of Pedestrian and Bicycle Professionals. She lives in Mississauga with a young family and loves the view from the Confederation Parkway bridge over Hwy 403 on her bike ride to work. Jacquelyn is also a member of the TCAT Steering Committee.

Sean Hertel, MES, MCIP, RPP, is a Toronto-based professional planner with his own consulting practice, leading a wide spectrum of projects incorporating transit-oriented development, policy formulation, public engagement and facilitation, and urban design. Sean is currently leading the land use planning for a new transit line in the southwestern suburbs of Chicago, as part of an international consulting team. Sean has a particular expertise and interest in the urbanization of the suburban landscape, having formerly managed York Region’s Centres, Corridors + Subways Program, and currently conducting suburban governance research at the City Institute at York University. Sean was the first staff writer for the Toronto urban affairs newsletter Novae Res Urbis and was the founding editor of the Greater Toronto Area edition.
He is an occasional contributor to this and other urban-focused publications.

As part of the Strategic Transportation Planning unit, Zlatko Krstulic, P. Eng., Transportation Planner, has been involved in the design and implementation of cycling infrastructure across the City. Zlatko has also developed recommendations for a cycling safety improvement plan designed to address problem intersections for Ottawa cyclists, and is currently managing the update of Ottawa’s Cycling Master Plan. He is interested in policy issues related to the promotion of active transportation, as well as cycling trends analysis.

Trudy recently graduated from the University of Toronto with an M.A. in Environmental History. A winner of the Sister St. John Scholarship (2011) and a Bronfman Leadership Scholarship (2010), she published her article “Gore’s Box: Wealth, status and the environment?” in VOXII, the Woodsworth College Undergraduate Journal (2010). Committed to working for sustainability, Trudy is currently project co-ordinator and researcher with the Toronto Cycling Think & Do Tank. Trudy has broad experience in project management, sustainability practice, publishing and marketing. She is co-author of the Toronto Cycling Think & Do Tank’s recently released report, Mapping Cycling Behaviour in Toronto.

Kate Mulligan is a Research Consultant for Healthy Public Policy with Toronto Public Health. Kate has a PhD in health geography from McMaster University and experience in research and advocacy for healthy public policies at municipal, provincial and international levels. Kate’s current focus is on bringing together health evidence, community engagement and policy research to support a healthier built environment and more active city.

Emily’s experience growing up on a small rural island in Ontario is not an obvious stepping stone to a career in urban issues.
However, after completing a Bachelor of Arts in International Development at McGill University in Montreal, and visiting and working in cities around the world such as Bangkok, Tokyo, Delhi and Hamburg, Emily developed a deep interest in community development and urban issues. After returning to school to complete an acclaimed Graduate program in International Project Management at Humber College in Toronto, Emily joined the 8-80 Cities team. As Director of Programs and Partnerships, Emily is always excited to develop meaningful relationships with partners and to design impactful community-based programs to create more people friendly cities. Trained in facilitation and participatory community assessments, Emily is passionate about authentically engaging people in the process of community development and urban design. Emily will be one of the Workshop Leaders of the People Making Places: The 8-80 Toolbox Hands-on Workshop at the Complete Streets Forum.

Sue has worked with the City of Peterborough since 1995. Initially she worked on developing the city’s waste diversion programs; she now works in the Transportation Division. Her job title is Transportation Demand Management Planner, and the essence of her work is to reduce the number of vehicle trips taken within the city. Reducing vehicle trips improves our quality of life by reducing traffic, road congestion, noise and air pollution, while introducing ways to be physically active by walking and cycling. One of Sue’s projects is the improvement and expansion of the city’s trail system.

Brett Sears is a Senior Project Planner in the Transportation Planning Department of MMM Group’s headquarters office in the Toronto area. He has 13 years of experience in transportation planning and traffic analysis in Canada, the United States and the Middle East. Brett has prepared transportation master plans for jurisdictions in Ontario, the country of Qatar and the Los Angeles metropolitan region.
He was recently part of the team that completed the Town of Ajax Transportation Master Plan Update and is presently working on the City of Greater Sudbury TMP and the County of Simcoe TMP Update. Brett is a member of the Canadian Institute of Planners and is a Registered Professional Planner in Ontario.

Over the past 9 years, Justin has had the opportunity to engage in a variety of transportation and urban planning work, both as a consultant and as a public servant. Environmental assessments, master planning, accessibility audits, transportation studies, traffic modelling, development charges, pop-up projects, transportation impact assessments, and neighbourhood traffic calming are some examples of his past work. Justin is a professional engineer currently working as a senior project manager with the City of Ottawa’s Planning and Growth Management department, focusing on transportation-related projects.

Jonathan Tong is a transportation consultant with more than 10 years’ experience in the transportation planning industry. He is an expert in project evaluation, including economic appraisal, multiple account evaluation and business case development. Project evaluation is an important part of the project development cycle, as it helps decision makers to identify the optimal solution from a range of options in a systematic, robust and transparent way. It also sets out the rationale for the project so that the benefits can be measured against objectives, and this can support discussions with potential funders and investors. His work spans a range of modes, from walking and cycling to urban transit and commuter rail.
In addition to the City of Toronto Wayfinding Strategy project, Jonathan has led the project evaluation workstream on a number of high-profile transit projects in the Greater Toronto and Hamilton Area, including the GO Rail Electrification Study, Hamilton Rapid Transit and the Hurontario-Main Street Rapid Transit, for which “complete streets” is a key part of the project vision.

Ryan is TCAT’s Complete Streets Project Manager and CLASP Facilitator. He is responsible for providing support to Canadian municipalities interested in adopting Complete Streets policies and working in collaboration with Toronto Public Health to develop and implement a community engagement strategy around active transportation in two Toronto neighbourhoods: The Annex and Black Creek. Ryan has extensive experience related to active transportation policy, planning, and community engagement in a variety of cultural contexts. He has spoken at a number of conferences and events in Canada and around the world, including the 2012 Walk 21 Conference in Mexico City, the 2012 Conference of the Ontario Professional Planners Institute, and the upcoming 2013 Velo-City Conference in Vienna. Ryan has also written for a variety of blogs, magazines, and journals about sustainability planning and active transportation, including Spacing, Ground Magazine, and the Ontario Planning Journal. Ryan has an M.Sc. in Geography, Urban, and Environmental Studies from Concordia University in Montreal.

Katie is a current graduate student at the University of Toronto, completing the Master of Science in Planning program. Her current research interests include active and public transportation, as well as cultural norms and barriers that affect environmental sustainability. Within the area of active transportation, she is particularly interested in connections to land use and public health.
She has been working with the Toronto Cycling Think & Do Tank in the social and civic infrastructure stream, with a focus on mapping components. She joined the research team in December 2012 and will continue to participate in the project this summer. Katie also works part-time at the Sustainability Office at the University of Toronto, and enjoys cycling for both recreation and transportation.

Leslie Woo leads transportation planning for Canada’s largest urban region. The key author of The BIG Move, a 25-year transportation plan resulting in over $16B in transit expansion and 430,000 new jobs, Leslie champions transit-oriented development, design excellence and the women’s network at Metrolinx. Educated in architecture, environmental studies and urban planning, Trinidad-born Leslie is a Fellow of the International Women’s Forum, a member of the Scientific Strategy Council of the Institut pour la ville en mouvement, the Urban Land Institute Toronto Advisory Board and TAC’s Urban Transportation Council Executive, and a retired member of the OAA.
http://www.tcat.ca/complete-streets-forum-2013-speakers/
I admit that I began reading to find evidence that supported my view. I was sure science would back me up. And then I, complete with a list of Harvard-format references, would win the next debate. The problem was that science didn’t agree with me. I could find pockets of supporting evidence, but the overwhelming consensus was that I was wrong. It made me phenomenally uncomfortable to see a consensus emerging that directly contradicted my beliefs about how the world should be. But ultimately, science is not about how the world should be. Science is about how the world is. And if I want to make it different, I need an accurate picture of how it is now. This is a blog on reading about genetics – what I’ve learned, and what I want to do about it. The lessons are the things that struck me as a teacher most, and are largely based on reading about intelligence or other cognitive characteristics. I want to note up front that I am not an expert, and that writing this is part of my attempt to learn more. If and when I’ve made a mistake please let me know and I will rectify it as quickly as possible.

Lesson #1: Genetics explains much more of the variation between people than I was willing to accept

The most influential type of study for this blog is the twin study, which looks to explain the variation between people and attribute it to one of three categories of cause: the shared environment (factors that would be common to a pair of twins living in the same household and attending the same school); the non-shared environment (all other environmental factors); and genetics. Studies of cognitive ability tend to find that variation is 10-20% shared environment, 30-40% non-shared environment, and 50-60% genetics. This was hard for me to accept. I wanted to believe that most variation is caused by things within our control. Instead the shared environment, of which school is only one part, explains under a fifth of the variation between people.
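To make the variance-partitioning idea concrete, here is a minimal simulation sketch of my own (not taken from any particular study). It assumes a trait that is literally the sum of three independent components whose variances match the rough shares quoted above; the component names and the exact shares are illustrative choices, not data.

```python
import random

random.seed(0)
N = 100_000
# assumed variance shares, roughly the twin-study figures quoted above
shares = {"genetic": 0.55, "shared_env": 0.15, "nonshared_env": 0.30}

# draw each component with variance equal to its share, so total variance ~ 1
components = {
    name: [random.gauss(0, share ** 0.5) for _ in range(N)]
    for name, share in shares.items()
}
# the observed trait is the sum of the three independent components
trait = [sum(vals) for vals in zip(*components.values())]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

total = variance(trait)
for name, share in shares.items():
    est = variance(components[name]) / total
    print(f"{name}: assumed {share:.2f}, recovered {est:.2f}")
```

Because the components are independent, their variances add, and each component's share of the total variance comes back out of the simulated data. Note what this does not tell you: nothing here says how much of any one person's trait is "genetic", only how the spread across people decomposes.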
Before moving on to Lesson #2 it is important to stress that we are talking about variation here – not absolute levels. While we can say that 50% of variation in intelligence is genetic in origin, we can say nothing about how much of your intelligence is caused by your genes.

Lesson #2: Environments change how genes are expressed

Your genetic code is fixed, but what you do with it isn’t. Geneticist Nessa Carey likens your genetic code to the script of a play, which is then interpreted extensively by the actors and director before becoming the performance that eventually appears on stage. Epigenetics is the study of these interpretations, which are as crucial to our biological functioning as the genetic code itself. One thing we learn from epigenetics is that our environment shapes how we interpret our genetic code, and that these interpretations stick. Once we have scrawled annotations over our genetic script they will stay unless we actively rub them out – and when our children inherit our scripts they will find many of our annotations still in place. Extreme or consistent environmental stimuli can create epigenetic modifications that change how your genes are expressed. For example, a stressful environment can lead to genes that control the production of cortisol (a stress hormone) becoming over-expressed, meaning that you become much more easily stressed in the future. Such a change will then persist, shutting off chunks of working memory and reducing executive function in years to come. As many epigenetic modifications are heritable, it is difficult for twin studies to separate their effect from the effect of genes themselves, and so it is possible that some of the causal impact of genetics is actually environmental in origin. As our ability to do more complex analysis with the genome itself increases, we will no doubt find out whether this possibility means anything in practice.
Lesson #3: Environments correlate with genes

A tall child is more likely than a short child to be asked to try out for the basketball team. Where we may have a genetic propensity towards a certain area, we tend to seek out (or be pushed towards) an environment that increases that propensity. This means that a small effect that begins life as just an inkling of interest or talent can easily grow into a specialised environment that exaggerates initially small effects. It is possible that correlations like this are responsible for a significant proportion of our genes’ impact. If we adapt environments, whether consciously or not, we will be magnifying genetic differences.

Lesson #4: Genetics does not determine outcomes

Twin studies observe the differences we see today and explain their origins. They do not have any say in how big these differences are or will be in the future. So the fact that 50% of variation in a characteristic is due to genetic causes today does not mean that it must be tomorrow. Nor does it mean that we must accept present levels of difference as necessary. The numbers we find in these studies are not natural constants. Behavioural geneticist Robert Plomin says that studies tell you what is, not what could be. He likens our genetic understanding of intelligence to our understanding of weight. Whilst it is obviously the case that people can be genetically predisposed to put on more or less weight than each other, it is also true that with the right interventions almost anybody can achieve a healthy weight. The same is true for intelligence. There may be genetic predispositions, but we can make sure that everybody achieves an acceptable level by providing the right environment.

Lesson #5: Genetics assumes determinism

If a study assumes that all difference is caused by either genetics, the shared environment or the non-shared environment, then it is also making one other underlying assumption: that everything about a person has an external cause.
It assumes that your intelligence or success is a function solely of your genes and your environment. But what if everything about us is not 100% deterministic? I was hesitant to write this lesson down, for fear of seeming to criticise the entire body of work genetics has given us. Science has to operate by studying the relationship between cause and effect. I cannot challenge it for failing to account for independent free will. But I am nonetheless uncomfortable not accounting for it. I do not know nearly enough in this area to do anything more than speculate. But I do wonder whether the large influence of the non-shared environment, that bucket for everything we can’t put our finger on, may be substantially down to things like attitude and motivation that may not be fully caused by an external mechanism.

So what do I take from all this? Firstly, that genetics plays an unquestionably big role in explaining who we are, and how our brains work. Even if some of the effect attributed to genes is in fact environmental in origin (due to epigenetics or gene-environment correlations), there is no doubt that genes have a huge influence. But secondly, even though genes make us all different, they don’t determine our cognitive future. Long-term memory still has unlimited capacity; brain plasticity is still immense; and good teaching can still take advantage of this. Genes determine difference, but they’re no excuse for educational inequality.
Laura McInerny’s third touchpaper problem is: “If you want a student to remember 20 chunks of knowledge from one lesson to the next, what is the most effective homework to set?”

After a day of research at the problem-solving party, I came to this worrying conclusion: setting homework to remember knowledge from one lesson to the next could actually be bad for their memory. So stop setting homework on what you did in that lesson – at least until you’ve read this post.

Components of Memory

Bjork says that memories have two characteristics – their storage strength and their retrieval strength. Storage strength describes how well embedded a piece of information is in the long-term memory, while retrieval strength describes how easily it can be accessed and brought into the working memory. The most remarkable implication of Bjork’s research surrounds how storage strength is built.

[Figure: Storage and Retrieval strength – courtesy of Kris Boulton]

Retrieval as a ‘memory modifier’

Good teaching of a piece of information can get it into the top left hand quadrant, where retrieval strength is high but storage strength is low. Once a chunk of knowledge is known (in the high retrieval sense of knowing), its storage strength is not developed by thinking on it further. Rather, storage strength is enhanced by the act of retrieving that chunk from the long-term memory. This is really important. Extra studying doesn’t improve retention. Memory is improved by the act of retrieval.

The ‘Spacing Effect’

Recalling a chunk of knowledge from the long-term memory strengthens its storage strength. However, for this to be effective, the chunk’s retrieval strength must have diminished. ‘Recalling’ a chunk ten minutes after you’ve studied isn’t going to be very effective, as your brain doesn’t have to search around for such a recent memory. Only when a memory’s retrieval strength is low will the act of recall increase storage strength.
This gives rise to the spacing effect – the well-established phenomenon that distributing practice across time builds stronger memories than massing practice together. Rohrer & Taylor (2006) go a step further and compare overlearning (additional practice at the time of first learning) with distributed practice. They find no effect of overlearning, and ‘extremely large’ effects of distributed practice on future retention.

Optimal intervals

There is an optimal point for recalling a memory, in order to maximise its storage strength. At this point, the memory’s retrieval strength has dropped enough for the act of retrieval to significantly increase storage strength, but not so much as to prevent it from being accurately recalled. Choosing the correct point can improve future recall by up to 150% (Cepeda, et al., 2009).

Most studies into optimal spacing share a common design. Subjects learn a set of information at a first study session. There is then a gap before a second study session where they retrieve learned information. Before a final test there is a retrieval interval (RI) of a fixed time period. Studies such as Cepeda, et al. (2008) show that the optimal gap is a function of the length of the RI, and that longer RIs demand longer gaps between study periods. However, this function is not a linear one – shorter RIs have optimal gaps of 20-40% of the RI, whereas longer RIs have optimal gaps of 5-10%.

Better too long than not long enough

Cepeda et al.’s 2008 study looks at four RIs: 7, 35, 70, and 350 days. The optimal gaps for maximising future recall were 1, 11, 21 and 21 days respectively, and these gaps improved recall by 10%, 59%, 111% and 77%. Perhaps their most important finding is the shape of the curves relating the gap to future retention. For all RIs these curves begin climbing steeply, reach a maximum, and then decline very slowly or plateau.
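Cepeda et al.'s numbers can be turned into a rough rule of thumb. The sketch below linearly interpolates between their four reported (RI, gap) points; the function name and the interpolation scheme are my own simplification for illustration, not part of the study.

```python
# Optimal study gaps reported by Cepeda et al. (2008): retention
# intervals (RI) of 7, 35, 70 and 350 days had optimal first-review
# gaps of 1, 11, 21 and 21 days respectively. Interpolating between
# those points is my own rough approximation of their curve.

RI_DAYS = [7, 35, 70, 350]
GAP_DAYS = [1, 11, 21, 21]

def suggested_gap(ri_days):
    """Linearly interpolate an optimal first-review gap for a given RI (days)."""
    if ri_days <= RI_DAYS[0]:
        return GAP_DAYS[0]
    if ri_days >= RI_DAYS[-1]:
        return GAP_DAYS[-1]
    for k in range(len(RI_DAYS) - 1):
        r0, r1 = RI_DAYS[k], RI_DAYS[k + 1]
        if r0 <= ri_days <= r1:
            g0, g1 = GAP_DAYS[k], GAP_DAYS[k + 1]
            return g0 + (g1 - g0) * (ri_days - r0) / (r1 - r0)

print(suggested_gap(7))    # 1
print(suggested_gap(70))   # 21.0
```

Since the curves plateau past the maximum, rounding the suggested gap up, not down, is the safer choice.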
The implication is that when setting a gap between study periods it is better to err on the side of making it too long than risk making it too short. Too long an interval will have only small negative effects. Too short an interval is catastrophic for storage strength.

Why homework could be bad

Homework is usually set as a continuation of classwork, where students complete exercises that evening on what they learned in school that day. This constitutes a short gap between study sessions of less than a day. We know that where information is to be retained for a week, the optimal gap is a day, and that where this is not possible it is better to leave a longer gap than a shorter one. For longer RIs, the sort of periods we want students to remember knowledge for, the optimal gap can be longer than a week.

Therefore, if you want students to remember twenty chunks of knowledge for longer than just one lesson to the next, the best homework to set is no homework! Setting homework prematurely actually harms the storage strength of the information learned that day by stopping students reaching the optimal retrieval interval. In this case, students who don’t do their homework are better off than ones who do!

Why I might be wrong, and what we need to do next

There is not enough good evidence of how to stagger multiple study sessions with multiple gaps. For example, we do not know where it would be best to place a third study session, only a second. However, we do know that retrieval is a memory modifier, and so additional retrieval should strengthen memories as long as the gap is sufficiently large for retrieval strength to have diminished. Given we know that retrieving newly learned information after a gap of one day is good for storage strength, it may be that studying with gaps of, say, 1, 3, 10 and 21 days is better for storage strength than a solitary study session after 21 days, where the RI is long (350 days or greater).
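That speculative 1, 3, 10, 21-day pattern can be approximated by a simple expanding-interval generator. This is purely illustrative: the tripling factor, the 21-day cap, and the function name are my assumptions, and, as noted above, the research on multiple gaps does not yet exist.

```python
# Sketch of an expanding review schedule: each gap grows by a fixed
# factor and is capped. The tripling factor and 21-day cap are
# assumptions of mine, chosen to roughly mimic the 1, 3, 10, 21-day
# pattern speculated about in the post.

def expanding_gaps(first_gap=1, factor=3, sessions=4, cap=21):
    """Return a list of review gaps (in days) that expand geometrically."""
    gaps, gap = [], first_gap
    for _ in range(sessions):
        gaps.append(min(round(gap), cap))
        gap *= factor
    return gaps

print(expanding_gaps())  # [1, 3, 9, 21]
```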
In this case, for teachers who only have one or two lessons a week, homework could help them make up the optimal gaps by providing for study sessions between lessons.

The optimal arrangement of multiple gaps is a priority for research. We need to better understand how these should be staged, so that we can begin to set homework schedules that support memory rather than undermine it. Until then, only set homework on previously learned knowledge, and err on the side of longer delays. My students will be getting homework on old topics only from now on.

Bibliography
- Joe Kirby on memory this weekend
- EEF Neuroscience Literature Review
- Dunlosky, et al., 2013. Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology
- Rohrer & Taylor, 2006. The Effects of Overlearning and Distributed Practise on the Retention of Mathematics Knowledge
- Cepeda, et al., 2009. Optimizing Distributed Practice
- Cepeda, et al., 2008. Spacing Effects in Learning: A Temporal Ridgeline of Optimal Retention
- Everything Kris Boulton writes
Nokia Wants to Conquer Apple by Changing Everything

Courtesy of Benzinga. The following are the M&A deals, rumors and chatter circulating on Wall Street for Wednesday August 29, 2012:

Hearing Chatter of Cisco Interest in Acme Packet

The Rumor: Cisco (NASDAQ: CSCO) was rumored Wednesday to be considering a bid for Acme Packet (NASDAQ: APKT), according to sources. There was also mention that APKT has hired an advisor, possibly JP Morgan, to handle potential bids. In February, the company was rumored to be looking for $45 per share from potential buyers. Spokespersons for Acme Packet and Cisco were contacted by Benzinga, but neither would comment on the rumor. Acme Packet closed at $19.73 Wednesday, a gain of 10.41% on 2.5 times average volume.

Hearing Private Equity Chatter in Seagate

The Rumor: Seagate (NASDAQ: STX), which has recently been rumored to be putting together a deal to acquire OCZ Technology (NASDAQ: OCZ), was itself the subject of a takeover rumor on Wednesday. Reports circulated that Seagate was being looked at by private equity firms. A spokesperson for Seagate had no comment on the rumor. Seagate closed at $33.59 Wednesday, a loss of 0.24% on lower than average volume.

Report Gleacher is for Sale

The Rumor: Gleacher (NASDAQ: GLCH) has put itself up for sale and has hired Credit Suisse as an adviser, according to Bloomberg. A spokesperson for Gleacher was not available for comment. Gleacher closed at $0.76 Wednesday, a gain of 1.33% on almost twice the average volume.

Stratex CEO Says He Does Have Financing in Place for Magellan Deal

The Offer: Stratex Oil & Gas Holdings (OTC: STTX) CEO Stephen Funk spoke with Benzinga Wednesday regarding his bid for Magellan Petroleum (NASDAQ: MPET). Stratex offered $2.30 per share on Monday to acquire Magellan. Funk told Benzinga he did have the financing in place to complete the deal.
Magellan responded to the Stratex offer on Wednesday, saying the Board was in receipt of the offer letter but had not yet had a chance to review it. The unsolicited offer was received following a brief meeting between the two companies' CEOs in the spring of 2012. Funk said he met with Magellan CEO Thomas Wilson in April, but did not reveal what they had discussed. Asked about some of the legal matters in his past, Funk said, “that was 13 years ago”. He indicated he had moved on with a clean slate. Magellan Petroleum closed at $1.31 Wednesday, a gain of 3.15% on almost 9 times average volume.
https://www.philstockworld.com/2012/09/05/nokia-wants-to-conquer-apple-by-changing-everything-7/
ABSTRACT

There is a fundamental gap in our understanding of how host-parasite interactions maintain genetic variation within species, including humans. Interactions between humans and long-lived eukaryotic parasites may be the most important agents of natural selection across the human genome and may be responsible for the maintenance of genome-wide functional variation within humans (balancing selection). However, linking the agents of balancing selection with their genomic targets remains a major challenge. Continued existence of this gap is an important problem because until it is filled there is a limited understanding of the mechanisms responsible for potential maintenance of genetic variation within species. The long-term goal of the investigator's laboratory is to understand the genetic basis of host-parasite adaptations. The objective over the next five years is to identify agents and targets of selection arising from host-parasite interactions. The central hypothesis is that host-parasite interactions maintain genetic variation within species. The rationale is that transitions to parasitism on the genetic model plant Arabidopsis thaliana have occurred within the genetic model Drosophila lineage, allowing in-depth study. Guided by strong preliminary data, this hypothesis will be tested by pursuing these two overarching research questions: 1) Identify molecular genetic changes that underpin the transition to parasitism in a fly; 2) Determine if host-parasite interactions lead to the maintenance of genome-wide variation in flies and plants. Under the first question, the genomic architecture underlying the evolutionary transition to parasitism will be identified in the Drosophilidae. Next-generation sequencing and comparative genomics studies will identify genes necessary for the evolution of parasitism from free-living fruit flies. Preliminary studies show that this approach holds great promise for finding "parasite genes" and that the approach is feasible in the applicants' hands. Under the second question, populations of parasitic flies will be evolved with single or mixed host genotypes that vary in resistance traits. An evolve-and-resequence approach will test if genome-wide variation is maintained by balancing selection in flies. In the plants, a genome-wide association study (GWAS) approach will be used to identify loci associated with resistance to flies. The applicants have shown that these approaches will identify targets of balancing selection. Under both aims, functional studies using in vitro and in vivo approaches will be used to link evolutionary patterns with functional phenotypes. The proposed research is significant because it will be the first study in a continuum of research expected to lead to an integrative understanding of the role that host-parasite interactions play in shaping patterns of genome evolution. There is promise that general principles will be discovered relating to the role host-parasite interactions play in the maintenance of genetic variation. The research proposed is innovative because it represents a departure from current approaches to studies on the evolution of host-parasite interactions, which are restricted to microbes or non-model systems.
https://arizona.pure.elsevier.com/en/projects/co-evolutionary-genetics-of-host-parasite-interactions
03 June 2016 The EU has ambitious plans for reducing air pollution and climate change by 2050. Yet while the policy plans are separate, greenhouse gases and air pollutants come from the same sources, such as transportation and industry. Air quality and climate change are also linked at a far more basic level: chemistry. The air pollutant ground‑level ozone, for example, is of great concern because of its detrimental impact on human health and the environment. Ozone forms when certain compounds released by plants are oxidized in the presence of nitrogen oxides, a type of pollutant released through combustion. Projections for air pollution in 2050 show that ozone levels should decrease as air pollution goes down. But the processes that form ozone work faster at higher temperatures, so as climate change leads to hotter summers, ozone damage could increase. Finally, climate policies to increase bioenergy production could further add to ozone damage through an increase in the compounds that can be converted to ozone. In new research, conducted as part of the YSSP’15, Dutch PhD candidate Carlijn Hendriks explored these complex interactions. Using multiple models of economic markets, greenhouse gas emissions, land‑use change, and air chemistry transport, Hendriks found that the effect of a warming climate on ozone production was the key factor, far outweighing the reductions in ozone that would come from current EU air quality policy by 2050. The smallest effect came from bioenergy production, which would cause only a slight increase in ozone damage.
https://iiasa.ac.at/web/home/resources/publications/options/s16_Climate_change_policies_could_increase_smog.html
Ghana National Cleaner Production Centre (GNCPC)

The Ghana National Cleaner Production Centre (GNCPC) is a subsidiary of the Environmental Protection Agency (EPA) of Ghana. It was inaugurated on 20th January, 2012 and registered as a company limited by guarantee in March, 2014. The mandate of the centre is to develop and implement projects and activities that promote resource efficiency, cleaner production initiatives, capacity building and low-carbon clean technology in the areas of energy, water and raw-material efficiency and waste management practices in industry. These would result in reduced manufacturing cost, lowered environmental pollution and toxicity in products, increased resource productivity and profits, and improved health and safety performance. The GNCPC’s focus is to develop, offer and promote business advisory services, investment portfolios and business coaching; to promote policy reforms at the SME and national level that influence compliance; and to undertake investment activities, including shareholdings in businesses relating to RECP promotion, as well as consultancy services to keep the centre in business. Under the “Ghana E-waste Model” (GEMOD) project, GNCPC has provided extensive inputs on the set-up of collection centres and contributed to the drafting of comprehensive e-waste legislation, thus contributing to the implementation of the Hazardous and Electronic Waste Control and Management Act, Act 917, as well as its complementing Legal Instrument (LI) 2250. GNCPC is also currently implementing the Swiss-funded project “Sustainable Recycling Industries” (SRI), which aims to support the sustainable integration and participation of SMEs in developing countries in the global recycling of secondary metals, in close cooperation with informal workers.
Due to the strong ties with governmental agencies and the Ministry of Environment, Science, Technology and Innovation (MESTI), GNCPC will take the lead in WP3 to (re-)establish a collection mechanism as per the enforcement requirements of Act 917 and LI 2250.

Website: https://ncpcgh.org/
Email: [email protected]
Phone: +233 (303) 331 009 / 331 010 / 331 012
https://e-magin-ghana.com/node/64
Ohm’s law for Kids

Ohm’s Law is a basic law of physics which states that the current passing through a conductor is proportional to the voltage applied across it.

Mathematical Equation

V ∝ I

where ‘V’ is voltage and ‘I’ is current. Introducing the constant of proportionality ‘R’ removes the proportionality sign. R is actually the resistance of the conductor, which is assumed constant:

V = IR

We can rearrange this formula to find current and resistance as well:

I = V/R and R = V/I

The Ohm’s Triangle

You can easily memorize the above equations from the triangle.
- Draw a triangle.
- Divide it into 3 parts.
- Write V in the top part.
- Write I in the bottom-left part.
- Write R in the bottom-right part.

Then, to find any quantity, cover it up: the two parts left visible show the formula.
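The triangle trick can also be written as a few lines of code. This is a small illustration; the helper name `ohms_law` is my own.

```python
# Ohm's law: V = I * R. Given any two of the three quantities,
# the third follows, just like covering one part of the triangle.

def ohms_law(v=None, i=None, r=None):
    """Solve V = I*R for whichever argument is left as None."""
    if v is None:
        return i * r          # cover V: multiply the bottom row
    if i is None:
        return v / r          # cover I: V over R
    if r is None:
        return v / i          # cover R: V over I
    raise ValueError("leave exactly one argument as None")

print(ohms_law(i=2, r=6))     # 12 (volts)
print(ohms_law(v=12, r=6))    # 2.0 (amps)
print(ohms_law(v=12, i=2))    # 6.0 (ohms)
```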
https://ohmlaw.com/ohms-law-for-kids/
Are the floodgates ready to open? When the JOBS Act was signed into law in 2012, it laid the groundwork for new forms of small business capital, like crowdfunding, to take hold. Since then, a number of bills have sought to build on that foundation, yet very few have made it into law. That could be about to change. With an incoming administration that is determined to shake things up, lawmakers are dusting off a variety of bills that, taken together, promise to further transform the small business capital landscape. Here’s a roundup of some of the key pieces of legislation that will be moved to the front burner in the coming year.

Helping Angels Lead Our Startups Act (HALOS) – One of the first bills out of the gate is the HALOS Act, which was reintroduced on January 4 by a bipartisan group of legislators. HALOS would clarify rules for “demo days” and other events where entrepreneurs pitch their companies to potential investors. Currently, pitch events operate in a gray area that risks running afoul of “general solicitation” prohibitions. The bill would exempt demo days and other such events from being considered general solicitation under Regulation D to protect startups from inadvertently violating this rule. UPDATE: the HALOS Act passed the House on Jan. 10 by a vote of 344 to 73. Impact: Would remove the ambiguity around pitch events, making them a more effective forum for connecting entrepreneurs and investors.

Financial Services Innovation Act (aka “regulatory sandbox”) – Introduced by Patrick McHenry (R-NC) last fall, the Financial Services Innovation Act will get a serious hearing this year. Its objective is to promote responsible innovation by allowing financial startups to experiment with new models, without fear that regulators will pounce. The bill is sometimes referred to as the “American regulatory sandbox” after a similar law established in the UK in May 2016.
The US law would create a Financial Services Innovation Office (FSIO) in each of 12 agencies charged with financial oversight, including the Securities & Exchange Commission (SEC), Consumer Financial Protection Bureau (CFPB), Federal Trade Commission (FTC) and Farm Credit Agency (FCA). Startups could petition the appropriate office to waive or modify rules that pertain to a new product or service, as long as it serves the public and does not pose financial risk to the system or consumers. The agency would then have 30 days to accept or deny the petition. If the petition is accepted, the agency and the business would enter into an Enforceable Compliance Agreement, or ECA, that would establish a compliance plan. The ECA would be applicable across regulatory agencies and state lines. The FSIO system would replace the current practice of requesting a “no action” letter from regulators, a lengthy and, many say, flawed process. The Financial Services Innovation Act could be used by fintech startups as well as large and small banks and credit unions. For more information, see this FSIO Explainer PDF put together by McHenry’s staff. Impact: Could encourage more innovation and experimentation in financial services and help regulators be a partner, rather than foe, in the innovation process.

Fix Crowdfunding Act – Perhaps the most closely watched bill will be another McHenry effort that is intended to fix flaws in the final Regulation Crowdfunding exemption. He first floated the bill before Regulation CF even took effect. But it was a watered-down version that ultimately was passed by the House. That version allowed Special Purpose Vehicles (SPVs), which would let small investors be grouped into one legal entity to simplify a startup’s “cap table,” and clarified the threshold at which public reporting requirements are triggered.
But it was missing many features that crowdfunding advocates had hoped for, including increasing the amount that companies can raise under the exemption, allowing them to “test the waters” for investor interest before they commit to the expense of a Reg CF offering, and clarifying the responsibilities and liabilities of funding portals. Now, with an incoming President Trump, McHenry is likely to reintroduce a more robust bill that will address a longer list of grievances. Impact: Would give Reg CF a much-needed boost and make it more attractive to issuers.

Small Business Capital Formation Enhancement Act – Passed with broad bipartisan support in the House last February, this bill would require the SEC to act on recommendations arising from the agency’s annual Small Business Capital Formation Forum. The event has been an important forum for raising and debating ideas for improving capital access for small businesses, yet the recommendations developed at the forum and presented to the SEC are often ignored by the agency. Impact: Would amplify the ideas and recommendations of small business advocates and introduce a measure of accountability into the SEC rule-making process.

The bills are part of a broader agenda of regulatory reform that Congress is expected to take up. “There are a lot of great bills that we support,” says Karen Kerrigan, president & CEO of the Small Business & Entrepreneurship Council in Washington DC. “The challenge will be on the Senate side,” which has a lot on its plate, including confirmation hearings, she added.
https://www.locavesting.com/spotlight/legislation-to-buoy-crowdfunding-fintech-this-year/
Genetic study finds association between reduced vitamin D and multiple sclerosis risk

Genetic findings support observational evidence that lower vitamin D levels are associated with increased risk of multiple sclerosis, according to a new research article by Brent Richards, from McGill University, Canada, and colleagues published this week in PLOS Medicine. Multiple sclerosis is a debilitating autoimmune disease that affects the nerves in the brain and spinal cord. There is no known cure for multiple sclerosis and it usually presents between the ages of 20 and 40 years. While some observational evidence suggests there may be a link between lower vitamin D levels and multiple sclerosis risk, it is difficult to infer a causal relationship because individuals who develop multiple sclerosis in these studies might share another unknown characteristic that increases their risk of multiple sclerosis (this is known as confounding). Using a genetic technique called Mendelian randomization to reduce the possibility of confounding, the authors examined whether there was an association between genetically reduced vitamin D levels (measured by the level of 25-hydroxyvitamin D, the clinical determinant of vitamin D status) and susceptibility to multiple sclerosis among participants in the International Multiple Sclerosis Genetics Consortium study, which involves 14,498 people with multiple sclerosis and 24,091 healthy controls. The authors found that a genetic decrease in the natural-log-transformed vitamin D level by one standard deviation was associated with a 2-fold increased risk of multiple sclerosis. While the Mendelian randomization approach used by the authors largely avoids the possibility of confounding or reverse causation, the reliability of these findings may be limited by some of the assumptions made by the researchers during their analysis.
Nevertheless the authors conclude, "genetically lowered vitamin D levels are strongly associated with increased susceptibility to multiple sclerosis. Whether vitamin D sufficiency can delay, or prevent, multiple sclerosis onset merits further investigation in long-term randomized controlled trials." The authors also note, "ongoing randomized controlled trials are currently assessing vitamin D supplementation for the treatment and prevention of multiple sclerosis ... and may therefore provide needed insights into the role of vitamin D supplementation."
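The logic of Mendelian randomization can be sketched with its simplest estimator, the Wald ratio: divide a genetic variant's effect on the outcome by its effect on the exposure. The numbers below are invented purely so the toy result echoes the 2-fold risk figure reported above; the actual study combined several vitamin-D-associated variants, and the variable names and values here are hypothetical.

```python
# Toy Wald-ratio sketch of Mendelian randomization (illustrative only).
import math

beta_gene_vitd = -0.10   # hypothetical SNP effect on natural-log vitamin D level (per allele)
beta_gene_ms = 0.0693    # hypothetical SNP effect on log-odds of MS (per allele)

# Wald ratio: implied effect of a 1-SD change in vitamin D on MS log-odds
wald = beta_gene_ms / beta_gene_vitd

# Odds ratio for a 1-SD *decrease* in vitamin D
odds_ratio = math.exp(-wald)
print(round(odds_ratio, 2))  # 2.0, echoing the 2-fold risk in the text
```

Because the variant is assigned at conception, this ratio is far less vulnerable to the confounding that plagues observational comparisons, which is the point the article makes in prose.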
https://medicalxpress.com/news/2015-08-genetic-association-vitamin-d-multiple.html
01 Sep 2021

The recent decision of the Court of Appeal in Shanghai Shipyard Co Ltd v Reignwood International Investment (Group) Company Ltd [2021] EWCA Civ 1147 has demonstrated neatly the importance of the parties to a shipbuilding contract having a clear understanding of the legal nature of English law guarantees. The key issue is often whether the guarantor is obliged to pay a claim regardless of whether the validity of that claim has first been established by agreement between the parties to the shipbuilding contract or, in the absence of agreement, through litigation or arbitration proceedings. If the document is drafted as a demand guarantee, the guarantor is obliged to pay regardless of any dispute under the shipbuilding contract. Demand guarantees are usually issued by a bank, for example, a refund guarantee by which a bank guarantees the shipyard's obligation to refund instalments. The bank has no interest in the underlying dispute, and will usually hold security from the shipyard to cover its liability under the guarantee. In contrast, if a guarantee is to be issued by a parent company, for example to secure the buyer's obligation to pay instalments, this is usually a "see-to-it" guarantee: the parent company does have an interest in the outcome of the underlying dispute and would not wish to pay a disputed claim until it had been proven in litigation or arbitration proceedings. To protect a shipyard from the consequences of its refund guarantor being obliged to pay a disputed claim, it is usual for the refund guarantee to provide that the guarantor's obligation to pay may be deferred, in the event of arbitration proceedings having been commenced, until the issue of an arbitration award. At first sight (and in the opinion of the first instance judge in this case) this may appear to convert a demand guarantee into a see-to-it guarantee, by making the document subject to the outcome of an arbitration award, and thereby dependent upon the underlying dispute.
However, the Court of Appeal has confirmed this is incorrect, and therefore a dangerous assumption for a buyer to make. The consequence is that if the parties to a shipbuilding contract were to agree the provision of reciprocal guarantees, whereby the parent company guarantee mirrors the wording of the refund guarantee, the parent company then exposes itself to the same potential liability as the refund guarantor, i.e. may be obliged to pay a claim, even if it is invalid. It may be thought that the consequences are not severe, as the parent company would still have the protection of being able to defer payment until following the outcome of an arbitration. However, in order to obtain the right to defer the payment obligation under a demand guarantee, the wording of the guarantee which provides that protection must be strictly followed. If it is not, as occurred in this case, the protection is lost, and the demand must be satisfied immediately, regardless of its validity. Shanghai was the builder of a drillship for Opus (under a contract based on the CMAC Standard Newbuilding Contract (Shanghai Form)). Reignwood was the guarantor of Opus's obligations. Opus refused to take delivery of the completed vessel asserting that it had a number of significant defects and was not in a deliverable state. Shanghai rejected the assertions that the vessel was not in a deliverable state and claimed the final instalment from Opus on 11 January 2017. When Opus failed to pay, it made a demand of Reignwood under the guarantee on 23 May 2017. Shanghai commenced proceedings against Reignwood under the guarantee in September 2018. Reignwood commenced arbitration, on behalf of Opus, against Shanghai on 3 June 2019 under the shipbuilding contract. At first instance, the High Court judge held that the guarantee was a 'see to it' guarantee so liability under the guarantee only arose if the buyer was liable to pay the first instalment. 
Reignwood could refuse to pay under the guarantee until the dispute as to whether the final instalment was payable had been resolved. It did not matter that the arbitration was not commenced until after the demand under the guarantee. The Court of Appeal reversed the judge's decision, concluding that it was a demand guarantee and Reignwood was therefore liable to pay following the demand. It could not invoke the contractual clause to delay payment because the arbitration had not been commenced before the demand was made. The Court said that there should be no preconceptions about the nature of the instrument solely from the identity of the guarantor, i.e. just because Reignwood was not a bank, it could not be said that the guarantee was more likely to be a 'see to it' guarantee. The wording of the contract is the key factor. The commercial context of the guarantee was neutral as to the nature of the guarantee, but certain items of language indicated strongly that it was a demand guarantee. The first of these was the use of "ABSOLUTELY and UNCONDITIONALLY", which indicated that the guarantor's obligations were not conditional on the liability of the buyer. Further, clause 1, which set out the main guarantee obligation, said that Reignwood contracted "as primary obligor and not merely as surety". The obligation on the guarantor to pay was triggered "immediately" on "receipt by us [the guarantor] of your [the yard's] first written demand". Payment against demand is a classic hallmark of a demand guarantee. In addition, it would not be appropriate for a surety guarantee to require immediate payment, because the guarantor would need time to assess whether there was an underlying liability to pay the final instalment. Another clause expressly provided that obligations on the guarantor were not to be affected by any dispute under the Building Contract. Clause 4 of the guarantee caused some of the confusion as to the type of guarantee this was. 
It allowed Reignwood to defer payment where there was a dispute between the buyer and the builder, until an arbitration award was produced. The Court of Appeal considered that, contrary to the judge's view, this supported the position that it was an on demand guarantee. The deferral of the payment did not convert the guarantor's obligation from primary to secondary. Under the secondary obligation in a surety guarantee, the guarantor would not be bound by any arbitration award and could challenge a conclusion by the tribunal that the buyer was liable. Clause 4 did not permit that; the guarantor remained obliged to pay on the issue of a document, the clause merely changing the trigger from the written demand to the arbitration award. The final issue was whether clause 4 required the dispute to be submitted to arbitration before the obligation to pay on demand was suspended. The Court of Appeal held that it did. To provide otherwise would be to remove Shanghai's accrued right to payment on a valid demand, and clearer language would be required to remove such an accrued right.
https://www.shlegal.com/insights/surety-guarantee-or-delayed-demand
Out-Law News | 17 Jun 2014

However, EAT president Mr Justice Langstaff said that employers should be careful to ensure that such terms "better represent the realities of the workplace" to avoid them being interpreted as penalties in future. In this case the employer, First Marine Solutions Ltd, and its former employee, Yizhen Li, accepted the original employment tribunal's interpretation of the clause as allowing the employer to deduct payments from an employee who did not work her notice. Mr Justice Langstaff said that this would not necessarily be true in every such case. "If a tribunal has a contract such as this to construe in future, it should ask itself whether the parties, in enacting a clause such as this, really intended that, if an employee left not having worked the full amount of notice – irrespective of whether the notice was given by the company or the employee – there should be paid by the employee to the employer a sum equal to the amount of time which was not spent working during that notice period, but should have been," he said. "I recommend to tribunals who consider such a clause in future that they may wish to think carefully, in the light of the evidence before them in the particular case, whether the parties actually intended a clause such as this to operate as a penalty clause, liquidated damages clause, or simply as a provision that entitled the employer to withhold pay for the period of time not worked during notice. All will, of course, depend upon the particular conclusions and the particular facts, contracts of employment being individual," he said. The clause only permitted the employer to deduct payments from wages due, not to receive or demand a payment, the judge said. Similarly, it did not oblige the employee to make that payment.
This meant that the employer would have had no recourse if the employee "simply upped sticks and left" at the end of a salary period without working notice, or if the employee was dismissed by the employer for a proper reason, he said. Li was a project engineer and subject to a contract which said that if she did not work her notice, the employer would "deduct a sum equal in value to the salary payable for the shortfall in the period of notice" from her final pay. Following a dispute with her manager, Li resigned. However, she did not work her four-week notice period because she said that she had outstanding holiday. It later emerged that she was not due any holidays, so First Marine deducted four weeks' salary from her final pay. Although the parties were in agreement about the practical effect of the clause, they disagreed over how it should be interpreted. Li argued that the clause was in effect a penalty, which would make it unenforceable. First Marine said that it was not a penalty, but rather a 'genuine pre-estimate of loss'. This type of clause is generally valid. Previous cases had established that a sum which is "extravagant and unconscionable in amount" will generally be an unenforceable penalty. However, this is not decided with reference to actual loss but rather by "what was anticipated by the parties at the time they entered into the contract as being likely on termination", the judge said. "This contract, on the face of the contract itself, was for a very different type of employee," he said. "This was not just an engineer but a project engineer. She was engaged at a high salary compared to that at which would apply to a driver [as in a previous case, in which a similar clause was found to be a penalty]." "Engineers are not as common as are drivers, and to obtain one at short notice is always likely to be of particular difficulty. Significant expense might be incurred. 
When the contract as a whole is examined in the context within which it was made, to which reference must be had to decide what the parties intended at the time that they made the contract, it is plain that the fact that she was headhunted meant that the employer placed a particular value upon her services ... There is nothing, on the face of it, which means that a sum of a month's salary is necessarily excessive," he said.
https://www.pinsentmasons.com/out-law/news/eat-clause-in-contract-deducting-a-months-pay-for-failure-to-work-notice-not-an-unenforceable-penalty-clause
This prospective, naturalistic study examined the relationship between different exercise dimensions (i.e., frequency, intensity, duration, and omissions of planned exercise) and psychological well-being among community adults participating in self-selected exercise. For at least 2 months, participants kept daily exercise diaries and provided weekly ratings for depressed mood, anxiety, sleep quality, concentration, alertness, confidence, weight satisfaction, physical fitness, appetite, satisfaction with physical shape and appearance, and stress experienced. Linear mixed model analyses revealed positive associations of exercise frequency, intensity, and duration with a broad range of psychological and mood-related outcomes. In contrast, omissions of planned exercise were associated with a global and detrimental effect on psychological health. A main effect of age and a moderating effect of gender were observed in many of the models. This work contributes to literature on exercise dimensions and psychological constructs and informs future research that is needed to develop physical activity recommendations for improved mental health.

Maggie Evans, Kelly J. Rohan, Alan Howard, Sheau-Yan Ho, Patricia M. Dubbert and Barbara A. Stetson

John Cooper, Barbara Stetson, Jason Bonner, Sean Spille, Sathya Krishnasamy and Sri Prakash Mokshagundam

Background: This study assessed physical activity (PA) in community dwelling adults with Type 2 diabetes, using multiple instruments reflecting internationally normed PA and diabetes-specific self-care behaviors. Methods: Two hundred and fifty-three Black (44.8%) and White (55.2%) Americans [mean age = 57.93; 39.5% male] were recruited at low-income clinic and community health settings.
Participants completed validated PA self-report measures developed for international comparisons (International Physical Activity Questionnaire Short Form), characterization of diabetes self-care (Summary of Diabetes Self-Care Activities Measure; SDSCA) and exercise-related domains including provider recommendations and PA behaviors and barriers (Personal Diabetes Questionnaire; PDQ). Results: Self-reported PA and PA correlates differed by instrument. BMI was negatively correlated with PA level assessed by the PDQ in both genders, and assessed with SDSCA activity items in females. PA levels were low, comparable to previous research with community and diabetes samples. Pain was the most frequently reported barrier; females reported more frequent PA barriers overall. Conclusions: When using self-report PA measures for PA evaluation of adults with diabetes in clinical settings, it is critical to consider population and setting in selecting appropriate tools. PA barriers may be an important consideration when interpreting PA levels and developing interventions. Recommendations for incorporating these measures in clinical and research settings are discussed.
https://journals.humankinetics.com/search?f_0=author&q_0=Barbara+Stetson&print
What You Need to Know About Regionalizing Public Safety Responsibilities J. Scott Tiedemann, Jack Hughes and Todd Simonson are attorneys with Liebert Cassidy Whitmore, a labor and employment law firm representing public agency management. Tiedemann is a managing partner and can be reached at [email protected]. Hughes is a partner and can be reached at [email protected]. Simonson is an associate and can be reached at [email protected]. Many public safety agencies have implemented layoffs and furloughs due to the severe fiscal restraints of the current economic downturn. Nevertheless, at some point staff reductions can undermine public safety. Consequently some agencies are exploring alternate cost-saving measures. One option receiving increased attention is regionalizing public safety services by consolidating, contracting or sharing services with neighboring agencies. Regionalization can yield both short- and long-term cost savings through economies of scale. For example, agencies may spread overhead costs across larger operations, increase purchasing power to obtain better deals on equipment and services, and reduce the need for expensive capital improvement projects. Better still, service levels may improve; for instance, response times may be shortened by eliminating previous service-area boundaries. However, as the saying goes, anything worth having comes with a price. Regionalization is not easy and involves many challenges to consider and overcome. This article identifies some of the issues public agencies must address before making a final decision to regionalize services. Laying the Groundwork Conduct a long-term financial analysis. Without long-term, significant cost savings, regionalization is likely to be a non-starter. Request proposals from your county, district and neighboring agencies, and then have your Finance Department or a third-party consultant conduct a thorough financial analysis to determine the fiscal impact over five to 10 years. 
Conduct an initial legal assessment. In some jurisdictions, the local municipal code, city charter or a memorandum of understanding may severely limit — or even prohibit — consolidating, merging or sharing services with another agency. Your agency’s legal counsel should be able to determine whether such limitations exist and provide advice regarding ways to overcome them. Think incrementally. The neighboring agencies of Arroyo Grande, Grover Beach and the Oceano Community Services District incrementally implemented the consolidation of fire services (for more information, see "Consolidating Fire Services: Arroyo Grande, Grover Beach and Oceano Community Services District Take a New Approach"). Incremental implementation mitigated concerns about the local loss of control and identity, which is a very real phenomenon that can threaten the success of a consolidation. Moreover, incremental steps allow time to build interpersonal relationships between the involved agencies. Mutual trust is critically important, especially in the public safety context where one employee may have to save another’s life. If possible, consider forming a finite joint training and/or mutual aid agreement first and evaluate the outcome before moving on to mergers of equipment, finances and employees. Involve stakeholders at the outset. The very idea of merging or contracting for services can trigger strong opposition from the public and employees. Consider including all the stakeholders at the outset to garner support, especially the most vocal critics. Many preliminary discussions will concern operational issues (not necessarily subject to the negotiation obligations discussed later), such as: Will staffing levels remain the same? Will employees remain on their beats? What will be the chain of command? Whose uniform and badges will be worn? How will employees be trained on new equipment? Will police officers be able to keep their firearms?
If potential opponents believe that their opinions were considered early in the process, the journey toward regionalization can be much smoother. Bargaining Over the Decision and Its Impacts Regionalizing public safety services raises numerous legal issues. Collective bargaining over the terms and conditions of employment, if required in your jurisdiction, can be one of the most complicated components of regionalization. For example, in California under the Meyers-Milias-Brown Act, if reducing labor costs is the motivation for consolidating services, the decision is subject to negotiation with the affected labor union(s) before making a final decision.1 After deciding to consolidate, an agency must still bargain with represented employees over the impacts of the decision. The ultimate resolution of these bargaining processes is sometimes articulated in a collective bargaining agreement called a transitional memorandum of understanding (MOU) that, depending upon the bargaining unit and the complexity of the merger, can range from a relatively short document to a very involved one. The issues to be negotiated may include: Employment transition date. There should be a designated date and time when the agency lays off represented employees and the successor employer hires them. The document should specify that all of the former agency’s obligations will cease and that employees will no longer have any rights or privileges with the former employer other than those enumerated in the transitional MOU. Salary and rank placement. An employee’s salary and rank at the original and successor agencies will usually be comparable. If the original agency pays higher wages, reducing wages may be part of the savings produced by outsourcing. It is also possible to hold wages constant for transitioning employees at the higher level until the successor agency’s wage scale increases, a practice called “Y” rating. Benefits. 
The transition of pensions, medical benefits and retiree health benefits will depend upon the agreement of the parties involved. Differences in benefits between employers can be controversial. Compromises to bridge differences can be achieved using creative solutions like purchasing supplemental retiree health benefits, contributing to enhanced medical benefits or supplementing pension benefits. Accruals. Vacation and compensatory time-off accruals are an employee’s property and can be paid out immediately, converted into a savings account and/or used to buy accrual banks with the new employer. The conversion of sick leave accruals, if any, will depend upon the policies in place and the agreement of the parties involved. Background investigations. These may be required for police officers transitioning from one agency to another, prior to the date of transfer.2 Conducting portions of the background investigation after the officer is hired by the new agency may allow the officer the right to review adverse comments in the normally confidential background investigation.3 Before allowing the prospective employer to view an officer’s personnel records as part of the background investigation, the officer should sign a confidentiality waiver. Bargaining unit. Employees will typically transfer to the bargaining unit representing the comparable employees at the new agency. Probationary status. Depending on the agreement, employees transferring to the new agency may either be required to serve a probationary period or given “for cause” permanent status. Reasonable accommodations. A disability accommodation that is reasonable for one employer may not be reasonable for another. Employees should execute separate medical information waivers before the new employer obtains them. A disability retirement may be required if the employee cannot pass the physical or psychological exam. It is helpful to have an agreement between employers about who will bear that cost. Reinstatement. 
Employees may be guaranteed reinstatement with their former employer if the consolidation fails or the contract with the new agency ends without renewal. Conclusion Agencies searching for an alternative to reducing public safety staffing should carefully consider regionalization. With proper planning, discussion with all stakeholders and the sacrifice of personal interests in favor of the greater good, regionalizing public safety responsibilities can benefit everyone involved.

Footnotes:
1. Cal. Gov. Code §§ 3500, et seq.; Rialto Police Benefit Assn. v. City of Rialto (2007) 155 Cal.App.4th 1295.
2. See e.g. Cal. Gov. Code § 1031(d); Pitts v. City of Sacramento (2006) 138 Cal.App.4th 853, 856, fn. 4.
3. County of Riverside v. Superior Court (2002) 27 Cal.4th 793.
https://www.westerncity.com/article/what-you-need-know-about-regionalizing-public-safety-responsibilities
The cabin’s purpose is comfort and relaxation - comfy sofas, pool table, spacious accommodations, and a direct connection with the outdoors. In the late evening or early morning, go outside on the dock to fish, or sit and watch the traffic on the Arroyo. Great place to de-stress. *Due to the nature of the area, there are frequent internet and power outages.

This newly renovated home is steps away from the water and affords spectacular views of the Laguna Madre and the Bird Sanctuary. The home is complete with a full kitchen, two bedrooms with queen and full beds, two full bathrooms and plush comfortable couches to relax in after a day on the water. All the amenities of home allow you to get away and relax without worrying about accommodations. You’ll love the peace and quiet. Couples, families, and big groups welcomed. FISHING CHARTERS AVAILABLE

Over 150 ft. of sea wall on over an acre of private land. Features a 2-bedroom, 1-bath cottage that accommodates 4 occupants comfortably. There is 1 queen-size bed and twin-size bunk beds in the adjacent room. Includes a dining table with seating for 4 and a kitchenette with a full-size stove/oven and refrigerator. Pots, pans, and dinnerware are stocked in the cabinet for your fresh-catch-of-the-day dinner plans. Don’t forget your fishing rods to enjoy a guaranteed catch fresh out of the calm waters.
https://www.airbnb.ru/laguna-madre-tx/stays
Fanboy & Chum Chum is an American CGI-animated television series produced by Frederator Studios. It is based on a short from Frederator's Random! Cartoons called "Fanboy". The series was created by Eric Robles and directed by Brian Sheesley and Jim Schumann. It premiered on November 6, 2009 on Nickelodeon after the SpongeBob SquarePants special "Truth or Square"; a sneak peek of the show was shown on October 12, 2009. The series follows Fanboy and Chum Chum, two energetic "super fans" of science fiction and fantasy. They wear superhero costumes with the underwear on the outside, and their world is full of comic adventures and misadventures. The series premiere drew 5.8 million viewers, and the second episode was watched by 5.4 million viewers.

Characters

Main article: List of Fanboy and Chum Chum characters

Animated short

The animated short "Fanboy" aired as part of Frederator Studios' Random! Cartoons, and led to the creation of the animated series.

Worldwide release

| Country | Network | Premiere Date |
| --- | --- | --- |
| USA | Nickelodeon / Nick Too | October 12, 2009 (sneak peek); November 6, 2009 (series premiere) |
| USA | NickToons | October 23, 2009 |
| Latin America | Nickelodeon | April 10, 2010 |
| Brazil | Nickelodeon | April 4, 2010 |
| Canada | YTV | September 1, 2010 |
| | Nickelodeon | November 2, 2009 |
| Netherlands | Nickelodeon | April 5, 2010 |
| UK / Ireland | Nickelodeon | February 16, 2010 (sneak preview); April 2, 2010 (series premiere) |

Reception

Critical reception was generally mixed, while audience and fan reception was extremely negative. Aaron H. Bynum of Animation Insider called Fanboy & Chum Chum "a fun show that deserves a good look. The quality animation helps counterbalance the immense amount of dialogue from the series' chatty characters, and the sheer comedy of marginally competent comic-loving kids helps outweigh what might otherwise be a binge of geeky annoyance. But overall, Fanboy and Chum Chum is a lot of fun."
Variety praised the series' "bright, energetic look and even an appealing premise in theory". David Hinckley of NY Daily News gave the series three stars out of five, and said that "it's good, but might not be the next SpongeBob". A negative review came from KJ Dell'Antonia of Slate, who found the main characters irritating and thought the whole concept was unoriginal, with "many tired jokes and not enough of that kind of mild satire to make this play in our house".

Ratings

The series premiered on November 6, 2009, after the SpongeBob SquarePants special Truth or Square. The broadcast ranked number three of cable programs that week and number two of the night. The premiere was watched by a total of 5.8 million viewers. The second episode was broadcast on November 7, 2009 and garnered 5.4 million viewers, ranking fifth of all cable broadcasts that week. The third episode was broadcast a week later, on November 14, 2009, with 3.8 million viewers. A broadcast on November 28, 2009 was viewed by 3.9 million viewers. In February 2010, the episode "Moppy Dearest" was viewed by 4.27 million viewers, an improvement over the last few episodes.

Awards

| Award | Category | Nominee | Result |
| --- | --- | --- | --- |
| 2010 Daytime Emmy Awards | Outstanding Directing in an Animated Program | Jim Schumann, Brian Sheesley, Ginny McSwain | Nominated |
| 2010 Daytime Emmy Awards | Outstanding Individual Achievement in Animation | Caesar Martinez (for "Chimp Chomp Chumps") | Nominated |
| 2010 Daytime Emmy Awards | Outstanding Individual Achievement in Animation | Steve Lambe (for "The Janitor Strikes Back") | Nominated |
| 2010 Daytime Emmy Awards | Outstanding Achievement in Main Title and Graphic Design | | Nominated |

The topic of this page has a wiki of its own: Fanboy & Chum Chum Wiki.
https://nickelodeon.fandom.com/wiki/Fanboy_%26_Chum_Chum
Abstract This paper explores present understanding of the possible impacts that volcanic eruptions in Iceland might have had upon the environments and traditional farming systems of the Highlands and Islands of Scotland, before ‘the Clearances’ of the late 18th and 19th centuries AD. It reconstructs both the nature of the impacts and the character of the risks that might have been faced by subsistence communities within the historical period from such Icelandic volcanic eruptions, and as such serves to redirect a research emphasis that has previously been principally focused upon the European Bronze Age. The study also emphasizes that it is inadequate to envisage the impacts from volcanic aerosols as threats to the community to be understood solely along a continuum of environmental hazards. For example, in the historical period before the Clearances, the wider social, political and economic contexts of subsistence economies affected can be shown to have raised or lowered the thresholds at which environmental risks became real or were turned into subsistence crises. In times past, as now, the capacity of people to cope with such environmental vicissitudes would have varied according to a complex of pre-disposing factors, their recent experiences, attitudes and perceptions, political and social relationships, health and well-being (especially their susceptibility to respiratory problems), economy, education and memory, and general inventiveness and resilience. Unlike much earlier research, which focused upon the European Late Bronze Age and emphasized global climatic change and its regional-scale consequences, this account of more recent times emphasizes the small scale, the importance of local pre-disposition and contingency, and hence the likely patchiness and indeterminacy of consequences on the ground of distant volcanic eruptions.
The paper concludes that in the historical past, for a variety of environmental, agricultural, social and political reasons, some communities in the Highlands and Islands would have already been typically at risk of a subsistence crisis one year in every four or five. Hence a particular group of people could have been at notable further risk if a significant quantity of volcanically derived noxious and toxic materials had fallen upon them. As a result, for both habitats and human populations in historical times, the consequences of an Icelandic volcanic eruption are likely to have varied from place to place and from time to time. This analysis also suggests that it is difficult to envisage that any postulated region-wide abandonment of settlement in the British Isles might be attributable, directly or indirectly, to the distal impacts of volcanic eruptions in Iceland.
https://pubs.geoscienceworld.org/georef/search-results?f_Authors=R.+A.+Dodgshon
At the same time, objects may absorb radiation and get hotter that way. Thermal radiation therefore provides a mechanism for exchanging heat between objects. Figure 1 shows the calculated power spectral density as a function of wavelength for different temperatures. Figure 2 shows the total thermal power as a function of temperature. If objects appear white (reflective in the visual spectrum), they are not necessarily equally reflective (and thus non-emissive) in the thermal infrared; e.g., most household radiators are painted white despite the fact that they have to be good thermal radiators. Acrylic and urethane based white paints have 93% blackbody radiation efficiency at room temperature (meaning the term "black body" does not always correspond to the visually perceived color of an object). Materials that do not follow the "black color = high emissivity/absorptivity" rule will most likely have a spectrally dependent emissivity/absorptivity. The common household incandescent light bulb has a spectrum overlapping the black body spectra of the sun and the earth.
A portion of the photons emitted by a tungsten light bulb filament at 3000 K are in the visible spectrum. However, most of the energy is associated with photons of longer wavelengths; these do not help a person see, but still transfer heat to the environment, as can be deduced empirically by observing a household incandescent light bulb. Whenever EM radiation is emitted and then absorbed, heat is transferred. This principle is used in microwave ovens, laser cutting, and RF hair removal. Thermal radiation is one of the principal mechanisms of heat transfer. It entails the emission of a spectrum of electromagnetic radiation due to an object's temperature; the other mechanisms are convection and conduction. The power radiated by a surface is characterized by the following equation:

P = εσAT⁴

where the constant of proportionality σ is the Stefan–Boltzmann constant, A is the radiating surface area, T is the absolute temperature and ε is the emissivity of the surface. If a radiation-emitting object meets the physical characteristics of a black body in thermodynamic equilibrium, the radiation is called blackbody radiation. Planck's law describes the spectrum of blackbody radiation, which depends only on the object's temperature. Wien's displacement law determines the most likely frequency of the emitted radiation, and the Stefan–Boltzmann law gives the radiant intensity.
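Planck's law and Wien's displacement law mentioned above can be evaluated numerically. The following Python sketch (the function names and sample temperatures are illustrative, not taken from the original text) computes the spectral radiance of a blackbody and the wavelength of peak emission:

```python
import math

# Physical constants (SI units, CODATA values)
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def spectral_radiance(wavelength_m, T):
    """Planck's law: spectral radiance B_lambda in W / (m^2 * sr * m)."""
    x = h * c / (wavelength_m * kB * T)
    return (2.0 * h * c**2 / wavelength_m**5) / math.expm1(x)

def wien_peak_wavelength(T):
    """Wien's displacement law: wavelength of peak emission, in metres."""
    b = 2.897771955e-3   # Wien displacement constant, m*K
    return b / T

# The Sun (~5800 K) peaks in the visible; a 300 K object peaks in the far IR.
print(wien_peak_wavelength(5800) * 1e9, "nm")   # ~500 nm
print(wien_peak_wavelength(300) * 1e6, "um")    # ~9.7 um
```

This illustrates the point made above: a room-temperature object radiates almost entirely in the thermal infrared, far from the visible band.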
As the photon energy of visible light is well above the thermal energy at room temperature, thermal emission is usually not an issue in optics, including photodetection. This is different in radio technology, for example, where sensitive detectors can easily register thermal radiation. For obtaining a higher usable thermal power in the near infrared, one could in principle have an absorbing coating at one fiber end, which is kept at a high temperature, and use the other fiber end without a coating as the output. However, the applicable temperature would be limited with that technical approach. Instead, one can use the emission of a high-temperature filament of a light bulb, imaging the filament onto the fiber end. If the image of the hot filament completely covers the whole fiber core and the imaging optics are lossless, one can obtain the same thermal power in the fiber as calculated with the equation above.
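The claim that visible-light photon energies far exceed the thermal energy at room temperature is easy to check numerically. A minimal Python sketch (the 550 nm wavelength and 300 K temperature are illustrative choices):

```python
import math

# Physical constants (SI units)
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

# Energy of a green (550 nm) photon vs. thermal energy kT at 300 K
E_photon = h * c / 550e-9    # ~3.6e-19 J
E_thermal = kB * 300         # ~4.1e-21 J
ratio = E_photon / E_thermal # ~87

# Boltzmann factor exp(-h*nu / kT): the statistical weight for thermally
# emitting such a photon at room temperature -- vanishingly small.
suppression = math.exp(-ratio)
print(ratio, suppression)
```

With a ratio near 90, the Boltzmann factor is of order 10⁻³⁸, which is why thermal emission is negligible at visible wavelengths but significant at radio frequencies, where the photon energy is far below kT.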
Note that the temperature of the filament is the limiting factor. A more powerful lamp without a hotter filament would not help; with that, one could not produce a higher intensity of thermal radiation at the fiber end. In other words, the radiance of a thermal source is fundamentally limited by its temperature. Thermal radiation is emitted by all matter with a temperature above absolute zero and is transferred in the form of electromagnetic waves. The thermal radiation of an object depends on its emissivity. By definition, a blackbody in thermal equilibrium has an emissivity of ε = 1.0: it is an idealized physical body that absorbs all incident electromagnetic radiation. The main exception to the rule that visually dark surfaces are strong thermal emitters is shiny metal surfaces, which have low emissivities both in the visible wavelengths and in the far infrared. Such surfaces can be used to reduce heat transfer in both directions; an example of this is the multi-layer insulation used to insulate spacecraft. The spatial coherence of light with thermal origin is also normally small, because it arises from uncoordinated (incoherent) emission from a substantial area. Higher spatial coherence can be obtained by using thermal emission from a small emitter, but it will usually be hard to achieve a spatial coherence comparable to that of a laser. For the original configuration with the black coatings on the fiber ends, we do not have modes, but rather thermal radiation which is smoothly spread over a large range of optical frequencies.
However, the power spectral density of that radiation (in units of W/Hz) equals the average value for the fiber resonator and can thus be calculated as P(ν) = hν / (exp(hν / kBT) − 1).

By integrating Planck's formula over all optical frequencies and spatial directions, one can derive the Stefan–Boltzmann law for the total emitted optical power of a blackbody per unit area: P/A = σT⁴.
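Both the Stefan–Boltzmann relation and its inversion are easy to evaluate numerically. The sketch below (a minimal illustration, with temperatures chosen for the example) computes the radiant exitance σT⁴ and, by inverting the same relation, the equilibrium temperature of a plate that re-radiates an absorbed irradiance G, which reproduces the 393 K sunlit-plate example discussed further down:

```python
SIGMA = 5.6704e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_exitance(T):
    """Total emitted power per unit area (W/m^2) of a blackbody at T kelvin."""
    return SIGMA * T**4

def gray_plate_equilibrium(G):
    """Equilibrium temperature (K) of a plate absorbing irradiance G (W/m^2)
    and losing heat only by re-radiation (sigma * T**4 = G); valid when the
    emissivity is the same at solar and thermal wavelengths."""
    return (G / SIGMA) ** 0.25

print(blackbody_exitance(300))   # ~459 W/m^2 for a room-temperature surface
print(blackbody_exitance(6000))  # ~7.3e7 W/m^2, roughly the solar photosphere
print(round(gray_plate_equilibrium(1350.0)))  # 393 K for 1350 W/m^2 of sunlight
```

Note that the equilibrium temperature is independent of the emissivity as long as the emissivity is the same for absorption and emission; a selective surface breaks exactly that symmetry.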
The situation with thermal radiation in free space is somewhat more complicated, since there is a large number of modes involved, and the mode density depends on the optical frequency. Integrating the corresponding equation over all frequencies, the power output given by the Stefan–Boltzmann law is obtained: P = σAT⁴.

Calculation of radiative heat transfer between groups of objects, including a 'cavity' or 'surroundings', requires solution of a set of simultaneous equations using the radiosity method. In these calculations, the geometrical configuration of the problem is distilled to a set of numbers called view factors, which give the proportion of radiation leaving any given surface that hits another specific surface. These calculations are important in the fields of solar thermal energy, boiler and furnace design, and raytraced computer graphics.
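A minimal sketch of such an exchange calculation, for the simplest geometry where the radiosity equations reduce to a closed form: two large parallel gray plates whose mutual view factor is 1. The temperatures and emissivities below are arbitrary illustrative values, not taken from the article.

```python
# Net radiative heat flux between two large parallel gray plates
# (mutual view factor 1), a standard closed-form radiosity result:
# q = sigma * (T1**4 - T2**4) / (1/eps1 + 1/eps2 - 1)

SIGMA = 5.6704e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def parallel_plate_flux(T1, T2, eps1, eps2):
    """Net heat flux (W/m^2) from plate 1 to plate 2."""
    return SIGMA * (T1**4 - T2**4) / (1.0 / eps1 + 1.0 / eps2 - 1.0)

# Example: 500 K and 300 K plates, both with emissivity 0.8.
q = parallel_plate_flux(500.0, 300.0, 0.8, 0.8)
print(round(q, 1))  # roughly 2.06 kW/m^2

# Sanity check: for two blackbodies (eps = 1) the denominator is 1,
# recovering q = sigma * (T1**4 - T2**4).
```

Lowering either emissivity reduces the exchanged flux, which is the principle behind the multi-layer insulation and low-emissivity coatings mentioned elsewhere in this article.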
For any surface, the spectral components of absorption, reflection and transmission add up to one: α + ρ + τ = 1. Here, α represents the spectral absorption component, ρ the spectral reflection component and τ the spectral transmission component. These elements are a function of the wavelength (λ) of the electromagnetic radiation. The spectral absorption is equal to the emissivity ε; this relation is known as Kirchhoff's law of thermal radiation. An object is called a black body if, for all frequencies, the following formula applies: α = ε = 1.

Unlike conductive and convective forms of heat transfer, thermal radiation can be concentrated in a tiny spot by using reflecting mirrors. Concentrating solar power takes advantage of this fact. In many such systems, mirrors are employed to concentrate sunlight into a smaller area. In lieu of mirrors, Fresnel lenses can also be used to concentrate heat flux. Either method can be used to quickly vaporize water into steam using sunlight.
For example, the sunlight reflected from mirrors heats the PS10 Solar Power Plant, and during the day it can heat water to 285 °C (545 °F).

As a theoretical example, we can take a single-mode fiber, for simplicity assuming that it is lossless and single-mode at any wavelength. Here, we actually have two modes per propagation direction, corresponding to different polarization directions, but in the following we consider radiation only for one of those directions. Also, we assume that both ends of some length of such a fiber are fully covered with black coatings, and that the whole setup is kept at a certain absolute temperature T. In thermal equilibrium, some amount of thermal radiation will travel in the fiber in both directions.

Thermal radiation is electromagnetic radiation generated by the thermal motion of charged particles in matter. All matter with a temperature greater than absolute zero emits thermal radiation.

Lighter colors and also whites and metallic substances absorb less illuminating light, and thus heat up less; but otherwise color makes little difference as regards heat transfer between an object at everyday temperatures and its surroundings, since the dominant emitted wavelengths are nowhere near the visible spectrum but rather in the far infrared. Emissivities at those wavelengths have little to do with visual emissivities (visible colors); in the far infrared, most objects have high emissivities. Thus, except in sunlight, the color of clothing makes little difference as regards warmth; likewise, the paint color of houses makes little difference to warmth except when the painted part is sunlit.
The photon energy hν corresponds to the mode frequency. For optical frequencies in the near infrared, for example, and temperatures around room temperature, the photon energy is well above the thermal energy kBT, so that the average energy in the mode is far below the photon energy. There is then sometimes one thermal photon in the mode, but at most times no photon at all. In the contrary situation of very low optical frequencies (in the far infrared) or high temperatures, where the thermal energy kBT is well above the photon energy, the mean energy would be approximately kBT – now independent of the optical frequency. That is actually the value which would be expected based on classical physics, ignoring the quantum nature of light.

Any object emits some amount of electromagnetic radiation of thermal nature, which is called thermal radiation or sometimes heat radiation. This means that some of the thermal energy is converted into electromagnetic radiation energy. Only at absolute zero temperature, which can never be exactly reached, would thermal radiation vanish.

Before Planck, the Rayleigh–Jeans law had been developed based on classical physics, i.e., not taking into account quantum effects. This is what one approximately obtains from Planck's law in the regime hν ≪ kBT, where the second term in the equation for Le,Ω,ν is approximately kBT.
The divergence for infinite optical frequencies (“ultraviolet catastrophe”) was an obvious concern, and was fixed by the introduction of Planck's law. The Wien approximation corresponds to the opposite limit of Planck's law (hν ≫ kBT).
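The two regimes just described can be checked numerically from the Bose–Einstein mean energy per mode, ⟨E⟩ = hν / (exp(hν/kBT) − 1). A minimal sketch; the two example frequencies are illustrative values, not taken from the article:

```python
import math

H = 6.62607015e-34  # Planck constant, J s
KB = 1.380649e-23   # Boltzmann constant, J/K

def mean_mode_energy(nu, T):
    """Mean thermal energy (J) in a single mode of frequency nu at temperature T,
    from Bose-Einstein statistics."""
    return H * nu / math.expm1(H * nu / (KB * T))

T = 300.0  # room temperature

# Rayleigh-Jeans regime (h*nu << kB*T): mean energy approaches kB*T.
nu_low = 1e9  # 1 GHz, a radio frequency
print(mean_mode_energy(nu_low, T) / (KB * T))  # close to 1

# Wien regime (h*nu >> kB*T): the mean energy is exponentially suppressed,
# i.e. far below one photon energy per mode.
nu_high = 5e14  # ~600 nm, visible light
print(mean_mode_energy(nu_high, T) / (H * nu_high))  # vanishingly small
```

Using `math.expm1` rather than `exp(x) - 1` keeps the low-frequency limit numerically accurate, where the exponent is tiny.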
A selective surface can be used when energy is being extracted from the sun. For instance, when a greenhouse is made, most of the roof and walls are made out of glass. Glass is transparent in the visible (approximately 0.4 µm < λ < 0.8 µm) and near-infrared wavelengths, but opaque to mid- to far-wavelength infrared (approximately λ > 3 µm). Therefore, glass lets in radiation in the visible range, allowing us to see through it, but does not let out the radiation that is emitted from objects at or close to room temperature. This traps what we feel as heat. This is known as the greenhouse effect and can be observed by getting into a car that has been sitting in the sun.

Selective surfaces can also be used on solar collectors. We can find out how much a selective surface coating helps by looking at the equilibrium temperature of a plate that is being heated through solar radiation. If the plate is receiving a solar irradiation of 1350 W/m² (the minimum is 1325 W/m² on July 4 and the maximum is 1418 W/m² on January 3) from the sun, the temperature of the plate at which the radiation leaving equals the radiation received is 393 K (248 °F). If the plate has a selective surface with an emissivity of 0.9 and a cut-off wavelength of 2.0 µm, the equilibrium temperature is approximately 1250 K (1790 °F). These calculations neglect convective heat transfer and the solar irradiation absorbed in the clouds/atmosphere for simplicity; however, the theory is still the same for an actual problem.

If we have a surface, such as a glass window, from which we would like to reduce heat transfer, a clear reflective film with a low-emissivity coating can be placed on the interior of the wall. “Low-emittance (low-E) coatings are microscopically thin, virtually invisible, metal or metallic oxide layers deposited on a window or skylight glazing surface primarily to reduce the U-factor by suppressing radiative heat flow”.
By adding this coating we are limiting the amount of radiation that leaves the window, thus increasing the amount of heat that is retained inside.

Here, σ ≈ 5.6704 · 10⁻⁸ W m⁻² K⁻⁴ is the Stefan–Boltzmann constant. Note that here the power is proportional to T⁴, while it is proportional to T² in the case of emission into a single mode. This is because the number of available free-space modes (in three dimensions) scales with the square of the energy, while the mode density in the single-mode case is constant.

The distribution of power that a black body emits with varying frequency is described by Planck's law. At any given temperature, there is a frequency fmax at which the power emitted is a maximum. Wien's displacement law, and the fact that the frequency of light is inversely proportional to its wavelength in vacuum, mean that the peak frequency fmax is proportional to the absolute temperature T of the black body. The photosphere of the Sun, at a temperature of approximately 6000 K, emits radiation principally in the (human-)visible portion of the electromagnetic spectrum.
Earth's atmosphere is partly transparent to visible light, and the light reaching the surface is absorbed or reflected. Earth's surface emits the absorbed radiation, approximating the behavior of a black body at 300 K with spectral peak at fmax. At these lower frequencies, the atmosphere is largely opaque and radiation from Earth's surface is absorbed or scattered by the atmosphere. Though some radiation escapes into space, most is absorbed and subsequently re-emitted by atmospheric gases. It is this spectral selectivity of the atmosphere that is responsible for the planetary greenhouse effect, contributing to global warming and climate change in general.

Sunlight is thermal radiation generated by the hot plasma of the Sun. The Earth also emits thermal radiation, but at a much lower intensity and with a different spectral distribution (infrared rather than visible) because it is cooler. The Earth's absorption of solar radiation, followed by its outgoing thermal radiation, are the two most important processes that determine the temperature and climate of the Earth.

For surfaces which are not black bodies, one has to consider the (generally frequency-dependent) emissivity factor ε(ν). This factor has to be multiplied with the radiation spectrum formula before integration.
If it is taken as a constant, the resulting formula for the power output can be written in a way that contains ε as a factor: P/A = εσT⁴.

In engineering, thermal radiation is considered one of the fundamental methods of heat transfer, although a physicist would likely consider energy transfer through thermal radiation a case of one system performing work on another via electromagnetic radiation, and say that heat is a transfer of energy that does no work. The difference is strictly one of nomenclature.

For calculating that amount of power, we can imagine that we suddenly replace the black coatings with highly reflecting coatings, so that we now have an optical resonator, but still with the same amount of thermal radiation inside the fiber which has previously been captured. (That amount of light will also not change thereafter; the system remains in thermal equilibrium.) Each mode of the fiber resonator will on average have an energy which is determined by Bose–Einstein statistics: ⟨E⟩ = hν / (exp(hν / kBT) − 1).

In a practical situation and room-temperature setting, humans lose considerable energy due to thermal radiation. However, the energy lost by emitting infrared light is partially regained by absorbing the heat flow due to conduction from surrounding objects, with the remainder compensated by heat generated through metabolism. Human skin has an emissivity very close to 1.0. Using the Stefan–Boltzmann law, one finds that a human with roughly 2 square meters of surface area and a temperature of about 307 K continuously radiates approximately 1000 watts.
However, if people are indoors, surrounded by surfaces at 296 K, they receive back about 900 watts from the walls, ceiling, and other surroundings, so the net loss is only about 100 watts. These heat transfer estimates are highly dependent on extrinsic variables, such as the wearing of clothes, which decreases the total thermal circuit conductivity and therefore reduces the total output heat flux. Only truly gray systems (with equivalent relative emissivity/absorptivity and no directional transmissivity dependence in all control-volume bodies considered) can achieve reasonable steady-state heat flux estimates through the Stefan–Boltzmann law. Encountering this "ideally calculable" situation is virtually impossible, although common engineering procedures surrender the dependency on these unknown variables and "assume" this to be the case. Optimistically, these "gray" approximations will get you close to real solutions, as most divergence from Stefan–Boltzmann solutions is very small (especially in most STP lab-controlled environments).

If the fiber were lossless and single-mode for any optical frequency, the result could be integrated over all frequencies to obtain the total thermal power P = π²(kBT)²/(6h) per propagation direction and polarization.

Low-emissivity windows in houses are a more complicated technology, since they must have low emissivity at thermal wavelengths while remaining transparent to visible light.
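The body-heat figures above follow from applying the Stefan–Boltzmann law twice: once at skin temperature for emission and once at the temperature of the surroundings for absorption. A sketch with the article's round numbers, taking the emissivity as 1; with these inputs the net loss comes out near 140 W, the same order of magnitude as the article's ~100 W estimate:

```python
SIGMA = 5.6704e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(area, T, emissivity=1.0):
    """Total power (W) radiated by a surface of the given area (m^2) at T kelvin."""
    return emissivity * SIGMA * area * T**4

area = 2.0      # m^2, rough human surface area
T_skin = 307.0  # K
T_room = 296.0  # K, temperature of the surrounding surfaces

emitted = radiated_power(area, T_skin)
received = radiated_power(area, T_room)
print(round(emitted))             # ~1007 W emitted
print(round(emitted - received))  # ~137 W net radiative loss
```

The strong T⁴ dependence is why a modest 11 K difference between skin and walls already removes over 100 W.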
The optical frequency with maximum power spectral density according to Planck's law is proportional to the temperature. Note that a similar equation can be derived for the spectral density in terms of wavelengths (Wien's displacement law), and the maximum of that function does not correspond to the mentioned optical frequency, since the conversion from frequency to wavelength intervals involves another wavelength-dependent factor.

If an object is not a blackbody, one could in principle multiply the result with the emissivity. This, however, works only if the emissivity is frequency-independent – which it usually is not.

Thermal radiation is the emission of electromagnetic waves from all matter that has a temperature greater than absolute zero. It represents a conversion of thermal energy into electromagnetic energy. Thermal energy results in kinetic energy in the random movements of atoms and molecules in matter. All matter with a temperature is, by definition, composed of particles which have kinetic energy and which interact with each other. These atoms and molecules are composed of charged particles, i.e., protons and electrons, and kinetic interactions among matter particles result in charge acceleration and dipole oscillation.
This results in the electrodynamic generation of coupled electric and magnetic fields, resulting in the emission of photons, radiating energy away from the body through its surface boundary. Electromagnetic radiation, or light, does not require the presence of matter to propagate, and travels in the vacuum of space infinitely far if unobstructed.
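The earlier remark that the frequency-domain and wavelength-domain peaks of Planck's law do not coincide can be verified numerically using the two standard Wien displacement constants, νmax/T ≈ 5.879 · 10¹⁰ Hz/K and λmax·T ≈ 2.898 · 10⁻³ m·K (a minimal sketch; the 6000 K example temperature matches the solar photosphere figure quoted above):

```python
C = 2.99792458e8         # speed of light in vacuum, m/s
NU_MAX_PER_T = 5.879e10  # Wien displacement constant for frequency, Hz/K
LAMBDA_MAX_T = 2.898e-3  # Wien displacement constant for wavelength, m K

T = 6000.0  # K, roughly the solar photosphere

nu_peak = NU_MAX_PER_T * T   # peak of the spectrum per unit frequency
lam_peak = LAMBDA_MAX_T / T  # peak of the spectrum per unit wavelength

# The wavelength corresponding to the frequency peak is NOT the
# wavelength peak; they differ by a fixed factor of about 1.76.
print(C / nu_peak)  # ~850 nm (near infrared)
print(lam_peak)     # ~483 nm (visible)
print((C / nu_peak) / lam_peak)  # ~1.76, independent of T
```

The temperature-independent ratio of about 1.76 is exactly the wavelength-dependent conversion factor between frequency and wavelength intervals mentioned in the text.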
The characteristics of thermal radiation depend on various properties of the surface it is emanating from, including its temperature and its spectral absorptivity and spectral emissive power, as expressed by Kirchhoff's law. The radiation is not monochromatic, i.e. it does not consist of just a single frequency, but comprises a continuous dispersion of photon energies, its characteristic spectrum. If the radiating body and its surface are in thermodynamic equilibrium and the surface has perfect absorptivity at all wavelengths, it is characterized as a black body. A black body is also a perfect emitter. The radiation of such perfect emitters is called black-body radiation. The ratio of any body's emission relative to that of a black body is the body's emissivity, so that a black body has an emissivity of unity.

The mode frequency spacing of the fiber resonator is Δν = c / (2 neff L), where neff is the effective refractive index of the fiber and L the fiber length. The mean optical power circulating in one mode of the resonator is the mean energy divided by the round-trip time 2 neff L / c, which is the inverse mode frequency spacing.
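Combining the pieces above, the total one-way thermal power in such an ideal lossless single-mode fiber (counting a single polarization, as assumed earlier) is the integral of the per-mode power spectral density hν / (exp(hν/kBT) − 1) over all frequencies; since the dimensionless integral of x/(eˣ − 1) from 0 to infinity equals π²/6, this gives P = π²(kBT)²/(6h). A numeric sketch of that integration:

```python
import math

H = 6.62607015e-34  # Planck constant, J s
KB = 1.380649e-23   # Boltzmann constant, J/K

def fiber_thermal_power(T, steps=200000, x_max=50.0):
    """One-way thermal power (W) in an ideal lossless single-mode fiber
    (one polarization) at temperature T, by midpoint-rule integration of
    the power spectral density h*nu / (exp(h*nu/(kB*T)) - 1) over frequency.
    Substituting x = h*nu/(kB*T) gives P = (kB*T)**2/H * integral of x/(e^x - 1)."""
    dx = x_max / steps
    integral = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        integral += x / math.expm1(x) * dx
    return (KB * T) ** 2 / H * integral

P = fiber_thermal_power(300.0)
print(P)  # ~4.3e-8 W, i.e. tens of nanowatts at room temperature

# Closed form for comparison (the integral equals pi**2 / 6):
P_closed = math.pi**2 * (KB * 300.0) ** 2 / (6 * H)
```

The T² scaling of this result, in contrast to the T⁴ of the free-space Stefan–Boltzmann law, reflects the constant mode density of the single-mode case noted earlier.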
‘Extreme’ exoplanet found orbiting hot blue star

Taking a second glance in space can be rewarding, especially when a closer look reveals a weird exoplanet. An ultra-hot exoplanet with searing temperatures that can turn iron from a solid to a gas has been detected orbiting a star 322 light-years away in the Libra constellation. The observation was made by the European Space Agency’s Cheops mission, or Characterising Exoplanet Satellite. This celestial body is the first finding of the mission, which launched in December 2019. Although the planet was first detected in 2018, Cheops has provided more details about this strange world. Cheops has a unique mission in that it follows up on stars known to host exoplanets and characterizes any planets around them.

The planet, called WASP-189 b, is considered to be one of the hottest and most extreme exoplanets ever discovered. Exoplanets are planets outside of our solar system. WASP-189 b is what scientists call an “ultra-hot Jupiter.” While the exoplanet is a gas giant, similar to Jupiter in our solar system, it’s much hotter because it orbits very close to its host star. This planet completes an orbit around its star every 2.7 Earth days. What’s more, the planet is 20 times closer to its star than Earth is to the sun. Meanwhile, the star is larger than our sun and more than 2,000 degrees hotter. This causes the star to look blue, rather than yellow or white.

“Only a handful of planets are known to exist around stars this hot, and this system is by far the brightest,” said Monika Lendl, senior researcher at the University of Geneva and lead author of the new study, in a statement. “WASP-189 b is also the brightest hot Jupiter that we can observe as it passes in front of or behind its star, making the whole system really intriguing.”

The quest for light

Cheops observes nearby stars to search for exoplanets around them.
It can measure changes in light as planets orbit their stars with exceptional precision, which can help reveal details about the planets. Researchers were able to capture an occultation and a transit of the planet and the star using Cheops. During an occultation, the planet passes behind the star. A transit occurs when the planet passes in front of the star. These indirect methods of detecting the planet help the researchers learn more about it, since they can’t actually observe the planet itself.

“As the planet is so bright, there is actually a noticeable dip in the light we see coming from the system as it briefly slips out of view,” Lendl said. “We used this to measure the planet’s brightness and constrain its temperature to a scorching 3200 degrees C (5792 degrees Fahrenheit).” At this temperature, metals vaporize, so you wouldn’t want to live on WASP-189 b. During the transit, the researchers were also able to determine that the planet was 1.6 times the radius of Jupiter.

Observations of this system revealed aspects of the star as well. The star, called HD 133112, is one of the hottest stars known to host a planet. “We also saw that the star itself is interesting — it’s not perfectly round, but larger and cooler at its equator than at the poles, making the poles of the star appear brighter,” Lendl said. “It’s spinning around so fast that it’s being pulled outwards at its equator! Adding to this asymmetry is the fact that WASP-189 b’s orbit is inclined; it doesn’t travel around the equator, but passes close to the star’s poles.”

Planets typically have these tilted orbits when they form far away from the star and are later pushed closer to it. This migration can happen if multiple planets exist in the system and push each other around with their gravitational influence. Or it could be due to the forces of another star. So it’s likely that the planet experienced one of these scenarios.
This planet is tidally locked with its star, meaning one side always faces the star and the other side of the planet always faces away. “They have a permanent day side, which is always exposed to the light of the star, and, accordingly, a permanent night side,” Lendl said. “This object is one of the most extreme planets we know so far.”

More exoplanets on the horizon

“This first result from Cheops is hugely exciting: it is early definitive evidence that the mission is living up to its promise in terms of precision and performance,” said Kate Isaak, Cheops project scientist at ESA, in a statement. Cheops is the ESA’s first mission dedicated to characterizing known exoplanets. The mission is a collaboration between ESA and Switzerland, and the mission’s Science Operations Center is operated from the University of Geneva’s observatory.

“Cheops has a unique ‘follow-up’ role to play in studying such exoplanets,” Isaak said. “It will search for transits of planets that have been discovered from the ground, and, where possible, will more precisely measure the sizes of planets already known to transit their host stars. By tracking exoplanets on their orbits with Cheops, we can make a first-step characterisation of their atmospheres and determine the presence and properties of any clouds present.”

Over the next few years, Cheops will detect and characterize exoplanets and could identify targets for future missions that could study their atmospheres. “Cheops will not only deepen our understanding of exoplanets, but also that of our own planet, Solar System, and the wider cosmic environment,” Isaak said.
One of the unfortunate consequences of political rhetoric is the purposeful brandmarking of the dialogue; controlling the language of the discourse may, as much as substance, determine the conclusions drawn by the target audience. Political strategists understand that the language used in public discourse can be leveraged to autonomically trigger appropriate evocative images and responses within the audience. Typically the strategists attempt to reduce complex concepts to ‘buzz words’ that effectively brand their position and elicit the preferred emotional response in the target audience. The message ‘feels right’ to the audience, regardless of the evidence, logic, intellectual examination, or facts for the position (‘truthiness’ as coined by the American television comedian Stephen Colbert). With respect to the vulnerability and security of digital information and networks, the language of the discourse within the European Union and the United States is clearly subject to different branding strategies and different standards of truthiness. In February 2013, the European Commission, in conjunction with the High Representative of the Union for Foreign Affairs and Security Policy, issued a cyber security strategy and the Commission proposed a new directive with respect to network and information security titled ‘An Open, Safe and Secure Cyberspace’. Based upon a series of five strategic priorities, the policy addresses cyber resilience, reducing cyber crime, developing cyber defence, mobilising industrial and technological resources, and establishing a coherent policy that promotes core EU values. 
The focus of the priorities appears to be on promoting individual rights by establishing a consistent network and security strategy across member states, creating a cooperative mechanism to share early warnings for risks and incidents, and encouraging the private providers of infrastructure and the basic enablers of digital information services (cloud computing providers, internet payment providers, social networks, etc.) to adopt best-practice risk management. The language of the policy clearly recognises threats to network and information security, but brands the discussion in terms of the needs and rights of the individual. Within a week of the Commission’s release of the proposed new directive, President Barack Obama issued an Executive Order (EO) entitled ‘Improving Critical Infrastructure Cybersecurity’ (after a stalemate in US legislative attempts in this area). As opposed to the Commission approach, the EO begins with alarmist language asserting that there have been “repeated cyber intrusions into critical infrastructure” and contends that cyber threats to critical infrastructure continue to be “the most serious national security challenges we must confront”. With that language, the US has branded the dialogue for cyber security in terms of “critical infrastructure” rather than individual rights and “a national security challenge” rather than an economic infrastructure issue. Pursuant to the EO, the Director of the National Institute of Standards and Technology is to produce a standard framework to reduce cyber risks for critical US infrastructure (the ‘Framework’) and, while private industry participation is largely voluntary, the Framework must include “guidance for measuring the performance of any affected entity” in implementing it (explicitly mandating very public disclosures of failures of private entities to adopt the preferred government positions established by the Framework).
The US approach vests an officer of the Department of Homeland Security with oversight of privacy and civil liberties related to implementation of the EO and suggests (in Section 7(c) of the Framework) that the Framework merely include provisions to “mitigate” negative impacts of the Framework on individual privacy and civil liberties. Interestingly enough, the EO comes only a few months after President Obama signed another executive order expanding the US military’s authority to carry out cyber attacks (terming them “defensive actions”), and after US Defense Secretary Leon Panetta (in rhetoric reminiscent of the “axis of evil” designation by President George W. Bush in his State of the Union Address on 29 January 2002 with respect to weapons of mass destruction in Iran, Iraq and North Korea) warned that three potential US adversaries (Russia, China and Iran) are systematically developing cyber threat capabilities. Certainly recent reports of cyber intrusions, especially intrusions under some level of control of, or financed by, foreign governments, are cause for concern and require a coordinated response to prevent the deterioration of digital commerce, digital confidence, and digital infrastructure supporting common needs across cultures. In February 2013, an American data company, Mandiant, reported tracking 141 cyber attacks performed by the same Chinese hacker group since 2006, 115 of them against US corporations, stating that the attacks were sponsored by the Chinese government and indicating that these cyber attacks were “just the tip of the iceberg … on average the US is subjected to at least 140 attacks per day”. Prior reports had indicated the involvement of the US in joint venture cyber attacks with Israel against Iran (using the Stuxnet virus to cripple Iran’s nuclear enrichment efforts). Such efforts have been labeled ‘sixth generation warfare’ when perpetrated by states or sub-state entities.
Against these public reports of directed attacks at the state or sub-state level, there are innumerable reported and unreported attacks every day from criminals and criminal enterprises trying to steal personal financial or identity information, as well as coordinated attempts to obtain proprietary and competitively significant digital information from private companies in what has been denominated ‘economic warfare’ (but which is merely traditional corporate espionage for digital information, rebranded). However, there is a clear distinction between state and sub-state sponsored cyber warfare and mere criminal activity. In the EU this distinction with respect to ‘cyber invasions’ is delineated, linguistically, in terms of cyber resilience and reducing cyber crime on one hand, and cyber defence on the other. While the results of the two cyber invasions may be a similar security breach, these cyber invasions are inherently of a different nature and mandate a different perspective and response. The new EO may blur this linguistic distinction in the US, with Homeland Security now being responsible for creating policies and scorecards for the cyber security of private industry regardless of the source of the threat. Maintenance of infrastructure, as opposed to the priority of individual rights or the source of the threat, appears to be the principal focus of the new US domestic policy. While the EU has branded cyber security in terms of maintaining individual rights and access to digital capabilities, the US has branded cyber security in terms of national defence (with all of the ancillary emotional responses ingrained in the US electorate since 9/11).

Kelly L. Frey is a member at Dickinson Wright PLLC. He can be contacted on +1 (615) 620 1730 or by email: [email protected].
https://www.financierworldwide.com/cyber-warfare-cyber-terrorism-and-cyber-crime
Creating a more resilient and sustainable future will require Europe to reimagine its economic model, according to a new European Environment Agency (EEA) report published this week. The report highlights opportunities for Europe to go further in creating an economy that can deliver prosperity and sustainability.

Creating a resilient economy within environmental limits

Reducing society’s dependence on economic growth will be a key part of this transition towards a more resilient and sustainable future, according to the EEA report ‘Reflecting on green growth’. The idea that European economies and societies need to develop within environmental limits is at the heart of EU policy. The EU’s flagship strategic roadmap, the European Green Deal, sets out an ambitious agenda for transforming Europe’s systems of production and consumption so that they can deliver continued economic growth while protecting ecosystems. The European Green Deal’s focus on promoting economic growth is easy to understand. Societies rely on growth to sustain employment levels, increase living standards and generate the tax revenues to finance the welfare state, public debt and the investments needed to achieve sustainability transitions.

Is unending economic growth possible?

Nevertheless, there are doubts about whether unending economic growth is possible, given nature’s finite capacity to provide resources and absorb pollution. Globally, economic activities are already causing extensive environmental damage, necessitating an unprecedented decoupling of economic growth from environmental pressures. Whether decoupling at this scale is achievable is not certain. In addition, Europe faces other downward pressures on economic growth in coming decades, ranging from population ageing to growing risks of pandemics and climate change impacts. These uncertainties do not mean that Europe should abandon its green growth approach.
The European Green Deal’s transformative agenda is essential, and it is important to make it the greatest possible success. But in building on the European Green Deal and promoting resilience, Europe should also seek to transform its economy in ways that enable it to secure society’s well-being even if GDP is contracting. Necessary actions include changes to fiscal systems, as well as more far-reaching measures to reorient economic activity at all scales, from local innovators up to multinational corporations. Rewiring of financial flows will be essential, as well as new knowledge systems that enable thinking and action at the pace and scale needed. The seeds for this transformation are already emerging in policy and practice, for example in the EU’s sustainable finance agenda. Europe needs to build on these foundations and take them much further and faster.
https://energyindemand.com/2021/12/02/new-eea-report-on-creating-an-economy-that-can-deliver-prosperity-and-sustainability/
NEW YORK, NY – Crohn’s disease and ulcerative colitis, also known as inflammatory bowel disease (IBD), affect millions of Americans and can significantly impair patient quality of life. Currently, there are no cures for these debilitating diseases and treatment often includes the use of medications to resolve symptoms (known as symptomatic remission) and inflammation in the gastrointestinal (GI) tract as seen during endoscopy (referred to as endoscopic remission). However, even patients who adhere to their medication regimen may continue to experience symptom flares or persistent inflammation. Moreover, many patients, despite experiencing no symptoms, may continue to have inflammation in their GI tract which causes bowel damage. Because of this, clinicians are utilizing a treatment approach called treat-to-target. Treat-to-target involves closely monitoring inflammation, and adjusting medications to heal the intestines more effectively, with the hope of helping patients achieve better long-term outcomes. The Patient-Centered Outcomes Research Institute (PCORI) recently issued a funding award for $6.4 million to study a treatment strategy, which may result in better longer-term patient outcomes, among patients enrolled in IBD Qorus™, the adult quality improvement initiative of the Crohn’s & Colitis Foundation. Through this clinical trial, Drs. Siddharth Singh from the University of California San Diego and Jason Hou from Baylor College of Medicine and the Michael E. DeBakey VA Medical Center in Houston, TX, leaders in IBD Qorus, will examine whether aggressively treating intestinal inflammation in patients who may not be experiencing any IBD symptoms is beneficial or may be harmful. Patients will be assigned either to continue their current treatment plan or, following a treat-to-target strategy, switch medication with the goal of healing the inflammation in their GI tract, in addition to controlling symptoms. 
“While clinical trials have shown benefit in resolving inflammation from a treat-to-target strategy, the pragmatic implementation of this approach and its impact on patient-centered outcomes and healthcare utilization is unknown,” said Dr. Singh. Conducting this research in real-world practices through the Foundation’s IBD Qorus program will provide new insights and learnings. “We are eager to begin as this research will provide much needed guidance to the IBD community and could significantly impact the long-term well-being for IBD patients everywhere,” said Dr. Hou. IBD Qorus is the first-ever adult IBD learning health system that is focused on improving health outcomes and how care is delivered to patients across a range of U.S. geographic locations, practice settings, and disease complexities. More than 55 GI practices across the nation are currently participating in this program, serving over 20,000 patients. This study was selected for PCORI funding through a highly competitive review process in which patients, clinicians and other stakeholders joined clinical scientists to evaluate the proposals. Applications were assessed for scientific merit, how well they will engage patients and other stakeholders, and their methodological rigor, among other criteria. “This study was selected for PCORI funding not only for its scientific merit and commitment to engaging patients and stakeholders in research, but also for its potential to fill an important evidence gap and give people information to help them better assess their care options,” said PCORI Executive Director Nakela L. Cook, MD, MPH. “We look forward to following the study’s progress and working with the Crohn’s & Colitis Foundation to share the results.” This funding award has been approved pending completion of a business and programmatic review by PCORI staff and issuance of a formal award contract. PCORI is an independent, nonprofit organization authorized by Congress in 2010.
Its mission is to fund research that will provide patients, their caregivers and clinicians with the evidence-based information needed to make better-informed healthcare decisions. For more information about PCORI’s funding, visit www.pcori.org.

Crohn's & Colitis Foundation

The Crohn's & Colitis Foundation is a non-profit, volunteer-fueled organization dedicated to finding the cures for Crohn's disease and ulcerative colitis, and to improving the quality of life of children and adults affected by these diseases. It was founded in 1967 by Irwin M. and Suzanne Rosenthal, William D. and Shelby Modell, and Henry D. Janowitz, M.D.
https://www.crohnscolitisfoundation.org/researchers-receive-funding-award-to-compare-targeted-treatment-approaches-to-improve-long-term
Television helps to develop many cultural norms that societies experience in everyday life. In the 1970s, viewers were introduced to a revolutionary kind of programming that became popular and was broadcast in most American homes. This type of television addressed civil issues such as racism, poverty, sexuality and sexism. These shows paved the way for the diverse programming one can view today. Family situational comedies introduced an interesting, unique and unbiased point of view, presenting the lives of families you could actually find in America: families that weren’t perfect and faced real struggles. Satire exposes and criticizes the errors of an individual or a society, using irony, exaggeration, or ridicule to expose stupidity or shortcomings. These comedies are important because they shed light on serious topics that would otherwise be too uncomfortable to talk about. Two television shows that exhibit these characteristics well are Larry David’s Curb Your Enthusiasm and Stephen Colbert’s The Colbert Report. Satire in television can entertain and inform by getting personal to connect with the audience, provoking meaningful thought, and making fun of the absurd.

Introduction

This case study analyses how the concept of family has changed over the past 20 years, depicting modern family forms alongside past norms. It is important to look at how families have developed up to the 21st century, comparing the two and elaborating on what makes the difference so significant. The study compares and contrasts the television series Modern Family, a 21st-century concept of family, with The Simpsons, which debuted 27 years ago, examining how family dynamics have changed and what is the norm now that was not the norm years ago.
Television situational comedies have the ability to represent different values or concerns of their audience; these values often change every decade or so to reflect and highlight the changes that the audience is experiencing within society at the time of production. Between the years of 1950 and 2010, the representation of gender roles and family structure has been addressed and featured in various sitcoms, such as “Father Knows Best” and “Modern Family”, through the use of narrative conventions and symbolic, audio and technical codes. These representations have transformed over time to reflect the changes in social, political, and historical contexts. The 1950s sitcom “Father Knows Best” traditionally represents the values of gender roles and family structure in a 1950s society, with the father held high as the breadwinner of the family and the mother as the sole homemaker.

Option 2: “Family Guy” Keniesha Lake SOCI 1010-C21 Murphy University of Memphis

There are many ways to show the world your ideas, and the main way people tend to go about it is using different forms of media. “Media is all the print, digital, and electronic means of communication” (OpenStax College 2015). The most used form of media is television. You can use television to find out the news, watch sports, and be entertained. The form of media I am using for this paper is the popular comedy show “Family Guy”.

Challenging Stereotypes: How “Modern” Is Modern Family?

The show won the Emmy Award for Outstanding Comedy Series in each of its first five years and the Emmy Award for Outstanding Supporting Actor in a Comedy Series four times. If you have never heard about “Modern Family,” you have never seen comedy. Modern Family is an American television show that portrays the ‘Modernism’ in families nowadays in America. In Barbara Ehrenreich’s The Worst Years of Our Lives, she highlights a significant infection festering in American culture: television as a main event, or the only event, in a day.
As she says, “you never see people watching tv”, and that happens because it truly isn’t entertaining. It substitutes for a life. The television has been pulling people into an illusion, a false reality and a seemingly boring life, since its introduction. She essentially illustrates the negative impact television has on today’s society. The show Family Guy portrays a middle-class family, which has a stay-at-home mother (Lois), a working father (Peter), two children in school (Meg and Chris), a baby (Stewie), and a pet dog (Brian). For a long period, a typical American family was regarded as a family structure that consisted of a man, his wife, and one or more biological or adopted children. By viewing the Griffin family from a psychological viewpoint, it is possible to demonstrate whether the Griffins are an accurate portrayal of the typical American family.

Evaluating the Typical American Family and The Griffins

Families in America have increasingly become more diverse, and more complex, compared to the “Leave It to Beaver” ideal of the perfect family.

I. Introduction

Parenthood, a drama television series, attends to the adversity of an extended and imperfect family. The Bravermans are a blended California family who face a series of both fortunate and unfortunate events but together find a way to get by (Katims, 2010). Television consumers have been introduced to many fictional families over time and continue to fall in love with family-related television shows. Historically, the media has transformed and continues to adapt to the changes in present-day family types. “Writers often take seeds from real life experiences and plant them in their scripts”; consumers both consciously and subconsciously attend to cues on television and want to apply what they see to their lives. This essay discusses how the family is viewed by two different sociological perspectives: functionalism and conflict theory. Firstly, ‘family’ is defined.
Secondly, the main ideas of functionalism will be discussed, followed by how this theory perceives the family. The main ideas of conflict theory will then be examined, along with how conflict theorists perceive the family. Despite the effort of Modern Family’s creators to portray a progressive view of American families, the show still accentuates outdated female stereotypes and gender roles, reinforcing gender characteristics, patriarchy and hegemonic masculinity. In contrast to its title, Modern Family promotes traditional gender roles and stereotypes of women, which result in the portrayal of an inaccurate image of the female and weaken the stance of women in today’s U.S. society. Gender stereotypes are prevalent throughout Modern Family; the women are all portrayed as wives and mothers, promoting a continued male-dominant family ideology. Throughout the show, Claire and Gloria act on our society’s “assumptions about women’s ‘appropriate’ roles” (Dow 19). I am also better able to see that deep down, the show produces positive messages about family, relationships, risk-taking, and self-discovery. In essence, the environment of Family Guy is existential, where characters have the ability to make extreme choices; this allows episodes …

The past decade has not seen any notable family sitcoms that have achieved such leaps of social justice as some did in the 1950s or 1970s. While that may be disappointing to some, this is also a great feat for all television audiences. So many issues that were once considered “taboo” can now be the premise of the sitcom altogether. Even the little things, like interracial couples, married partners in the same bed, and even mentioning a pregnant woman, are considered normal.
Yes, the family sitcom is still no direct comparison to the modern family arrangement, but it is as close as we’re going to get for now.

I feel that this class has changed my whole perception of what family work is: the importance of not getting caught up in the content and focusing instead on the process of identifying strengths that the family has, which can be used to perpetuate ongoing homeostasis. This course also highlighted for me how much more I still need to learn about supporting the family system. I have been working with families for about 10 years, mostly supporting positive parenting and also working with families who have children and youth experiencing mental health concerns. I feel that my process-oriented interactions have been effective for my gathering of information but not necessarily helpful for the long-term healthy coping of the family. By watching you, listening to your teachings and participating in and observing role plays, I feel that these experiences have led not only to practical knowledge but also to a new perspective on the importance of stepping back and trying to walk in the client’s shoes.
https://www.ipl.org/essay/Essay-On-Modern-Family-FKTYHC3RJ48R
The “prior art” drawing shows a brush applicator of a kind known from the prior art, intended for use as a cosmetic brush. Such a brush is composed of a bundle of hair or bristles that is long, more or less, and is secured in a holder element. A wide variety of cosmetic brushes are known from the prior art. They are used not only for applying powder, but often also for applying viscous cosmetics, i.e. ones that run the gamut from liquid to paste-like or gel-like, such as lip gloss. Depending on the intended use, such brush applicators have a densely packed number of fine, relatively long bristles. In the context of this description, the term “bristles” is understood to broadly refer to any fiber-like structure suitable for producing a brush. These bristles are very flexible in the region of their distal ends. But below approximately the last distal quarter of their length in the proximal direction, they rest against one another more and more. Both when the brush is new and when it is influenced by the cosmetic, which tends to make the bristles stick to one another, the bristles form a kind of “block” that is significantly more rigid than the individual, fine bristles in the region of their distal ends. This gives a brush with a set of long, fine bristles its typical application properties, namely a soft brush tip, but a set of bristles that is nevertheless not overly flexible. With prolonged use, even with careful selection of materials, a swelling of the bristle material can occur, which causes the brush as a whole to swell, thus negatively affecting its shape and application properties. Also, especially in brushes composed of long, fine bristles, it is almost inevitable that when the brush is reinserted through the narrow neck of the bottle or stripping device, individual bristles get caught on the sides and as a result, become permanently bent so that they stick out to the side afterward. 
Even if individual bristles do not buckle completely, in brushes composed of long, fine bristles there is always the risk that over time, a certain “umbrella effect,” namely a certain splaying of the set of bristles, will occur. Finally, brushes with a set of densely packed, relatively long, fine bristles are also not without problems because there is always a risk that in the region a certain distance from the distal ends of the fibers, bacteria will collect and multiply “on the inside,” so to speak, of the set of fibers constituting the brush. To remedy this problem, numerous suggestions have been made to replace the brush-like part with a “monolithic” body composed of a flexible plastic or elastomer material, whose outer contour has roughly the same outer contour as a brush. In such an approach, a plastic body with a smooth, intrinsically closed surface is first flocked to improve its product storage capacity. A “brush applicator” produced in this way does in fact keep its shape very well, but does not really have a satisfactory product storage capacity. Also, the tip of such a brush applicator is significantly harder than the tip of a brush applicator composed of a number of fine, relatively long bristles. Finally, the customary brush applicators are relatively expensive to manufacture. In light of this situation, the object of the invention is to create a brush applicator that is dimensionally stable over the long term, offers a good product storage capacity, and has a tip region that permits a precisely contoured application.
Many enterprises’ core activities and business models revolve around gathering and sharing user-related data, but there are often gaps around protecting user privacy and fostering trust—forcing them to take reactive steps to catch up with customers’ privacy expectations and comply with privacy regulations. ISACA’s new publication, Privacy by Design and Default: A Primer, gives organizations and professionals the strategies and techniques to take a proactive approach to building in privacy considerations. Privacy by design challenges conventional system thinking. It mandates that any system, process or infrastructure that uses personal data consider privacy throughout its development life cycle, identify possible risks to the rights and freedoms of the data subjects, and minimize them before they can cause actual damage. Among the privacy techniques and privacy design strategies shared in Privacy by Design and Default is a core set of eight privacy design strategy components, including:

- Minimize: The personal data processed should be restricted to the minimal amount necessary. For example, only requesting an individual’s birth year rather than the actual birth date should be sufficient for age-restricted services.
- Hide: Personal data and their interrelationships should be hidden from plain view. For example, the Payment Card Industry Data Security Standard (PCI DSS) requires that only the last four digits of a credit card number be printed on a receipt.
- Inform: Whenever data subjects use a system, they should be informed about which information is processed, for what purpose and by what means.

Privacy by Design and Default walks through not only the key concepts and foundational principles behind privacy by design, but also topics including cybersecurity and privacy risk, privacy engineering, and privacy protection in IT system design.
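To make the Minimize and Hide strategies concrete, here is a minimal Python sketch (a hypothetical illustration, not taken from the ISACA publication; the function names are the author's own) showing data reduced to a birth year and a card number masked to its last four digits:

```python
from datetime import date

def mask_pan(pan: str) -> str:
    """Hide: show only the last four digits of a card number,
    in the spirit of the PCI DSS receipt rule described above."""
    digits = pan.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

def minimal_age_record(birth_date: date) -> dict:
    """Minimize: keep only the birth year when that is sufficient
    for an age-restricted service."""
    return {"birth_year": birth_date.year}

print(mask_pan("4111 1111 1111 1234"))        # ************1234
print(minimal_age_record(date(1990, 5, 17)))  # {'birth_year': 1990}
```

The point of both helpers is the same: the sensitive value never leaves the function in full, so downstream code cannot accidentally log or display it.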
It also includes a timeline on key global privacy regulations—including the General Data Protection Regulation (GDPR) in Europe, Lei Geral de Protecao de Dados Pessoais in Brazil, and the Amended Act on the Protection of Personal Information in Japan—and their evolution. “The privacy by design approach ensures that data can continue to be used by enterprises in a way that respects data subject privacy,” says Safia Kazi, ISACA Privacy Professional Practices Associate. “When an enterprise understands how it collects, stores and uses data, this leads to increased confidence and trust in the data on which it bases strategic decisions—and that enhances trust between the enterprise and its customers.” ISACA is also offering a companion course on privacy by design. This course provides learners with an introduction to privacy by design along with interactive scenarios and knowledge checks to test understanding of privacy by design concepts. Those who participate in this virtual, self-paced course will gain a holistic understanding of privacy by design, including its foundational principles and technology that can support it. Privacy by Design and Default: A Primer is US $60 for members and $90 for nonmembers and is available in a digital format or in print at https://store.isaca.org/s/store#/store/browse/detail/a2S4w000004Ko9tEAC. The Privacy by Design and Default Online Course is US $49 for members and $79 for nonmembers, and is available at store.isaca.org/s/store#/store/browse/detail/a2S4w000004L1vrEAC. To discuss topics around privacy, visit ISACA’s online Privacy community on the Engage platform. Additional privacy resources, including the Certified Data Privacy Solutions Engineer (CDPSE) credential, are available here.

About ISACA

For more than 50 years, ISACA® (www.isaca.org) has advanced the best talent, expertise and learning in technology.
ISACA equips individuals with knowledge, credentials, education and community to progress their careers and transform their organizations, and enables enterprises to train and build quality teams. ISACA is a global professional association and learning organization that leverages the expertise of its more than 150,000 members who work in information security, governance, assurance, risk and privacy to drive innovation through technology. It has a presence in 188 countries, including more than 220 chapters worldwide. In 2020, ISACA launched One In Tech, a philanthropic foundation that supports IT education and career pathways for under-resourced, under-represented populations.
https://twebt.net/archives/13354
The following course learning outcomes are assessed by completing this assessment:

A1. Prepare a basic solution to a business problem;
A2. Select appropriate IT solutions for business functions;
A3. Apply business information software for data visualization and analysis purposes.
S1. Write basic programming logic;
S3. Interpret and construct representations of business data flow and processes;
K8. Outline the basic principles of programming.

Assessment Details

Resurrection* is a small second-hand book shop, run by a team of three staff members: Alexander, Bertha, and Cheryl. For this assignment, you will complete the following set of tasks using Excel, and build an ePortfolio page to describe your work.

Task 0 – Setting up

Create an ePortfolio page for your assignment. You will submit this page to Moodle as per the lab tasks. You may call it whatever you like, for example ITECH1100 Assignment 1 – 30349759.

Hours of operation

Most of the time, each team member works separate shifts: Alexander works 9:00 AM to 12:30 PM, Monday, Tuesday and Wednesday; Bertha works 9:00 AM to 12:30 PM, Thursday, Friday and Saturday; and Cheryl works 12:30 PM to 5:30 PM, Monday to Friday. The shop runs from 9:00 AM to 5:30 PM each day, except Saturday when it closes at 12:30 PM. It is closed entirely on Sundays.

Costs

Alexander and Bertha are the semi-retired co-owners of the bookshop, and do not take a salary. Cheryl, however, is a part-time employee, with total employment costs of $22 per hour. Cheryl is also entitled to four weeks of paid annual leave, during which a casual replacement is required at a cost of $33 per hour. Fixed costs such as rent and insurance are $2900 per year, and utilities costs are $140 per month.

Task 1 – Costs of operating the business

Using Excel, create a spreadsheet called operating_costs.xlsx that calculates the projected annual outgoing costs of running Resurrection.
Your spreadsheet should be configured such that the working hours, hourly rates, and fixed and utility costs can be varied easily. Document your findings in your ePortfolio page (approximately 100 words).

Obtaining Books

Throughout the day, customers come to Resurrection to buy books, and occasionally to sell them. When people bring in books to sell to Resurrection, the staff member on duty will review the books and make an offer for each one individually. Currently, the process for determining how much the book shop will pay for a book is entirely subjective: staff members offer an amount per book based on the quality of the book, how popular they think it is, and how they are feeling at the time. The sale amount is always simply double the amount paid and is set at the same time – if Resurrection pays $3.00 for a book, they will put it on sale with a marked price of $6.00. (Resurrection does not deal in rare or antique books.) For several months, staff have been keeping track of the date and time, quality, publication year and amount paid for each book in a spreadsheet. This spreadsheet is available for download on Moodle.

Task 2 – Sales team offers

Using Excel, process the history of purchases spreadsheet and use appropriate charts to visualize: how the prices paid differed for each staff member; and how the prices paid have changed over time. Describe your findings in your ePortfolio (approximately 250 words), and attach the Excel file to your page.

Process automation

Alexander and Bertha want to improve the consistency of how they pay for and price books. They would like to standardize on three standard price offers, and have designed the following process to determine how to allocate them, including the option of rejecting the book altogether. The staff have already agreed on how to determine whether a book is terrible, poor, or good.
Task 3 – Automation

Using Excel, create a spreadsheet page that automates the above process, allowing a member of the team to enter whether the book is hardcover, its publication year, and its condition, and receive a price to offer. Ensure that you include enough text and formatting to make your spreadsheet usable by a member of the Resurrection team, or by a University lecturer. Your spreadsheet should be configured such that the Low, Medium and High prices can be varied easily. Describe how you automated the process (approximately 150 words), and attach your Excel file to your ePortfolio.
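As a cross-check for Task 1, the annual-cost arithmetic can be sketched in a few lines of code (a sketch only — the assignment itself asks for Excel). It assumes Cheryl is paid for all 52 weeks, with the four leave weeks covered by the casual replacement as an additional cost; the brief does not state this outright, so treat it as one reasonable reading:

```python
# Sketch of Task 1: projected annual outgoing costs for Resurrection.
# Assumption (not explicit in the brief): Cheryl is paid for all 52 weeks,
# and the $33/hour casual replacement for her 4 leave weeks is an extra cost.

CHERYL_HOURS_PER_DAY = 5      # 12:30 PM to 5:30 PM
CHERYL_DAYS_PER_WEEK = 5      # Monday to Friday
CHERYL_RATE = 22              # total employment cost, $/hour
REPLACEMENT_RATE = 33         # casual replacement, $/hour
LEAVE_WEEKS = 4
FIXED_COSTS_PER_YEAR = 2900   # rent, insurance, etc.
UTILITIES_PER_MONTH = 140

weekly_hours = CHERYL_HOURS_PER_DAY * CHERYL_DAYS_PER_WEEK   # 25 hours/week
cheryl_annual = weekly_hours * 52 * CHERYL_RATE              # $28,600
replacement_annual = weekly_hours * LEAVE_WEEKS * REPLACEMENT_RATE  # $3,300
utilities_annual = UTILITIES_PER_MONTH * 12                  # $1,680

total_annual = (cheryl_annual + replacement_annual
                + FIXED_COSTS_PER_YEAR + utilities_annual)

print(f"Cheryl:      ${cheryl_annual}")
print(f"Replacement: ${replacement_annual}")
print(f"Fixed costs: ${FIXED_COSTS_PER_YEAR}")
print(f"Utilities:   ${utilities_annual}")
print(f"Total:       ${total_annual}")   # $36,480 under these assumptions
```

In the Excel version, each of the constants above should sit in its own labelled cell so that (per the brief) the working hours, hourly rates, and fixed and utility costs can be varied easily.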
https://www.makemyassignments.com/solution-library/itech1100-understanding-the-digital-revolution/
Summary of past findings:

Research Master Thesis

During my master's I studied social learning and independent exploration in Sumatran orangutans (Pongo abelii) and how infants and juveniles learn their diets. I measured the number of processing steps of different food items and found that the more complex the food items a mother was foraging on, the more attentively infants watched her while she ate. Rarer items also received more attention than items that occur more frequently in the diet.

PhD Thesis: Testing the Cultural Intelligence Hypothesis in Orangutans: Variation in Novelty Response, Exploration and Intelligence

Although humans have evolved specialized adaptations making us a culture-dependent species, during my PhD I probed the idea that the processes underlying cultural learning are more widely distributed and also affect cognitive abilities in nonhuman primates. I conducted multiple projects to test for the presence of culturally constructed intelligence in orangutans.

I) Wild and captive orangutans respond differently to new things

I have studied reactions to novel things in both wild and captive orangutans. My findings suggest that, contrary to common belief, orangutans are not naturally curious toward novelty and in fact avoid it. By systematically measuring reactions to the same novelty in captivity I demonstrated a Captivity Effect: in contrast to cautious wild individuals, captive orangutans showed a more curious response to new objects (Forss et al. 2015). Thus, even though innovations are the building blocks of what we define as orangutan culture, individuals do not boldly explore novel things and rarely invent under natural conditions. Consequently, the individual pathway, which assumes that innovations arise from intrinsic curiosity expressed through novelty response, seems not to apply to orangutans.
Instead, my work supports the notion that innovations arise because the acquisition of skills through social learning can occasionally result in modifications of existing innovations.

Picture descriptions – Left: Time to approach novel objects in wild and captive orangutans. Right: Number of days plotted in relation to distance to the platform presenting novel objects in wild orangutans.

In a theoretical review of the literature (Forss, Koski & van Schaik 2017), we found that social (cultural) species prefer attending to social information, if available, when confronted with novelty. In species with slow-paced life histories that depend on social learning to acquire their species-specific diet, using information provided by experts or role models may be the preferable strategy when encountering novelty. This implies that innovations are retained as a consequence of effective social learning ability. In long-lived species characterized by an extended developmental phase – like the orangutans – socially protected learning represents an adaptation through which risks can be avoided. Conversely, the very same species that have evolved to effectively learn skills from conspecifics can be made to express the opposite behavior – novelty-based curiosity – once the constraints of risk are removed. One such case is when animals are kept in artificial and risk-free habitats, like the enriched conditions of captivity provided by modern zoos. Captive animals experience different environments from those of conspecifics in natural habitats, which can affect their cognition – a process that can give rise to the captivity effect, the phenomenon that captive animals show greatly different psychological attitudes and cognitive abilities compared with their wild counterparts.
II) The outcome of human orientation in captive apes

The findings from my PhD also show that the effects of a life in captivity are more than a loss of neophobia – especially in great apes, who are adapted to learn from others and therefore can also attend to humans as additional role models. Accordingly, their intensified social environment boosts attentiveness, causing further cognitive changes. In a joint study with my colleague Dr. Laura Damerius, we showed that among orangutans, life experience with humans changes their attitudes toward novelty, which in turn resulted in better physical cognitive skills. We measured the way apes behave towards humans, and this Human Orientation Index (HOI) predicted their success in a novel problem-solving task (Damerius & Forss et al. 2016). Beyond improved performance, individuals with higher human orientation were also more curious and creative: they explored the task longer and used more variable exploratory actions while doing so. Thus, when considering primate cognition, the introduced HOI measurement describes a continuous process whose end point is enculturation. Hence, among captive apes cognitive skills may never be culture-free, due to the effects of individuals' specific experiences.

Picture description – A: These graphs show how the Human Orientation Index (HOI) predicts the duration of orangutans' exploration of the novel problem-solving apparatus and C) the variety of exploration actions they used. B) Both the exploration duration and D) exploration variety in turn predicted success in the problem-solving task.

Picture description – Summary of how Human Orientation (HOI) influences cognitive performance.

III) The Cultural Intelligence Hypothesis in Pongo

My research has also delivered the first empirical test of the Cultural Intelligence Hypothesis in a non-human taxon. In their natural habitats, Sumatran and Bornean orangutans differ systematically in the frequency of opportunities for social learning.
Sumatran populations show higher densities and are consistently more gregarious and socially tolerant. They also show much greater repertoires of learned skills and exploratory behaviour, along with greater cultural repertoires in general. This difference in socio-ecology has likely persisted over evolutionary time. Thus, under the homogeneous environmental conditions provided by zoos, I probed the idea that over an evolutionary time scale these socio-ecological distinctions have produced cognitive differences between the two Pongo species. To control for experience effects, I compared a sample containing only standard mother-reared individuals, with similar human orientation values, living in highly similar zoo environments. Here intrinsic differences in cognitive performance appeared: Sumatran orangutans outperformed the Borneans (Forss et al. 2016). My findings show that the species differ not only in physical problem-solving performance but also in underlying mechanisms such as inhibitory control and caution – traits that may well be under selection when selection favors efficient social transmission of skills, and that are thus crucial for cultural learning to take place.

Picture descriptions – Left: The differences in task success between Bornean and Sumatran orangutans in relation to task difficulty. Right: Proportion of orangutans that solved the different problem-solving tasks, compared between the two orangutan species.

Early Postdoc Mobility Fellowship

I continued the line of research started during my PhD but expanded the study system to other great apes: bonobos and chimpanzees. I then applied the same tests of curiosity and problem-solving and could confirm some interesting species differences.
Pongo and Pan differ in their responses to new food items

I measured food reactions and exploration behavior in over 130 ape subjects across four great ape species – Pongo abelii, Pongo pygmaeus, Pan troglodytes and Pan paniscus – and found that the species differ considerably in their responses. The solitary orangutan species showed less neophobic reactions than the more social Pan species. Chimpanzees were the most explorative of novel foods and were more likely to taste novel food when tested in their social groups. In the publication, we discuss these results in the light of the social information hypothesis (Forss et al. 2018), which states that species showing the paradoxical combination of strong neophobia and strong exploration tendency can rely on social information to select aspects of the environment worth exploring.

Picture description – Likelihood of apes of different species to try new food.

Picture descriptions – Left: Average latencies to consume novel foods by three age classes of each species of great ape. Immatures are slightly more careful in their approach latencies, suggesting that this age category normally relies on social information when attending to novel food sources. Right: Chimpanzee female Ikuru at Ngamba Island Chimpanzee Sanctuary.

Early rearing conditions influence problem-solving skills in chimpanzees

During my postdoc fellowship at the University of Tübingen I investigated how early rearing conditions affect physical cognition in over 60 chimpanzees through different physical problem-solving tasks: the detour reaching task, the visible honey trap task, a tube trap task and a reversal learning task. These tasks all measure different aspects of physical cognition. I collected these data on chimpanzees that varied greatly in their early social environmental conditions prior to two years of age.
My results show that these first years of life influence how well the chimpanzees performed in the cognitive tests (Forss et al., in revision). Considering the vast literature and debate on how the early environment influences human infants' cognitive development, our results on chimpanzees add important knowledge to the fields of developmental psychology and comparative cognition. Results coming public soon…

Picture descriptions – A) Detour reaching task measuring inhibitory control, B) Honey trap task measuring problem-solving with different types of tool use, C) Tube trap task measuring causal understanding and flexibility, and D) Reversal learning task measuring working memory, inhibitory control and flexibility.

Methods

I prefer to work with both wild and captive populations of the same species, both to address ecological validity and the evolutionary adaptation of cognitive traits in natural habitats, and to be able to perform controllable cognition experiments in captivity. To assess reactions to novelty in wild populations, I use motion-triggered video camera traps to avoid human influences, as habituation to humans has been confirmed to influence neophobia (Damerius & Forss et al. 2017).

Collaborative institutions

Inkawu Vervet Project & WATCH Sanctuary
Ngamba Island Chimpanzee Sanctuary
A great variety of zoological gardens:
Germany: Wolfgang Köhler Primatenzentrum, Max Planck Institute for Evolutionary Anthropology, Leipzig Zoo, Berlin Zoo, Frankfurt Zoo, Allwetterzoo Münster & Zoo Dortmund
Switzerland: Basel Zoo & Zoo Zürich
United Kingdom: Paignton Zoo, Blackpool Zoo, Twycross Zoo & Durrell Wildlife Park
The Netherlands: Apenheul Primate Park

Funding

Swiss National Science Foundation
Forschungskredit, University of Zurich
Wenner-Gren Foundation
Foundations in my homeland Finland:
https://www.sofiaforss.com/research/
Oxford particle physicists, working with colleagues at CERN’s Large Hadron Collider (LHC), have just released results of their search for some of the most sought-after particles in physics. The particles the team are seeking are relatives of the famous Higgs boson that are predicted by a theory known as Supersymmetry. Research Highlights for ATLAS Oxford Group Prof. Todd Huffman, Prof. Jeff Tseng, and project student Charles Jackson of the Oxford ATLAS Exotics Group have recently published a paper on a new method of tagging ultra-high energy B hadrons in jets. The paper featured on the cover of the Journal of Physics G: Nuclear and Particle Physics. First year Oxford graduate student Jesse Liu has just released a paper showing how the increase in LHC energy from 8 to 13 TeV has squeezed the permissible models of the theory of supersymmetry. Supersymmetric theories predict particles that could help explain the mysterious dark matter in our universe, and which can be produced at the LHC, so they are well worth pursuing. University of Oxford graduate students have led the first paper for Supersymmetry using the full 2015 data-set from the ATLAS experiment at the CERN Large Hadron Collider, with the whole Oxford SUSY group working together to complete it in record time. This is the first search for supersymmetry anywhere in the world to use data collected with the higher center-of-mass energy of 13 TeV. The increase in energy has allowed the analysis team to explore further than ever before, and they have put the tightest constraints on the existence of these new particles yet. Several Oxford-led results were presented at the LHC Experiments Committee (LHCC) open meeting at CERN on Wednesday, 2 December 2015. The LHCC regularly reviews the accelerator and the experiments. At the open presentations the experiments have the opportunity to present their latest and most important results.
Oxford physics graduate student Will Fawcett, working with an international team at CERN, is delighted to have completed the most comprehensive assessment to date of the impact of the Large Hadron Collider (LHC) on a leading theory of subatomic physics. Dr James Frost and Dr Koichi Nagai, both members of the Oxford ATLAS group, received the ATLAS Outstanding Achievement Awards. James received it for his outstanding contributions to the Data Preparation area, particularly for serving as Prompt Reconstruction Operations Coordinator (PROC) and Data Quality (DQ) convener during the long shutdown one (LS1) of the LHC. His work allowed cosmic data to be reconstructed and monitored, without which the detector couldn't have been commissioned for the 13 TeV start of the LHC last week. Oxford graduate students in the thick of the action as the LHC produces stable colliding beams at 13 TeV (CERN, Geneva, 3 June 2015): Two Oxford graduate students, Will Fawcett and Will Kalderon (see first Figure: Will K standing left, Will F seated center), found themselves at the heart of a jam-packed CERN control room this morning, helping run the ATLAS experiment as the Large Hadron Collider smashed together two beams of subatomic protons, to generate the world’s highest-energy collisions. The two Wills are among 7 Oxford graduate students on attachment to the European Laboratory for Particle Physics, working on the world’s biggest experiments. The first ever search for the supersymmetric partner of the charm quark, led by Oxford graduate student Will Kalderon, has been selected by the ATLAS experiment at CERN as one of its physics highlights of the first run of the LHC. The Oxford ATLAS group performed analysis central to the recently published W+jets and R-jets measurements. Graduate student Craig Sawyer worked with ATLAS collaborators to perform these precision Standard Model measurements, which extend such measurements to higher energies and regions than have ever been explored.
The Oxford ATLAS group played a decisive role in a newly released analysis searching for exotic particles decaying to two jets in ATLAS. The ATLAS dijet search analysis team was led by Oxford student Katherine Pachal, a member of the Oxford Exotics Group. University of Oxford graduate students have led the first two papers for Supersymmetry using the full data-set from the ATLAS experiment at the CERN Large Hadron Collider. Following the announcement of a discovery of a new boson on July 4th, 2012, focus turned to determining whether this new boson had the key properties of the predicted Higgs boson: production and decay rates to fermions and electroweak bosons determined by the particles' masses, and no intrinsic spin. The Oxford group played a leading role in the ATLAS measurement of the strange quark parton density in the proton. The ATLAS W- and Z-boson data indicate that the strange quark is not suppressed at low values of Bjorken x, the momentum that the quark which is struck in a collision takes as a fraction of the proton’s momentum. The Oxford ATLAS group played a key role in the first 13 TeV ATLAS search paper, which was recently released to the public. The analysis presented in the paper searches for new particles that decay into two back-to-back jets (dijets), using data collected by the ATLAS detector in 2015.
https://www2.physics.ox.ac.uk/research/atlas/highlights
Since the 1970s, The Searchers, directed by John Ford, has become one of the most discussed films of 1950s US cinema. A story of captivity and revenge set in post–Civil War Texas, The Searchers is now regarded as one of the best films ever made, although it received mixed reviews upon its original release. The film’s artistic reputation did not rise until the early 1970s, buoyed by auteur critics like Andrew Sarris and Peter Bogdanovich and by film school–trained directors like Martin Scorsese and Steven Spielberg, who paid homage to The Searchers in their own movies. An important trend in scholarship coalesced around the film’s depiction of fear of miscegenation, with literary antecedents illuminated by June Namias, Barbara Mortimer, and Richard Slotkin. A significant number of considerations of The Searchers focus on Ethan Edwards, the psychologically complex Indian hater played by John Wayne. Many film scholars address the film’s relationship to genre, with Edward Buscombe and Peter Cowie calling attention to the film’s debt to pre-cinematic visual representations of the frontier. Gaylyn Studlar and Hubert I. Cohen emphasize the film’s break from western conventions. Major biographies of John Ford by Scott Eyman, Joseph McBride, and Tag Gallagher provide insight into the film’s production history, as does Glenn Frankel. Analysis of The Searchers has been sustained by many academic scholars who are not film specialists, by literary critics such as Jane Tompkins; political scientists such as Robert Pippin; Native American studies scholars such as Tom Grayson Colonnese and Cristine Soliz; philosophers such as Richard A. Gilmore; feminist critics such as Susan Courtney; historians, including James F. Brooks; and classicists, such as Martin M. Winkler and James Clauss. 
In spite of the variety of methodological approaches applied, the literature on The Searchers often seems to follow the nonlinear trajectory of the film’s own narrative with a retreading of familiar terrain. Anthologies A useful starting point for study of the film, Eckstein and Lehman 2004 is the only anthology devoted exclusively to The Searchers. Studlar and Bernstein 2001 contains articles that each purposefully reference a number of Ford’s westerns, including The Searchers, in relation to a central issue. Kitses and Rickman 1998 is typical of the many anthologies on the western that include articles devoted to The Searchers among the discussion of a variety of films. Eckstein, Arthur M., and Peter Lehman, eds. The Searchers: Essays and Reflections on John Ford’s Classic Western. Detroit: Wayne State University Press, 2004. Addresses a wide range of aspects of the film through a number of reprinted and new essays. Although dominated by film scholars, the anthology also has many useful contributions that are interdisciplinary in cast. Kitses, Jim, and Gregg Rickman, eds. The Western Reader. New York: Limelight Editions, 1998. Especially useful for including classic essays on the western by Robert Warshow and André Bazin that appeared shortly before the release of The Searchers as well as three articles with extended discussions of the film in relation to representations of the home, Indians, and space. Studlar, Gaylyn, and Matthew Bernstein, eds. John Ford Made Westerns: Filming the Legend in the Sound Era. Bloomington: Indiana University Press, 2001. Articles by extensively published film scholars address topics such as capitalism, ethnicity and multiculturalism, femininity, music, narrative structure, Ford’s reputation, and significant influences on the sound-era westerns, such as painting and literature.
http://www.oxfordbibliographies.com/view/document/obo-9780199791286/obo-9780199791286-0143.xml
The Commission has been asked for an Advisory Opinion as to whether Administrative Law Judges and Hearing Officers who hear and decide contested cases for State and local executive branch agencies are subject to the Code of Judicial Conduct. The Constitution of the State of Georgia of 1983, Article VI, Section VII, Paragraph VI provides in part: The power to discipline, remove and cause involuntary retirement of Judges [our underscoring] shall be vested in the Judicial Qualifications Commission. Article VI, Section VII, Paragraph VII of the Constitution also refers to Judges and authorizes the Supreme Court to adopt rules of implementation. The Supreme Court, in its opinion in Judicial Qualifications Commission v. Lowenstein, 252 Ga. 432 at 433, stated: It follows that this Court possesses the authority to regulate the conduct of Judges – including conduct during judicial elections. The compliance section of the judicial code provides for its application to anyone who is “an officer of a judicial system performing judicial functions . . . .” This limitation does not exclude a Hearing Officer or an Administrative Law Judge if he is in fact performing the functions of a judge. In Opinion No. 66, this Commission held that Directors of the State Board of Workers' Compensation are subject to the Code of Judicial Conduct, and this opinion has been cited by the Court of Appeals in Delta Air Lines Inc. v. McDaniel, 176 Ga. App. 523 (1985). Also in Opinion No. 94, the Commission, in response to a question, advised a Hearing Officer retained part-time by the Georgia Public Service Commission that his conduct met the requirements of the judicial code, with the implication at least that he was subject to the Code. In Bentley v. Christian, 242 Ga.
348, 350 (1978), the Supreme Court defines an administrative agency as “a governmental authority, other than a Court or other than a legislative body, which affects the rights of private parties through either adjudication or rule making.” The Court continued “. . . they in addition, take on a judicial coloring in that frequently, within the exercise of their power, they are called to make factual determinations and thus adjudicate, and it is in that sense that they are also recurrently considered to be acting in a quasi-judicial capacity.” It is difficult at times to distinguish between judicial or quasi-judicial proceedings and executive or quasi-executive proceedings. In Mayor, etc. of Union Point, et al., v. Jones, et al., 88 Ga. App. 848 (1953), the Court of Appeals offered general guidelines. Judicial or quasi-judicial action implies the interpretation, application and enforcement of existing law relating to subsequent acts of persons amenable thereto, rather than the setting up of rights and inhibitions; and embraces the determination of rights and interests of the adverse parties who are entitled, before adjudication, to notice and hearing, and the opportunity to present evidence under judicial forms. Administrative action, as defined in this State, includes not only merely ministerial actions, but many decisions by responsible public officers involving judgment and discretion; and such officers, in arriving at decisions, may be free to investigate and determine proper methods and procedures, although their final decision is ex parte in nature, as distinguished from decisions based upon evidence, which parties at interest have an absolute right to present and insist upon. The Supreme Court of Georgia states, in distinguishing between an administrative and judicial act . . . 
is that a quasi-judicial action, contrary to an administrative function, is one in which all parties are as a matter of right entitled to notice and to a hearing, with the opportunity afforded to present evidence under judicial forms of procedure; and that no one deprived of such rights is bound by the action taken. (Southview Cemetery Association v. Hailey, Chairman, et al., 199 Ga. 478, 481(1945)). In the opinion of this Commission, any Administrative Law Judge or Hearing Officer who presides in a judicial or quasi-judicial proceeding, irrespective of what he or she is called and whether or not he or she is presiding for an executive branch agency, is a judge within the meaning of the Constitutional provisions and the Rules of the Supreme Court governing this Commission, and is therefore subject to the Code of Judicial Conduct.
https://gajqc.gov/advisory-opinions/opinion-122/
PwC suffered the latest blow to its reputation late on Thursday as Ukraine’s central bank pulled its bank auditing rights in the country for failing to identify alleged improprieties that led to a $5.5bn balance-sheet hole at PrivatBank, the country’s largest lender, according to The Financial Times (FT). The big four accounting group, which denies fault, had audited PrivatBank from the mid-1990s until it was nationalised late last year after government claims of massive related-party lending, the FT said. Following months of media speculation about a decision, the National Bank of Ukraine in a statement said it had "removed" PwC's domestic subsidiary "from the register of accounting firms authorised to audit banks". "The rationale behind this decision was PwC Audit LLC's verification of misrepresented financial information in the financial statements of [PrivatBank,]" the central bank said. "In particular, this concerns information on credit exposure and regulatory capital reported by the bank. The audit report issued by PwC Audit LLC failed to highlight the credit risk exposure faced by PrivatBank, which led to the bank being declared insolvent and nationalised, with substantial recapitalisation costs borne by the state." The decision marks the latest reputational blow to big four accountancy firms and further darkens the clouds hanging over PwC, in particular. It faces a probe by the UK accountancy watchdog over its audit of BT's Italian division. Describing the group as "very disappointed," PwC said: "We will examine all options for reversing this decision. We remain committed to the Ukrainian market." In comments to the Financial Times this month, Ukraine's central bank made an explicit warning of "adequate measures" it planned to take in the "nearest future" while describing itself as "deeply disappointed by the quality of audit done by PwC for PrivatBank."
In those comments, Ukraine's central bank said it was moving cautiously to ensure a "strong legal position, which may be challenged by PwC". PwC maintained that the group had "performed its audit of PrivatBank's 2015 financial statements in accordance with international auditing standards", including "a qualification in respect to related-party transactions". "We have asked the NBU repeatedly to provide us with specific concerns it has about the quality of our audit . . . We have also asked for any detailed evidence it has that shows that those financial statements were incorrect. Up to now, the NBU has not provided any specifics," it said.
https://www.unian.info/economics/2040761-ft-ukraine-pulls-pwc-bank-audit-licence-over-55bn-hole-at-top-lender.html
In a pretrial hearing, a few months before the Pinyon Pines murders trial was to start, a series of guidelines was being established about how the trial would be conducted. Over a two-plus-day hearing, a pattern developed: the defense could either concur with the prosecution, or dissent and get shot down. Should the defense venture to propose their own motion, it would simply get shot down. Watching this pattern repeat itself on issues of ever-larger importance revealed another pattern: how the judge appeared to think out loud, so as to put his reasoning on the record. His reasoning was either short and rigid, or long and meandering. When his reasoning granted value to a defense position, he would continue his reasoning until it didn't. He might make several stops along the way, his eyes searching up and to the left, then to the right, searching for a reason to finally discredit the defense motion. Then he would state the found reason, elaborate for a moment, then affirm the denial. This happened again and again. To observers sympathetic to the defense, the emotional whiplash of seeing a budding sense of hope get quickly crushed became a familiar and repetitive sensation. Over time, there grew a gnawing sense that this judge was going to side with the prosecution reliably. Towards the end of the second day, a very significant motion was raised: whether to allow a discussion of so-called '3rd party culpability'. Simply put, if a certain alternate murder scenario is plausible, and rises to a certain level of evidentiary credibility, then it should be open for discussion in front of the jury. A certain murder scenario was compellingly presented by attorney John Patrick Dolan, backed up by reports in the body of investigative work developed by Riverside County Sheriff's investigators. The evidence contained in this scenario was mostly circumstantial, just like that of the core case that was the focus of the hearing. In fact,
the parallels of evidence were striking: matching shoes to footprints, cell phone records (or lack thereof), personal relationships with the victims, shocking witness testimonials, etc. Unlike the core case, there was even a potential motive. In fact, there were some additional elements that would make any jury sit up and take notice. Add to this the family connections the subjects had with a local politician and the district attorney's office in general, and suddenly the subjects of this alternate scenario take on a more significant value. Suspicious minds might start to watch for inappropriate protectionism. Observers were in disbelief. Even a two-bit TV comic detective could spot the reason: the subject could have been starting a cover-up. If so, it seemed to work superbly. So the jury was denied an extremely critical piece of information, one that would affect their 'reasonable doubt' calculation. In the absence of a credible alternative, the jury is left with a limited choice: either the prosecution's case or nothing at all. A 'not guilty' verdict would likely mean that this case would go unsolved. The families of the victims would never see 'justice'. With the introduction of an alternative scenario, the jury's decision is substantially different: story A or story B? Given an option to evaluate the competing bodies of evidence for each scenario, a much more critically reasoning mindset would be sure to emerge from the jurors. With two equally compelling scenarios, could a juror be certain of one over the other beyond reasonable doubt? If not, the defendants would deserve their freedom. The judge's ruling on the pretrial motion was to deny the defense any opportunity to present this compelling case to the jury. Hands were tied. Not a word was to be spoken. The effect of the ruling was dramatic. The subject of that 3rd party scenario was also a key witness in the trial. Should he have been denied an element of credibility?
Did he have reason to change his testimony? Should the defendant be made to appear deceptive because his statements conflicted with his 3rd party counterpart?

But, Surely, Someone Has to Pay!

The idea that a possibly-culpable defendant could be getting away with murder strikes at the core of our personal sense of justice. It takes a civilized mind to control such an impulse. It can be uncomfortable to defer to Blackstone's ratio: that ten guilty should go free before one innocent should suffer. But it should be more uncomfortable to think that a society would want vengeance irrespective of actual culpability. Yet, that is precisely what they got.
http://www.pinyonpinesmurders.com/wp/denying-potential-3rd-party-culpability/
In contrast to the previously described methods, in Computer-Assisted Learning (CAL) the teacher can use computers at different times and places according to the characteristics of the subject matter, the individual students, and the available software and hardware. Computer programs can be used for practice, revision, one-to-one instruction, problem solving, or simulations (Demirel, 1996). In many studies, CAL has been shown to have some benefits, although there are also cases where none were observed. With CAL, there is a form of one-to-one instruction (or two individuals together at each computer), plus the opportunity for individuals to proceed at their own pace, repeating parts of the exercise as they wish. None of these features is easily available in a didactic classroom learning situation. In addition, there is added variety and, perhaps, novelty in CAL, along with the potential to use vivid and animated graphics, enabling three-dimensional aspects and other features to be viewed more realistically. Of course, not all computer programs have these features, but the potential is certainly there. For understanding to occur, individuals need time to handle new information, to think through ideas and to revisit difficult areas. All of this may be reflected in the features of many computer programs. However, computers lack the human dimension and the ability to provoke thought by spontaneous questions and answers. A good teacher can respond to the way a class is reacting to a lesson by the skillful use of such spontaneous questions and answers. This flexibility is not easy to develop in a computer program, and the style of presentation will depend on the ingenuity of the program developer and his/her own understanding of the solar electrification system.
In a study conducted to find out the effects of the computer on attitudes, motivation and learning, and the possible advantages of computer-assisted test programs (Jackson, 1988), secondary school students were divided into control and experimental groups. The assessment of the experimental group was done using computers, whereas that of the control group was done through a written test. The statistical evaluations displayed a higher achievement rate for the experimental group that received the computer-assisted test. Levine and Donitsa-Schmidt (1996) compared traditional learning strategies with computer-based activities. Applications and the assessment were administered after the students were divided into control and experimental groups. The results of the evaluations showed that the experimental group was more successful at answering the questions of the Physics Engineering Achievement Test than the control group. In another study, Demircioğlu and Geban (1996) compared CAL with the traditional teaching method on 6th grade students in science classes. The students of the experimental group were taught with CAL in addition to the traditional teaching method. The students of the control group were taught through problem solving. The topics were static electricity, electrical transmission, electrical wires and Ohm's law. The science achievement rates of the two groups were compared through a t-test, and the group taught through CAL was found to be more successful. System analysis is the process of studying processes and procedures, generally referred to as systems investigation, to see how they operate and whether improvement is needed. This may involve examining data movement and storage, machines and technology used in the system, programs that control the machines, and the people providing inputs, doing the processing and receiving the outputs.
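The group comparisons described in these studies rest on a two-sample t-test. As a minimal sketch, the following computes Welch's t statistic (the unequal-variance variant) for two groups; the achievement scores below are invented for illustration and are not data from the cited studies.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

# Hypothetical achievement scores (illustrative only)
cal_group = [78, 85, 90, 72, 88, 95, 81, 79]
control   = [65, 70, 74, 68, 72, 60, 77, 66]

t = welch_t(cal_group, control)
print(round(t, 2))  # a large positive t suggests the CAL group scored higher
```

In practice one would also compute the degrees of freedom and a p-value (e.g., with `scipy.stats.ttest_ind(..., equal_var=False)`), but the sign and magnitude of t already convey the direction and strength of the group difference.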
The proposed computer program for teaching solar electrification of street lights is intended to be designed as an audio-visual package comprising: i. A descriptive note on solar electrification and street lighting. ii. Practical uses of solar power in street lighting. iii. Features and effects that make the program both attractive and user-friendly. iv. Sound and video clips.
https://projectstube.com/preview?code=DIP-464234508
- Teach courses pertaining to recreation, leisure, and fitness studies, including exercise physiology and facilities management. Includes both teachers primarily engaged in teaching and those who do a combination of teaching and research.
Tasks
- Plan, evaluate, and revise curricula, course content, course materials, and methods of instruction.
Detailed Work Activities
- Advise educators on curricula, instructional methods, or policies.
- Develop instructional materials.
- Develop instructional objectives.
- Order instructional or library materials or equipment.
Learn more on the Recreation and Fitness Studies Teachers, Postsecondary occupation report.
https://www.onetonline.org/find/score/25-1193.00?s=Fitness%20instructor
Shoppers are being warned to return a batch of supermarket houmous which could be contaminated with salmonella. In total, 17 products are being recalled by the manufacturer Zorba Delicacies Limited, which produces a number of supermarket own-brand houmous ranges. It's feared that products with these specific use-by dates could pose a health risk. The Food Standards Agency issued the warning on Tuesday, telling consumers: "If you have bought any of the above products do not eat them. Instead, return them to the store from where they were bought for a full refund."
Industrial equipment is one of the most important and expensive assets of an enterprise -- the pillars that support the organization's ability to provide its products or services. Likewise, the purchasing and maintenance of machines, tools and other equipment represent some of the most capital-intensive investments for modern industrial organizations. That's why organizations invest in implementing asset management and lifecycle assessment processes, which aim at ensuring the proper operation of the equipment and its effective maintenance, optimizing OEE (Overall Equipment Effectiveness) and maximizing the ROI (Return on Investment) on their equipment. It's a never-ending balancing act between the profitability these assets enable and the trade-offs associated with the Total Cost of Ownership (TCO) of the equipment, which depends on the quality of its operation, its end-of-life, as well as on its purchasing and maintenance costs. In this two-part series, we'll discuss the alternatives to rip-and-replace, and follow up with decision-making criteria for Asset Management professionals. Sooner or later, enterprises are confronted with an important dilemma regarding the future of their equipment: should they replace, maintain, or repair and upgrade existing equipment? What's better from a cost perspective, and which option boosts their competitiveness the most? Such dilemmas are more frequent than ever before, given the rise of the fourth industrial revolution (Industry 4.0). Industry 4.0 is largely based on the collection and analysis of large amounts of digital data as a means of driving timely and accurate automation and control. In most cases, this requires machines with digital capabilities (e.g., interfaces for digital data collection and deployment of historian databases), which most legacy machines lack.
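OEE, mentioned above, is conventionally computed as the product of three factors: availability, performance, and quality. A minimal sketch (the shift figures below are assumed for illustration, not taken from the article):

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness: the product of the three standard
    OEE factors, each expressed as a fraction in [0, 1]."""
    return availability * performance * quality

# Illustrative shift numbers (assumed): 7.5 h of uptime in an 8 h shift,
# running at 90% of the ideal cycle rate, with 2% scrap.
a = 7.5 / 8.0   # availability
p = 0.90        # performance
q = 0.98        # quality
print(f"OEE = {oee(a, p, q):.1%}")  # prints "OEE = 82.7%"
```

Because the three factors multiply, a modest loss in each compounds quickly -- which is why refurbishment or upgrade decisions are often evaluated against their projected effect on each factor separately.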
Therefore, the need to deploy digitally enhanced machines is one of the main drivers of the industrial equipment market, as plant operators consider repairing, upgrading, or replacing older machines with next-generation, state-of-the-art, digitally enabled machines. At the same time, the ability to collect and process digital data from machines can provide manufacturers with additional insights about whether and when to replace machines and tools, in order to optimize OEE. Purchasing new equipment to replace old machines can be an extremely expensive decision, especially when this concerns many assets and/or very expensive assets. This leads to the question: could refurbishment be a viable option? Refurbishment refers to restoring the equipment to its original condition (i.e., to the state the equipment was in when originally manufactured). In some cases, the outcome of refurbishment may even be better than the original condition. The refurbishment process involves reassembling and replacing components, including mechanical and electrical components, but also other auxiliary components and functionalities (e.g., utilities and tools offered to the operator). It may in some cases involve adding new capabilities, such as Internet connectivity. While it's impossible to make the equipment exactly equivalent to the original, the goal of the process is to end up with assets that are as close to the original as possible. This is why refurbishment is sometimes called "equipment re-manufacturing". Nowadays, refurbished machines can also be enhanced with Cyber-Physical Systems (CPS) capabilities based on the deployment of sensors and other connected devices. Cyber-Physical Systems (for example, sensor networks attached to machines and their parts) can collect data about the condition of the machine.
This data can be used to estimate the Remaining Useful Life of the machine (or parts of it), which improves refurbishment decision-making. The refurbishment process is usually preceded by an inspection phase, which identifies the particular parts of a machine that need to be re-manufactured. Such inspections can be carried out by the machine vendor or by other refurbishment centers, which use the results to properly plan the re-manufacturing process, but also to present to the machine owner (i.e., their "customer") the cost and the implications. Nevertheless, refurbishment frequently entails replacing the majority of a machine's parts -- a costly, if straightforward, approach. Replacing selected parts can, in some cases, lead to significant cost savings without any essential loss in the functionality and lifetime of the re-manufactured machinery. No matter whether you're replacing some or all of the components, it's vital to ensure that the re-manufacturing process adheres to rigorous standards and quality control procedures. It must include a thorough inspection of the machinery before, during and after the refurbishment process. Large industrial enterprises that staff their own refurbishment teams tend to follow their own standardized and validated refurbishment processes. On the other hand, organizations without their own refurbishment teams can consider employing the services of certified refurbishment centers with a proven track record that keep their technicians trained and up-to-date with the evolution of technology. Stay tuned for our follow-up article, outlining some of the decision-making criteria Asset Management professionals should consider when reviewing their refurbishment, upgrade, and repair options. John Soldatos holds a PhD in Electrical & Computer Engineering.
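One simple way the sensor data mentioned above can feed a Remaining-Useful-Life (RUL) estimate is to fit a linear trend to a degradation signal (e.g., vibration amplitude) and extrapolate to a failure threshold. This is only a sketch of the idea; the readings and threshold below are assumed examples, and real RUL models are typically far more sophisticated.

```python
def estimate_rul(times, readings, failure_threshold):
    """Least-squares linear fit of readings vs. time; returns the time
    remaining until the fitted trend crosses failure_threshold,
    or None if the signal shows no degradation trend."""
    n = len(times)
    mt = sum(times) / n
    mr = sum(readings) / n
    slope = (sum((t - mt) * (r - mr) for t, r in zip(times, readings))
             / sum((t - mt) ** 2 for t in times))
    if slope <= 0:
        return None  # signal is flat or improving: no trend to extrapolate
    intercept = mr - slope * mt
    t_fail = (failure_threshold - intercept) / slope
    return max(0.0, t_fail - times[-1])

hours     = [0, 100, 200, 300, 400]
vibration = [1.0, 1.2, 1.45, 1.6, 1.85]   # mm/s, trending upward (assumed data)
print(estimate_rul(hours, vibration, failure_threshold=3.0))  # hours left
```

A plant would rerun such an estimate as new readings arrive, triggering inspection or refurbishment planning once the projected RUL falls below a lead-time margin.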
He is co-founder of the open source platform OpenIoT and has had a leading role in over 15 Internet-of-Things and Big Data projects in manufacturing, logistics, smart energy, smart cities and healthcare. He has published more than 150 articles in international journals, books and conference proceedings, and has authored numerous technical articles and blog posts in the areas of IoT, cloud computing and Big Data. He has recently edited and co-authored the book "Building Blocks for IoT Analytics".
https://www.prometheusgroup.com/posts/rip-and-replace-do-you-have-any-other-choices-for-your-assets
Week 9, Day 1 of The Artist's Way Today, let's honestly name fear and heal it with love. FEAR ONE OF THE MOST important tasks in artistic recovery is learning to call things—and ourselves—by the right names. Most of us have spent years using the wrong names for our behaviors. We have wanted to create and we have been unable to create and we have called that inability laziness. This is not merely inaccurate. It is cruel. Accuracy and compassion serve us far better. Blocked artists are not lazy. They are blocked. Being blocked and being lazy are two different things. The blocked artist typically expends a great deal of energy—just not visibly. The blocked artist spends energy on self-hatred, on regret, on grief, and on jealousy. The blocked artist spends energy on self-doubt. The blocked artist does not know how to begin with baby steps. Finding it hard to begin a project does not mean you will not be able to do it. It means you will need help—from your higher power, from supportive friends, and from yourself. First of all, you must give yourself permission to begin small and go in baby steps. These steps must be rewarded. Setting impossible goals creates enormous fear, which creates procrastination, which we wrongly call laziness. Do not call procrastination laziness. Call it fear. Fear is what blocks an artist. The fear of not being good enough. The fear of not finishing. The fear of failure and of success. The fear of beginning at all. There is only one cure for fear. That cure is love. Use love for your artist to cure its fear. Stop yelling at yourself. Be nice. Call fear by its right name. (The Artist's Way, 2016, p. 151 – 152) I acknowledge my fears and respond with love. I recognize that my inability to create is not due to laziness, but rather to fear. I give myself permission to take small steps. I reward myself for my progress. I will not call procrastination laziness, but rather fear. I will use love to cure my fear and be kind to myself. 
I will call fear by its right name and work to overcome it. Read your morning pages! This process is best undertaken with two colored markers, one to highlight insights and another to highlight actions needed. Do not judge your pages or yourself. This is very important. Yes, they will be boring. Yes, they may be painful. Consider them a map. Take them as information, not an indictment. Take Stock: Who have you consistently been complaining about? What have you procrastinated on? What, blessedly, have you allowed yourself to change or accept? Take Heart: Many of us notice an alarming tendency toward black-and-white thinking: "He's terrible. He's wonderful. I love him. I hate him. It's a great job. It's a terrible job," and so forth. Don't be thrown by this. Acknowledge: The pages have allowed us to vent without self-destruction, to plan without interference, to complain without an audience, to dream without restriction, to know our own minds. Give yourself credit for undertaking them. Give them credit for the changes and growth they have fostered. "Life shrinks or expands in proportion to one's courage." – Anaïs Nin How can you reward yourself, each day, when taking small steps with a creative project?
https://intentioninspired.com/unlock-creativity-by-overcoming-fear-with-love/
Metastable hydrides are an interesting class of hydrogen carrier since many offer high volumetric and gravimetric hydrogen densities and rapid hydrogen release rates at low temperatures. Unlike reversible metal hydrides, which operate near equilibrium, metastable hydrogen carriers rely on kinetic barriers to limit or prevent the release of hydrogen and can be prepared in a stabilized state far from equilibrium. Despite the advantage of low temperature hydrogen release, this type of one-way thermolysis reaction can be difficult to control since the hydrogen release rate varies with temperature and composition. Here we developed a kinetic rate equation from a series of isothermal measurements, which describes the relationship between temperature, hydrogen release rate and composition for aluminum hydride. This equation is necessary to thermally control the rate of hydrogen release throughout decomposition. This equation was used to run a fuel cell at a controlled rate of ∼1 wt%/hr. Although the equation established in this paper relates specifically to aluminum hydride, the method used is applicable to other metastable hydrides. CITATION STYLE Graetz, J., & Vajo, J. J. (2018). Controlled hydrogen release from metastable hydrides. Journal of Alloys and Compounds, 743, 691–696. https://doi.org/10.1016/j.jallcom.2018.01.390
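The abstract does not reproduce the fitted rate equation itself, but thermolysis kinetics of this kind are commonly described by an Arrhenius-type form in which the release rate depends on temperature and on the reacted fraction. The sketch below uses that generic form with assumed illustrative parameters, not the fitted values from Graetz & Vajo (2018).

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def release_rate(T, alpha, A=1e10, Ea=100e3, n=1.0):
    """Generic hydrogen release rate dα/dt = A·exp(-Ea/RT)·(1-α)^n,
    where α is the decomposed fraction, T the temperature in K,
    A a prefactor, Ea an activation energy, and n a reaction order.
    All parameter values here are assumed, for illustration only."""
    return A * math.exp(-Ea / (R * T)) * (1.0 - alpha) ** n

# The qualitative behavior the abstract describes:
print(release_rate(400, 0.1) > release_rate(380, 0.1))  # rate rises with T
print(release_rate(400, 0.1) > release_rate(400, 0.8))  # rate falls as fuel depletes
```

Holding the release rate constant as α grows therefore requires gradually raising the temperature -- which is exactly why an explicit rate-temperature-composition relation is needed to feed a fuel cell at a steady ∼1 wt%/hr.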
https://www.mendeley.com/catalogue/a38677ea-e532-3d89-bcf8-61b917a6cc54/
The Lances fournies (French: "lances furnished") was a medieval equivalent to the modern army squad that would have accompanied and supported a man-at-arms (a heavily-armoured horseman popularly known as the "knight") in battle. These units formed companies under a captain, either as mercenary bands or in the retinue of wealthy nobles and royalty. Each lance was supposed to include a mixture of troop types (the men-at-arms themselves, lighter cavalry, infantry, and even noncombatant pages) that would have guaranteed a desirable balance between the various components of the company at large; however, it is often difficult to determine the exact composition of the lance in any given company, as the available sources are few and often centuries apart. A lance was usually led and raised by a knight in the service of his liege, yet it was not uncommon in certain periods for a less privileged man, such as a serjeant-at-arms, to lead a lance. More powerful knights, known as knights banneret, could field multiple lances. The origins of the lance lie in the retinues of medieval knights (Chaucer's Knight in the Canterbury Tales, with his son the Squire and his archer Yeoman, has similarities to a lance). When called by the liege, the knight would command men from his fief and possibly those of his liege lord, or in the latter's stead. Out of the Frankish concept of knighthood, associated with horsemanship and its arms, a correlation slowly evolved between the signature weapon of this rank, the horseman's lance, and the military value of the rank. In other words, when a noble spoke of his ability to field forces, the terms knights and lances became interchangeable. The lance had no consistent strength of arms throughout its usage as a unit. Different centuries and different states gave it a fluctuating character.
However, the basic lance of three men (a knight, a squire who served as a fighting auxiliary, and a non-combatant squire, primarily concerned on the battlefield with looking after the knight's spare horses or lances) seems to have evolved in the 13th century. An excellent description conveying its relevance is in Howard: "a team of half a dozen men, like the crew of some enormous battle tank." The 13th-century French rule of the Templars specified that a brother knight should have one squire if he had one warhorse, two if he had an extra one. In addition, he had a riding horse and a packhorse. In battle the squires would follow the brothers with the spare warhorses. A similar arrangement was also seen in Spain in the 1270s, according to Ramon Llull: "Neither horse, nor armor, nor even being chosen by others is sufficient to show forth the high honor that pertains to a Knight. Instead he must be given a squire and a servant to look after his horse" (The Book of the Order of Chivalry or Knighthood). The term Lances Fournies itself appeared much the same way as the Compagnies d'ordonnance: "Les Lances fournies pour les Compagnies d'ordenance du Roi", or "The lances furnished for the Companies ordered by the King". Upon the original establishment of the French compagnies d'ordonnance, the Lances Fournies were formed around a man-at-arms (a fully armored man on an armored horse) with a retinue of a page or squire, two or three archers, and a (slightly) lighter horseman known as the serjeant-at-arms or coutilier (literally "dagger man", a contemporary term for mounted bandits and brigands). All members of a lance were mounted for travel, but only the man-at-arms and the coutilier were regularly expected to fight on horseback, though of course both were also trained and equipped for dismounted action. Lances would be further organized as companies, each company numbering about 100 lances, effectively 400-plus fighting men and servants.
These companies were maintained even in peacetime, and became the first standing army in modern Europe. The last Duke of Burgundy, Charles the Bold, made a number of ordinances prescribing the organisation of his forces in the 1460s and 1470s. In the first ordinance of 1468, the army is clearly organised in three-man lances: a man-at-arms, a coustillier and a valet. In the Abbeville Ordinance of 1471, the army was re-organised into 1,250 lances of 9 men each: a man-at-arms, a coustillier, a non-combatant page, three mounted archers and three footsoldiers (a crossbowman, a handgunner, and a pikeman). This organisation is repeated in the 1472 and 1473 ordinances. The Duchy of Brittany also ordered the equivalent of the lance in an ordinance of 1450. While the basic lance was the familiar three-man structure of man-at-arms, coutilier and page, depending on the wealth of the man-at-arms, additional archers or juzarmiers (that is, men equipped with a guisarme) were added. At the highest income band specified (600-700 livres), either four archers, or three archers and a juzarmier, were added to the basic unit. In Italy in the 14th and 15th centuries, mercenary soldiers were recruited in units known variously as barbuta, lance or corazza, consisting of two to six men. Although it is traditionally thought that the three-man lance was introduced to Italy by the mercenaries of the White Company in the 1360s, in fact it had evolved somewhat earlier. The three-man lance consisted of two combatants, a man-at-arms and an armed squire, plus a page. Occasionally, a mounted archer could be substituted for the squire. In the mid-15th century, soldiers called lanze spezzate (literally "broken lances") evolved. These were men who, for some reason, had become detached from their mercenary companies and their lances and were now hired as individuals. They were then placed in new companies and lances under a new commander. In Germany, an indigenous form of the lance known as a gleve (pl.
gleven) developed. The three-man gleve may have existed in the early 14th century, with a knight supported by two sergeants. Later the sergeants were replaced by mercenaries. The equivalent of the lance of two combatants with a page is seen in Germany in the later 14th century, when the second combatant could be a spearman or an archer. However, in various regions, other sizes of gleven existed, of up to ten men, including up to three mounted archers (who would dismount to fight) and armed servants who acted as infantry. Kopia (Polish for lance) was the basic military formation in medieval Poland, identical to the lance unit employed elsewhere in Western Europe. A kopia was composed of a knight and his retinue (of 3-12 soldiers). On campaign, several kopias were combined to form a larger unit, the chorągiew. From the 15th century the term kopia was replaced by poczet.
http://dictionnaire.sensagent.leparisien.fr/Lances%20fournies/en-en/
The utility model discloses an improved semiconductor refrigerating fan. The improved semiconductor refrigerating fan comprises a semiconductor refrigerator and a box body shell, wherein the semiconductor refrigerator comprises a cold end surface and a hot end surface. The improved semiconductor refrigerating fan is characterized by further comprising a cross-flow air blower. A front panel of the box body shell is provided with an air outlet, and an air supply channel is arranged in the box body shell, wherein one end of the air supply channel is communicated with the air outlet, and the other end is communicated with an air outlet of the cross-flow air blower. The semiconductor refrigerator comprises a cold heat exchanger formed by combining cold exchanging sheets made of a plurality of heat conducting materials, wherein the cold heat exchanger is fixedly connected on the cold end surface of the semiconductor refrigerator and is meanwhile located in the air supply channel. A heat dissipation ventilating opening is formed in a rear panel of the box body shell. A heat exchanger connected with the hot end surface of the semiconductor refrigerator is further arranged in the box body shell, and the heat dissipation ventilating opening is communicated to the heat exchanger. Compared with the prior art, the improved semiconductor refrigerating fan improves cold air efficiency.
Q: Could someone tell me why the program does not return any int k in the end? I am working on this exercise:

A sequence of n integers, each no greater than 1000 (by absolute value), is given. Write a program, divisors, which finds how many of the given integers have the following property: the integer has exactly m positive divisors (each divisor should not be equal to 1, or to the integer itself).

The result should look like this:
Input: 7 4 6 20 12 64 1024 50 24
Output: 3
Explanation: The elements of the given sequence with exactly 4 divisors are 20, 12 and 50.

This is my code:

#include <iostream>
using namespace std;

bool divisors(int a, int m) {
    int br = 0;
    for (int i = 2; i < a; ++i) {
        if (a % i == 0) br++;
        if (br == m) return 1;
        else return 0;
    }
}

int main() {
    int n; // the number of elements
    int m; // the number of divisors
    int a, i, k = 0;
    cin >> n >> m;
    for (i = 1; 1 <= n; i++) {
        cin >> a;
        if (divisors(a, m)) k++;
    }
    cout << k << endl;
    return 0;
}

A: The loop condition in main compares the constant 1 with n, so it is always true and the loop never terminates:

for (i = 1; i <= n; i++) { // You want to compare with i, not with 1
    cin >> a;
    if (divisors(a, m)) k++;
}

Note that divisors has a related bug: its return statements sit inside the for loop, so the function returns on the very first iteration (and falls off the end without returning anything when a <= 2, which is undefined behavior). The comparison br == m should happen only after the loop has counted all the divisors.
With the dynamic changes that have taken place in the Indian legal system, there was a need to develop reliance on 'electronic evidence', which calls for insight into the admissibility of such evidence and into the interpretation of the law on how electronic evidence can be brought and filed before the courts. Briefly, digital or electronic evidence is any probative information stored or transmitted in digital form that a party to a court case may use at trial. Before accepting digital evidence, the court must ascertain its relevance, veracity and authenticity, and establish whether the fact is hearsay or whether a copy is being preferred to the original. Digital evidence is 'information of probative value that is stored or transmitted in binary form'. It is not limited to that found on computers but extends to evidence on digital devices such as telecommunication or electronic multimedia devices. Electronic evidence can be found in e-mails, digital photographs, ATM transaction logs, word-processing documents, instant message histories, files saved from accounting programs, spreadsheets, internet browser histories, databases, contents of computer memory, computer backups, computer printouts, Global Positioning System tracks, logs from a hotel's electronic door locks, and digital video or audio files. Digital evidence tends to be more voluminous, more difficult to destroy, easily modified, easily duplicated, potentially more expressive and more readily available.[iii]

Indian Evidence Act and Electronic Evidence: Tracing the Relation between Them

The definition of evidence as given in the Indian Evidence Act, 1872 covers a) the evidence of witnesses, i.e.
oral evidence, and b) documentary evidence, which includes electronic records produced for the inspection of the court.[iv] Section 3 of the Act was amended, and the phrase 'All documents produced for the inspection of the Court' was substituted by 'All documents including electronic records produced for the inspection of the Court'.[v] Regarding documentary evidence, in Section 59 the words 'Content of documents' were substituted by 'Content of documents or electronic records', and Sections 65A and 65B were inserted to incorporate the admissibility of electronic evidence. Traditionally, the fundamental rule of evidence is that direct oral evidence may be adduced to prove all facts, except documents. The hearsay rule holds that any oral evidence that is not direct cannot be relied upon unless it falls within one of the exceptions outlined in sections 59 and 60 of the Evidence Act. However, the hearsay rule[vi] is not as restrictive or as straightforward in the case of documents as it is in the case of oral evidence, because it is settled law that oral evidence cannot prove the contents of a document: the document speaks for itself. Where a document is absent, oral evidence cannot be given as to its accuracy, since the truth or accuracy of the oral evidence cannot then be compared with the contents of the document, and allowing it would disturb the hearsay rule. In order to prove the contents of a document, either primary or secondary evidence must be offered. While primary evidence of the document is the document itself[vii], it was realized that there would be situations in which primary evidence may not be available.
Thus, secondary evidence in the form of certified copies of the document, copies made by mechanical processes and oral accounts of someone who has seen the document was permitted under section 63 of the Evidence Act for the purpose of proving the contents of a document. The provision allowing secondary evidence therefore dilutes the principles of the hearsay rule somewhat, and is an attempt to reconcile the difficulties of securing the production of documentary primary evidence where the original is not available. Section 65 of the Evidence Act sets out the situations in which primary evidence of the document need not be produced, and secondary evidence, as listed in section 63 of the Evidence Act, can be offered. This includes situations when the original document:
- is in hostile possession;
- has been proved by the prejudiced party itself or any of its representatives;
- is lost or destroyed;
- cannot be easily moved, i.e. physically brought to the court;
- is a public document of the state;
- can be proved by certified copies when the law narrowly permits; or
- is a collection of several documents.[viii]
This is how the admissibility and proof of different forms of evidence developed. To find a clear-cut explanation of the admissibility of electronic evidence, a close review of the existing provisions and judicial precedents is needed.

Supreme Court and Electronic Evidence (Documents): Its Critical Appraisal

As documents came to be digitized, the hearsay rule faced several new challenges. While the law had mostly anticipated primary evidence (i.e. the original document itself) and had created special conditions for secondary evidence, increasing digitisation meant that more and more documents were electronically stored. As a result, the adduction of secondary evidence of documents increased. In the landmark case of Anvar P.K. v. P.V.
Basheer[ix], the Supreme Court noted that 'there is a revolution in the way that evidence is produced before the court'. In India before 2000, electronically stored information was treated as a document, and secondary evidence of these electronic 'documents' was adduced through printed reproductions or transcripts, the authenticity of which was certified by a competent signatory. The signatory would identify her signature in court and be open to cross-examination. This simple procedure met the conditions of both sections 63 and 65 of the Evidence Act. In this manner, Indian courts simply adapted a law drafted over a century earlier in Victorian England. However, as the pace and proliferation of technology expanded, and as the creation and storage of electronic information grew more complex, the law had to change more substantially. In sections 61 to 65 of the Indian Evidence Act, 1872, the words 'document or content of documents' have not been replaced by 'electronic documents or content of electronic documents'. Thus, the intention of the legislature is explicitly clear: not to extend the applicability of sections 61 to 65 to electronic records. It is a cardinal principle of interpretation that if the legislature has omitted to use any word, the presumption is that the omission is intentional; it is well settled that the legislature does not use any word unnecessarily.[x] In this regard, the Apex Court in Utkal Contractors & Joinery Pvt. Ltd. v. State of Orissa[xi] held that '...Parliament is also not expected to express itself unnecessarily. Even as Parliament does not use any word without meaning something, Parliament does not legislate where no legislation is called for. Parliament cannot be assumed to legislate for the sake of legislation; nor indulge in legislation merely to state what it is unnecessary to state or to do what is already validly done. Parliament may not be assumed to legislate unnecessarily.'
In Som Prakash v. State of Delhi[xii], the Supreme Court rightly observed that 'in our technological age nothing more primitive can be conceived of than denying discoveries and nothing cruder can retard forensic efficiency than swearing by traditional oral evidence only thereby discouraging the liberal use of scientific aids to prove guilt.' Statutory changes are needed to develop more fully a problem-solving approach to criminal trials and to deal with the heavy workload on investigators and judges. In SIL Import, USA v. Exim Aides Exporters, Bangalore[xiii], the Supreme Court held that 'Technological advancement like facsimile, Internet, e-mail, etc. were in swift progress even before the Bill for the Amendment Act was discussed by Parliament. So, when Parliament contemplated notice in writing to be given, we cannot overlook the fact that Parliament was aware of modern devices and equipment already in vogue.' Moreover, recently the Apex Court, in the landmark judgment of Shafhi Mohammad v. State of H.P.[xiv] at page 808, quoted: '21. We have been taken through certain decisions which may be referred to. In Ram Singh v. Ram Singh [1985 Supp SCC 611], a three-Judge Bench considered the said issue. English judgments in R. v. Maqsud Ali [(1966) 1 QB 688 : (1965) 3 WLR 229 : (1965) 2 All ER 464 (CCA)] and R. v. Robson [(1972) 1 WLR 651 : (1972) 2 All ER 699 (CCC)], and American law as noted in American Jurisprudence 2d (Vol. 29) p. 494, were cited with approval to the effect that it will be wrong to deny to the law of evidence advantages to be gained by new techniques and new devices, provided the accuracy of the recording can be proved. Such evidence should always be regarded with some caution and assessed in the light of all the circumstances of each case. Electronic evidence was held to be admissible subject to safeguards adopted by the Court about the authenticity of the same.
In the case of tape-recording, it was observed that voice of the speaker must be duly identified, accuracy of the statement was required to be proved by the maker of the record, possibility of tampering was required to be ruled out. Reliability of the piece of evidence is certainly a matter to be determined in the facts and circumstances of a fact situation. However, threshold admissibility of an electronic evidence cannot be ruled out on any technicality if the same was relevant…
29. The applicability of procedural requirement under Section 65-B(4) of the Evidence Act of furnishing certificate is to be applied only when such electronic evidence is produced by a person who is in a position to produce such certificate being in control of the said device and not of the opposite party. In a case where electronic evidence is produced by a party who is not in possession of a device, applicability of Sections 63 and 65 of the Evidence Act cannot be held to be excluded. In such case, procedure under the said sections can certainly be invoked. If this is not so permitted, it will be denial of justice to the person who is in possession of authentic evidence/witness but on account of manner of proving, such document is kept out of consideration by the court in the absence of certificate under Section 65-B(4) of the Evidence Act, which party producing cannot possibly secure. Thus, requirement of certificate under Section 65-B(4) is not always mandatory.
30. Accordingly, we clarify the legal position on the subject on the admissibility of the electronic evidence, especially by a party who is not in possession of device from which the document is produced. Such a party cannot be required to produce certificate under Section 65-B(4) of the Evidence Act. The applicability of requirement of certificate being procedural can be relaxed by the court wherever interest of justice so justifies.
Conclusion

It is clear that the admission of electronic evidence is the norm across jurisdictions, rather than its exclusion. Along with its advantages, the admissibility of electronic records can also be complex, and some jurisdictions, as in India, have imposed specific requirements for admissibility. It is thus upon the 'keepers of law', the courts, to see that the correct evidence is presented and administered so as to facilitate the smooth working of the legal system. Sound and informed governance practices, along with scrutiny by the courts, must be keenly observed to determine whether the evidence fulfils the three essential legal requirements of authenticity, reliability and integrity. Hopefully, with the Supreme Court having re-defined the rules, the Indian courts will adopt a consistent approach and will execute all possible safeguards for accepting and appreciating electronic evidence. And for the same, it is rightly said, 'When making decisions about people, stop confusing experience with evidence. Just as owning a car doesn't make you an expert on engines, having a brain doesn't mean you understand psychology.'[xv]

HEAD-NOTES
[i] Shodhganga, Introduction: Need for Enactment of Information Technology Act 2000, SHODHGANGA (Mar. 23, 2018, 02:56 PM), http://shodhganga.inflibnet.ac.in/bitstream/10603/7829/16/16_chapter%207.pdf.
[ii] Adv. Prashant Mali, Electronic Evidence/Digital Evidence & Cyber Law in India, LINKEDIN (Mar. 23, 2018, 02:45 PM), https://www.linkedin.com/pulse/electronic-evidence-digital-cyber-law-india-adv-prashant-mali-.
[iii] Vivek Dubey, Admissibility of Electronic Evidence: An Indian Perspective, MEDCRAVE (Mar. 23, 2018, 03:09 PM), http://medcraveonline.com/FRCIJ/FRCIJ-04-00109.pdf.
[iv] The Indian Evidence Act, 1872, No. 1, Acts of Parliament, 1872, Section 3.
[v] As amended by the Information Technology Act, 2000, No. 21, Acts of Parliament, 2000, Section 92.
[vi] The Legal Blog, 'Hearsay' Evidence: The Law, LEGAL BLOG (Mar. 23, 2018, 04:23 PM), http://www.legalblog.in/2011/01/hearsay-evidence-law.html.
[vii] The Indian Evidence Act, 1872, Section 62, supra 10.
[viii] Stephen Mason, supra 7.
[ix] Anvar P.K. v. P.V. Basheer, (2014) 10 S.C.C. 473 (India).
[x] Prashanti, E-Evidence in India, LEGAL SERVICES INDIA (Mar. 23, 2018, 05:59 PM), http://www.legalservicesindia.com/.
[xi] Utkal Contractors & Joinery Pvt. Ltd. v. State of Orissa, A.I.R. 1987 S.C. 1454 (India).
[xii] Som Prakash v. State of Delhi, A.I.R. 1974 S.C. 989 (India).
[xiii] SIL Import, USA v. Exim Aides Exporters, Bangalore, (1999) 4 S.C.C. 567 (India).
[xiv] Shafhi Mohammad v. State of H.P., (2018) 2 S.C.C. 801 (India).
[xv] Adam Grant, Evidence Quotes, BRAINY QUOTES (Mar. 25, 2018, 07:28 PM), https://www.brainyquote.com/quotes/adam_grant_834260?src=t_evidence.
http://www.lawyersclubindia.com/articles/Admissibility-of-Electronic-Evidence-under-Indian-Laws-A-Brief-Overview-10310.asp
MANILA: Ferdinand Marcos Jr., the Philippine president and son of the late president Ferdinand Marcos, said on Tuesday (July 5) that he wanted relations with Beijing to be about more than just their dispute over the South China Sea, and he emphasized the importance of multilateral engagement in dealing with problems. Ahead of Chinese Foreign Minister Wang Yi's visit to Southeast Asia, Marcos told a press conference that the two countries' relationship had more than one dimension. By working to "resolve the difficulties that we have," Marcos said he hoped that relations could be restored after years of maritime disputes, with space for new areas of cooperation, such as military exchanges. At the same time, Marcos must strike an uneasy balance between expanding trade with China and preserving tight relationships with the United States, a former colonial power that still wields considerable influence in the Philippines. There have been indications that the new Philippine president intends to strengthen relations with China, but at the same time he has sworn to protect the country's national interests. When it came to the South China Sea dispute, Marcos pledged throughout his campaign to take a bilateral approach with China. There has to be more global interaction, particularly with the Association of Southeast Asian Nations (ASEAN), he said on Tuesday, adding that leaders are "essential actors in regional geopolitics… because they are stakeholders" in the Asia-Pacific area.
https://www.theasianaffairs.com/philippines-marcos-thinks-china-relations-go-beyond-conflict/
POSNER, Circuit Judge. Recreational frequenters of the Shawnee National Forest in southern Illinois have appealed from the denial of their request for a preliminary injunction. The Forest Service authorized a sale of timber, to be harvested by the method of logging known as "group selection," from a 661-acre tract, called "Fairview," of the 260,000-acre national forest. The suit charges that the sale violates federal law, in particular the National Environmental Policy Act, 42 U.S.C. §§ 4321 et seq., and the National Forest Management Act, 16 U.S.C. §§ 1600 et seq. The former statute requires a federal agency to prepare an environmental impact statement before the agency undertakes a "major" action having a "significant" impact on the environment, 42 U.S.C. § 4332(2)(C), while the latter requires contracts for the exploitation of the national forests' timber resources to conform to the Forest Service's land management plans. 16 U.S.C. § 1604(i). The Service had issued such a plan for the Shawnee National Forest in 1986, and with it a statement of the plan's environmental impact that the plaintiffs acknowledge complied with the National Environmental Policy Act. The plan divided the forest into areas called "Management Prescriptions." Fairview is in Management Prescription 3.2, and the plan authorizes logging there by the method known as "even-aged management," and specifically by clear-cutting. (If all the trees in a tract are cut down at once, the new trees that grow in their place will be of roughly the same age.) But the plan also authorizes group selection in lieu of clear-cutting when necessary to meet visual quality objectives.

Concerned by the amount of clear-cutting authorized by the 1986 plan, conservation-minded users of Fairview sought review of the plan in accordance with procedures that the Forest Service has established. The administrative proceeding was dropped when the Service agreed to amend the plan to limit the amount of clear-cutting allowed.
But the settlement agreement (which, incidentally, the plaintiffs in this case refused to sign) disclaims any purpose of preventing the plan from becoming final and effective, as it has since become. The amended plan envisaged by the settlement, along with a statement of the environmental impact of the amendments, is in the works but has not yet been completed; nevertheless the Forest Service has suspended clear-cutting in Management Prescription 3.2 indefinitely. Earlier this year the Forest Service revived a previous plan for logging in Fairview. A private logger would be permitted to conduct group selection by clearing the trees on patches ranging from one-quarter acre to two acres in size scattered throughout the Fairview area; added together the patches would come to 26 acres. After receiving written comments from the later-to-be plaintiffs in this case and others, the supervisor of the Shawnee National Forest issued a written decision authorizing the project to go forward. The decision indicates that the forest supervisor believes the project to be consistent with Management Prescription 3.2, and hence not to violate the National Forest Management Act, because group selection is necessary to achieve the management plan's visual quality objectives. The decision does not say this in so many words, but the implication is unmistakable. Group selection is said to have been chosen because it "responds to public concerns about the effects of clearcutting on ... forest services," and clear-cutting rejected because it "would not meet the established visual quality objectives of partial retention." Management Prescription 3.2 allows group selection in lieu of clear-cutting when it is necessary to meet visual quality objectives, and evidently the forest supervisor thought it was. He could have said this more clearly, but a reviewing court may — without violating the rule of SEC v. Chenery Corp., 332 U.S. 194, 67 S.Ct. 1575, 91 L.Ed. 
1995 (1947), against the court's supplying a rationale for the agency's decision — "uphold a decision of less than ideal clarity if the agency's path may reasonably be discerned." Bowman Transportation Co., Inc. v. Arkansas-Best Freight Co., Inc., 419 U.S. 281, 286, 95 S.Ct. 438, 442, 42 L.Ed.2d 447 (1974); see also Colorado Interstate Gas Co. v. FPC, 324 U.S. 581, 595, 65 S.Ct. 829, 836, 89 L.Ed. 1206 (1945); Consolidated Gas Transmission Corp. v. FERC, 771 F.2d 1536, 1550 n. 18 (D.C.Cir.1985); Ceramica Regiomontana, S.A. v. United States, 810 F.2d 1137 (Fed.Cir.1987) (per curiam). That undemanding standard is satisfied here — especially when allowance is made for the fact that the decision is that of a local forest supervisor rather than of the members of a sophisticated agency in Washington. It was in a case involving such an agency, the Federal Power Commission, that the Supreme Court declined to remand for further findings because even though the Commission's findings "leave much to be desired," Colorado Interstate Gas Co. v. FPC, supra, 324 U.S. at 595, 65 S.Ct. at 836, "the path which it followed can be discerned." Id. It can be discerned here. A remand for better findings would serve the plaintiffs' interest in delaying the timber sale, but no other interest.

The forest supervisor suggested another reason for authorizing group selection in Fairview. It would be a boon to "shade intolerant" trees (trees that need more sun than they can get in a dense forest — the Forest Service's proclivity to employ a wooden vocabulary is an unintentional irony in this case) because it would create open areas through which the sun, when it is not directly overhead, would stream into the surrounding woods. It is these very trees — the oaks and hickories — that the logger is after.
The patches that he will be clearing if we let him are the ones in which those trees are concentrated; apparently, group selection, which is more costly than clear-cutting, pays only when the groups selected include the commercially more valuable trees. After stating that the project will enable the Forest Service's employees at Shawnee National Forest to gain experience with group selection, the forest supervisor's decision concludes with a "finding of no significant [environmental] impact." The basis for this finding is an "environmental assessment" that the forest supervisor had prepared shortly before. An environmental assessment is a rough-cut, low-budget environmental impact statement designed to show whether a full-fledged environmental impact statement — which is very costly and time-consuming to prepare and has been the kiss of death to many a federal project — is necessary. River Road Alliance, Inc. v. Corps of Engineers, 764 F.2d 445, 449 (7th Cir.1985). ("Rough-cut" and "low budget" are relative terms: the environmental assessment in this case is 112 pages long — 4.3 pages per acre.) Since the forest supervisor did not believe that the logging contract would have a significant environmental impact, he did not think an environmental impact statement required. Incidentally, the environmental assessment also notes that one of the major concerns with clear-cutting is its effect on visual quality and that uneven-aged management helps "maintain aesthetic values." This is further evidence that the forest supervisor's decision was indeed based, in part anyway, on a belief that the substitution of group selection for clear-cutting in Fairview was necessitated by concern about visual quality. After exhausting their administrative remedies by appealing the forest supervisor's decision to the regional forester, 36 C.F.R.
§ 217.7(b)(1), the plaintiffs brought this suit, which the district judge referred to a magistrate for a brief evidentiary hearing after which the judge denied the plaintiffs' request for a preliminary injunction. And this is the first puzzle about the case: why the district judge thought it proper to order an evidentiary hearing. When persons harmed by administrative action bring a suit for injunction in a federal district court, it is not because they want, or are entitled to, a trial. It is because when Congress has not prescribed the mode of judicial review of a particular type of administrative decision, here a decision by the Forest Service, the presumption is not that judicial review is unavailable but that it is available by proceeding in federal district court under 28 U.S.C. § 1331, which gives those courts power to entertain suits arising under federal laws. Abbott Laboratories v. Gardner, 387 U.S. 136, 142-43, 87 S.Ct. 1507, 1512, 18 L.Ed.2d 681 (1967); Citizens to Preserve Overton Park v. Volpe, 401 U.S. 402, 410, 91 S.Ct. 814, 820, 28 L.Ed.2d 136 (1971); Rockford League of Women Voters v. Nuclear Regulatory Comm'n, 679 F.2d 1218, 1220 (7th Cir.1982). In such a suit the district court is a reviewing court, like this court; it does not take evidence. Florida Power & Light Co. v. Lorion, 470 U.S. 729, 743-44, 105 S.Ct. 1598, 1606-07, 84 L.Ed.2d 643 (1985); United States v. Carlo Bianchi & Co., 373 U.S. 709, 714-15, 83 S.Ct. 1409, 1413, 10 L.Ed.2d 652 (1963); St. James Hospital v. Heckler.

Confining the district court to the record compiled by the administrative agency rests on practical considerations that deserve respect. Administrative agencies deal with technical questions, and it is imprudent for the generalist judges of the federal district courts and courts of appeals to consider testimonial and documentary evidence bearing on those questions unless the evidence has first been presented to and considered by the agency.
Trees may seem far removed from the arcana of administrative determination, but one has only to glance at the documents submitted in this case to realize that "silviculture" is in fact a technical field, and not just one with a dry and forbidding vocabulary. Therefore only if there is no record and no feasible method of requiring the agency to compile one in time to protect the objector's rights — in short, only (to repeat) if there is an emergency — should an objector be allowed to present evidence in court showing why the agency acted unlawfully. And this was not such a case. The forest supervisor made a variety of interpretive and factual determinations in a substantial written opinion and even lengthier environmental assessment, and he did so after the plaintiffs had submitted their own voluminous evidentiary materials to him. It is unclear whether the supervisor would have granted the objectors an oral hearing if they had requested one, or even whether they did request one, but these questions are immaterial. The plaintiffs are unable to show that the paper hearing that the supervisor did conduct was inadequate to develop the facts necessary to a sound decision. The Forest Service did not object to the evidentiary hearing before the magistrate, or otherwise invoke the principle that limits the reviewing court, even when it is a district court, to the administrative record save in exceptional circumstances not present here. And by not objecting it waived its right to object on this appeal to that hearing. But the waiver of a right to object to an evidentiary hearing on a motion for preliminary injunction does not carry over to the subsequent proceeding for deciding whether to enter a permanent injunction. 
Should there be such a proceeding in this case, therefore, it would be open to the Forest Service to argue that the district judge must base his decision whether or not to enter a permanent injunction on the administrative record, or at least must not permit the introduction of any additional evidence besides what was before the magistrate. And for reasons stated earlier, the judge would have to accept the argument. To understand fully why we have limited the scope of the Forest Service's waiver of objection to evidentiary proceedings in the district court in this way, one first must understand that the ultimate question to be decided when a plaintiff moves for a preliminary injunction is whether granting, or denying, the motion is the decision that will minimize the costs of error arising from the fact that the motion must be decided without a full hearing on the merits.

This is such a case. Because the plaintiffs are not entitled to present evidence in court to challenge the forest supervisor's decision (and for the reasons discussed earlier, this point has not been waived so far as further proceedings in the district court are concerned), there will never be an evidentiary hearing in court. All that the plaintiffs are entitled to do in the courts is to try to persuade the district judge and then us, on the basis of the evidence that was before the forest supervisor when he made his decision, that the decision is unlawful. The record is complete; there is no point in any further proceedings in the district court; we may therefore consider the case as if the district judge had entered judgment for the Forest Service at the end of the entire litigation. The point is critical because the plaintiffs have the better of the argument on the issue of the respective irreparable harms to the parties if there is an interval between preliminary and final disposition.
The plaintiffs are users of the Fairview area of the Shawnee National Forest; they like it in its present, natural state; and trees cut down this fall will not have grown back to their present height till most of the plaintiffs are dead. The quantification of such a harm is difficult but on the other side there is — nothing. The Forest Service's anticipated revenues from the logging are a few thousand dollars, and the irreparable harm from the delay of the project till a full hearing in the district court would be at most the time value of the profit component of that revenue, a value which no one has bothered to quantify and which probably is trivial. A harm so purely pecuniary — so readily quantifiable — from the grant of a preliminary injunction could in any event be prevented by requiring the plaintiff to post an injunction bond or equivalent security in accordance with Rule 65(c) of the Federal Rules of Civil Procedure, which makes such security mandatory, although a number of environmental decisions, illustrated by People of California ex rel. Van de Kamp v. Tahoe Regional Planning Agency, 766 F.2d 1319, 1325-26, amended on other grounds, 775 F.2d 998 (9th Cir.1985) (per curiam), and Friends of the Earth, Inc. v. Brinegar, 518 F.2d 322 (9th Cir.1975) (per curiam), and our own Scherr v. Volpe, 466 F.2d 1027, 1035 (7th Cir.1972), waive the requirement or allow the posting of a nominal bond. The opinion of Justice (now Chief Justice) Rehnquist in PACCAR contains language that could be thought to imply that in cases in which a preliminary injunction is sought against a government agency, the sliding-scale approach that this and other courts use in deciding on the propriety of granting such an injunction is improper, at least to the extent that it will in some cases authorize the granting of the injunction even if the plaintiff is unable to show that he probably will win the trial on the merits. 424 U.S. at 1307, 96 S.Ct. at 848.
It is unclear whether this implication was intended; in any event this circuit's position is that the sliding-scale cases (Roland, American Hospital Supply, Lawson, and the rest) are applicable even when the defendant is a government agency. Busboom Grain Co. v. ICC, 830 F.2d 74, 75 (7th Cir.1987).

Our discussion of the standard for preliminary injunctions may seem a bit to one side of this case, since the harms that will ensue while a case is awaiting the trial on the merits are irrelevant when there will never be a trial because the district court's function is merely to review the record compiled in an administrative proceeding. And since there is no interval between preliminary and final consideration in such a case, there is no reason to have a separate preliminary-injunction phase in it. But that is in general, not in every case: (1) If the administrative record is so vast or complicated that the district judge cannot analyze it and make his final decision in time to avert harm to the plaintiff due to delay, then the plaintiff can move for a preliminary injunction. That would be the kind of case in which a court of appeals might grant a stay of administrative action if that court was the first tier of judicial review of the agency's action, rather than the district court. 5 U.S.C. § 705; Virginia Petroleum Jobbers Ass'n v. FPC, 259 F.2d 921 (D.C.Cir.1958); Busboom Grain Co. v. ICC, supra, 830 F.2d at 75. The standard is the same whether a preliminary injunction against agency action is being sought in the district court or a stay of that action is being sought in this court. Id. (2) Likewise the plaintiff can seek a preliminary injunction against the execution of the administrative decision if the record is incomplete when suit is filed and if, as in (1), time is pressing. This may have been the reason for the hearing before the magistrate in this case.
Apparently, certain key documents that were part of the record of the administrative action, such as the environmental impact statement for the 1986 management plan, were not submitted to the court until that hearing. The record is now complete, however. (3) Or, what is closely related, a preliminary injunction may be proper if the case is one of the unusual administrative review cases in which an evidentiary hearing is necessary in order to reconstruct the ground or contents of the agency's decision.

It is no accident that these reasons why a court reviewing an administrative decision might grant a preliminary injunction in the exceptional case are the same reasons why a reviewing court might hold an evidentiary hearing in the exceptional case; the underlying criterion is the same in both cases — an emergency that prevents leisurely consideration by the reviewing court of an adequate administrative record. But they are not reasons for a further evidentiary hearing in the district court after the administrative record is complete and both the district court and this court have had time to study it. Should we be wrong, however, it would make no difference in this case. The plaintiffs are unable to indicate to us what further evidence they might introduce in the district court that would strengthen their legal claims, which we find very weak although we share the plaintiffs' desire to preserve some forest covering in this almost bald state. As a general rule (we shall take up the exceptions shortly), once an environmental impact statement has been issued for a project, the project can be carried out without the agency's having to issue a new statement for every stage of the project. Marsh v. Oregon Natural Resources Council, 490 U.S. 360, 109 S.Ct. 1851, 1858-59, 1864, 104 L.Ed.2d 377 (1989); Headwaters, Inc. v. Bureau of Land Management, 914 F.2d 1174, 1178 (9th Cir.1990). Otherwise the project could never be completed.
It would be the application to law of Zeno's Paradox (how can one cross even a finite interval in a finite amount of time, when any interval can be divided into an infinite number of segments each of which must be crossed in turn in order to traverse the entire interval?). So if group selection in Fairview is authorized by the Shawnee National Forest plan for the area called Management Prescription 3.2, then, at least as a first approximation, the Forest Service was not required to prepare such a statement and the National Environmental Policy Act, the source of such requirements, was not violated. Nor was the National Forest Management Act violated; it merely required that the logging contract comply with the management plan. The plaintiffs read the plan to say that if the Service does not want to use clear-cutting it must demonstrate that group selection will make the forest look better than individual-tree selection or no logging at all. This of course would be an impossible burden, because a forest does not look better with bald patches than without. We read the plan differently, as did the forest supervisor and also the magistrate and the district judge. We read it to say that clear-cutting is permissible throughout Management Prescription 3.2 (and therefore throughout Fairview) unless the results would be too unsightly (fail to achieve "visual quality objectives"), in which event the Service can authorize a less unsightly form of logging, such as group selection. The plan authorizes clear-cutting and the settlement left the plan intact until it is amended, which it has not been. If in this interim period the Forest Service decides to authorize a less unsightly form of logging than the clear-cutting that it is allowed to authorize, this does not violate the plan.
The plaintiffs' able counsel in this court (they had no counsel until the appeal) have mounted ingenious arguments aimed at showing why clear-cutting might be better than group selection after all — such as that clear-cutting is done all at once, whereas group selection might entail numerous separate intrusions of loggers and their equipment into Fairview because the 26 acres authorized for group selection are in scattered, noncontiguous patches. The arguments are implausible coming from plaintiffs who until this lawsuit denounced clear-cutting as the worst method of logging from the standpoint of preserving forests in their natural state, which is the plaintiffs' goal. And if the frequency with which the loggers go on the land is the critical consideration, then not group selection but individual tree selection is the worst form of logging, although the plaintiffs like it the best.

As for the plaintiffs' argument that Management Prescription 3.2 did not contemplate widespread use of group selection and that 26 acres out of 661 is widespread, the second half of this argument is a good deal more persuasive than the first. Depending on the precise scatter of the patches throughout the Fairview area, they could well give it a scarred, ragged appearance that would justify calling the use of group selection "widespread" throughout the area. Since the patches are as small as a quarter of an acre, there could be dozens of them in what is after all little more than one square mile. But the management plan would have authorized the Forest Service to clear the whole area, and we find nothing in the plan to limit the amount of group selection that can be substituted for clear-cutting for the sake of visual quality, feebly as that quality may seem to be served by the substitution.
We said there were exceptions to the principle that a fresh environmental impact statement, with all the costs and delay entailed thereby, is unnecessary for a mere stage in a project for which such a statement had been prepared and found adequate under the National Environmental Policy Act. The Supreme Court held in Marsh that if because of changed circumstances or new information the old statement is not adequate to assess the environmental impact of the new project, there must be a new statement (109 S.Ct. at 1858-59; see also 40 C.F.R. § 1502.9(c)(1)) — provided of course that the project meets the criteria for an environmental impact statement. This one does not, and we therefore need not decide whether, if it did, the standard of Marsh for when the agency must prepare a new statement would be satisfied. The environmental assessment that the forest supervisor prepared demonstrates, with adequate support in the record, that group selection in Fairview is not a major federal action likely to have a significant environmental impact. Only 26 acres of trees are at stake and the forest supervisor was explicit that the sale of these trees is not a precedent, an open sesame to future sales either from Fairview or from other parts of the Shawnee National Forest — a forest of ten thousand times 26 acres. The sale will have some impact because the clearing of the trees in swatches ("group selection") will make the Fairview area uglier than it is now, but less ugly than it would be if clear-cutting were employed, as the plan authorized. The incremental adverse environmental impact is negative. Individual tree selection might seem best from an environmental standpoint, but this is irrelevant to the question whether group selection is more harmful to the environment than the clear-cutting that the plan, duly accompanied by a proper environmental impact statement, authorized.
Yet it is at least interesting to note that the supervisor thought individual tree selection worse for the environment than group selection. He was concerned about the lack of sunlight for the oaks and hickories, a lack that individual tree selection would not correct. If this seems a hypocritical concern because the oaks and hickories are the very targets of the logging operations that he has authorized, still we cannot say that oaks and hickories in the woods adjacent to the cleared patches will not benefit from the additional sunlight. Moreover, "hypocritical" is too strong. The Forest Service is not forbidden to consider the benefits to loggers, and hence to the consumers of wood products, in deciding how to manage our national forests. The national forests, unlike the national parks, are not wholly dedicated to recreational and environmental values. And this means that it was not, as the plaintiffs argue, improper for the forest supervisor to consider the value to the Forest Service of experimenting with group selection in the Shawnee National Forest.

From the standpoint of the National Environmental Policy Act the essential point is that the sale will not create new environmental effects, effects not envisaged when the environmental impact statement was prepared in 1986. But the other points in the preceding paragraph are not irrelevant to this case. They show, contrary to another argument made by the plaintiffs, that the forest supervisor was not acting arbitrarily in authorizing this limited experiment with group selection; his decision is no more vulnerable on that ground than on the ground that it violates the National Environmental Policy Act or the National Forest Management Act. Other objections to the forest supervisor's decision are pressed but they have no possible merit and we mention one of them only because the plaintiffs emphasize it so.
That is whether the decision gives adequate consideration to the impact of group selection on the possibility of creating a Forest Interior Management Unit in Fairview. This dreary bureauphorism denotes a habitat for neotropical birds, which is to say birds that winter in South America (warblers are an example). These birds require a large area of continuous tree cover, which group selection would ruin — though clear-cutting even more so. If the plan for the Shawnee National Forest is amended to prohibit clear-cutting, as the plaintiffs hope, then if group selection is also barred maybe a habitat for these migratory birds can be created in or rather including Fairview (Fairview itself is too small for such a habitat). And a regulation under the National Environmental Policy Act forbids an agency to take any action that would "limit the choice of reasonable alternatives" while a decision for which an environmental impact statement is required is under consideration. 40 C.F.R. § 1506.1(a)(2). The decision in question is the decision to adopt an amended management plan and, as we noted earlier, has not yet been made. The settlement agreement pursuant to which the plan is being amended designates twenty areas in the Shawnee National Forest as Forest Interior Management Units, and none of them includes any part of Fairview. This is some evidence that the inclusion of Fairview in such a unit would not be a "reasonable alternative." On the basis of this and other evidence the forest supervisor concluded that the regulation was inapplicable, and we cannot say that his decision was arbitrary or otherwise in error. The judgment of the district court denying the plaintiffs' motion for a preliminary injunction is affirmed; further proceedings in the district court — if any — shall conform to the principles set forth in this opinion. AFFIRMED.
https://www.leagle.com/decision/19901358919f2d43911272
While the Camp David Accords were negotiated over thirteen days in September 1978, they were actually the result of several months of diplomatic efforts that began when Jimmy Carter took over the presidency in January 1977, after defeating Gerald Ford. The agreement recognized the "legitimate rights of the Palestinian people" and was to implement a process that would guarantee the full autonomy of the people within five years. Begin insisted on the adjective "full" to ensure that the autonomy granted would be as extensive as possible. This full autonomy was to be discussed with the participation of Israel, Egypt, Jordan and the Palestinians. The withdrawal of Israeli troops from the West Bank and Gaza was to follow the election of an autonomous authority to replace the Israeli military government. The agreements did not mention the Golan Heights, Syria or Lebanon. It was not the comprehensive peace that Kissinger, Ford, Carter or Sadat had envisioned in the preceding years. It was less clear than the Sinai Accords and was later interpreted differently by Israel, Egypt and the United States. The fate of Jerusalem was deliberately excluded from this agreement. More importantly, the United Nations never formally accepted the first of the agreements, the "Framework for Middle East Peace," because it was written without Palestinian representation and participation. Just days after Sadat's speech, the two sides began informal and sporadic peace talks that would eventually culminate in the signing of the Camp David Accords, the first formal agreement between Israel and an Arab nation. Tensions in the Middle East had continued unabated since the war between Israel and Egypt in 1967. In November 1967, the United Nations Security Council adopted Resolution 242. The resolution demanded the withdrawal of Israeli forces from the territories acquired during the war and the end of any claim or state of war between all nations or states in the region.
Egypt's recognition of Israel's right to a peaceful existence and the return of the land acquired during the Six-Day War remained prerequisites for peace in the region. After the Yom Kippur War in October 1973, the Security Council adopted Resolution 338, which called on the parties to enter into negotiations for a "just and lasting peace". Nevertheless, Egypt and Israel reached agreement on a number of previously controversial issues. The resulting Camp David Accords essentially contained two separate agreements. The first, titled "A Framework for Peace in the Middle East," set out a basis for negotiations over Palestinian autonomy in the West Bank and Gaza. Other parties, however, had their own motives in accepting and supporting Sadat's visit to Israel.
http://haz-matresponse.com/wp/the-camp-david-accords-were-agreements-between-iran-and-iraq/
This project is designed as a process study focusing on discovering, uncovering, developing and sharing the most effective innovative practices and ways of creating learning spaces and stimulating learning behaviour in European education today, with a keen eye on future needs and demands. In collaboration with 10 European front-runner schools and institutions we seek to tease out, through case studies, interviews, and workshops, what works for children and teachers and what new methods and approaches are being tried out. We will also raise awareness and discuss themes that parents, teachers, and school leaders face when they wish to create a different learning environment and need to tackle local and state regulations. All of this is to be captured in a book, co-written by the collaborating schools, institutions and educators. The aim of the book is to share stories, approaches, tools, methods and mindsets relevant for today's educators and changemakers in Europe, and perhaps beyond. In recent years a number of scientists, educators and researchers have pointed out that the current education system in the western world is designed to operate for the industrial age, which makes it unable to educate students to deal effectively with complexity, ambiguity and transformation - the factors that characterize our dynamic modern society. Therefore, if we want to help our children develop the skills needed in the approaching future, we have to be willing to try out practices and ways of learning and teaching that will result in a fundamental shift away from the way we understand and practice education today. The aim of this project is to support a future society in touch with its evolutionary purpose: a future where educational institutions and organizations are more holistic, purposeful, human-centered and seek their highest potential.
Karin Ivertsen and Sidsel Andersen

This project is rooted in a collaboration between Sidsel and Karin, who both, as part of their working and private lives, are curious about the dilemmas and opportunities in modern education and keen to understand what is being done and what could be done better in learning institutions.

Do you know a school, institution or teacher that we should talk with? Don't hesitate to contact us.

We believe it is necessary to create a fundamental shift in the way we educate - ourselves and others. We want to find out:

"How can we develop schools to become healthy, courageous and creative learning communities?"

"How can we preserve the good that is there already, build on to this, and take it to the next level needed for these times of change we live in?"

"What are the templates and ideas out there that can be used by others to improve and propel future education development?"
https://www.sidselandersen.com/responsiveness-in-education
EWMI’s crosscutting approach to promoting justice, civic engagement, and economic development programs often leads to the identification and nurturing of specialized program areas. EWMI is currently focusing on the following global priority issues as part of our Special Initiatives. Leveraging information and communication technologies across its civil society, justice and economic sector programs, EWMI prioritizes transparency as central to building stronger democratic societies. EWMI’s civic engagement programs have prioritized information and communication technologies as key drivers of open societies for years. Program activities promote a free and open Web and provide assistance to leading civil society groups to improve their impact, extend their reach, and enhance their security through technology. In 2012, EWMI created an “open data” shared network in Cambodia which unites disparate data collection efforts by individual groups advocating for social and environmental justice and allows them to share their data. The platform addresses urgent areas of concern in a transparent and politically neutral way. In the area of court administration, EWMI has been an international development leader in the delivery of technology solutions to justice institutions and the citizens who count on them. Through the design and installation of specialized software in scores of courts, and the training of thousands of judiciary professionals, EWMI’s appropriate technologies have answered the call of court managers seeking greater efficiency and accountability in the administration of justice. The human rights of many of the planet’s poorest people are indivisible from the natural resources upon which their lives depend. A forest left standing supports the livelihoods of thousands of families, protects the water and food resources of whole countries, and helps preserve the global environment.
Building upon the civil society work and grassroots initiatives it has supported in the past, EWMI’s environment program is designed to work with local citizens to find sustainable and community-based solutions to urban and natural environmental conservation issues. The program brings together our civil society, access to justice and private sector work in exciting and effective ways. We are strengthening communities and giving them the tools and support to interact with government and private companies to preserve their assets and create a sustainable outcome for all. EWMI takes a participatory approach to local environmental conservation and provides capacity building programs, technical assistance, and funding to support network-building and grassroots advocacy initiatives at the local, national, and regional levels. Since sustainable solutions must also ensure the business-government and business-community sides of the triangle, our economic development and corporate sustainability expertise is increasingly important. For more than ten years, EWMI has brought a rights-based approach to global health issues. EWMI programs in nearly a dozen countries address areas that are too often overlooked by traditional health programs, focusing on the inequities that can stall the resolution of some of the most intractable health issues. These inequities include discrimination which can block the delivery of health services to those in greatest need, unlawful business practices which allow counterfeit and substandard drugs to flood markets with impunity, and the violation of regulations which allows injury and disease linked to environmental causes. For example, in South and Southeast Asia, EWMI leverages the role of the justice sector to help take down barriers impeding the vital progress in the fight against diseases such as polio, malaria, and TB. 
Drawing on its years of experience with the justice sector, EWMI is targeting these risks to global health equity through new programs and approaches around the world. EWMI has always integrated conflict mitigation and peace-building strategies into its access to justice and civil society projects in post-conflict societies such as Bosnia, Cambodia, Kosovo, Serbia and Sri Lanka. Recently, EWMI’s work with communities affected by conflict in Fiji and Liberia has brought conflict mitigation into sharper focus. EWMI is promoting dialogue and tolerance in Fiji by bringing together community leaders, CSOs, religious leaders, and government officials to discuss key issues such as human rights, interethnic relations, and constitutional reform. In Liberia, EWMI implemented a two-year mediation and community outreach program in which EWMI and its key local partner, Prison Fellowship Liberia (PFL), educated community leaders and the public on the benefits of mediation to mitigate conflicts, provided mediation services to community members, and used mediation to address the overuse of extended pre-trial detention in the country. EWMI continues to support PFL’s efforts in this area, and continues to support efforts to build sustainable peace in other post-conflict societies.
http://www.ewmi.org/SpecialInitiatives
It’s challenging not to talk about Alaska’s national parks in superlatives—after all, the 49th state claims 60 percent of all land protected by the U.S. National Park Service. Alaska’s total national parkland protects more than 41 million acres (roughly the size of Wisconsin) and encompasses biomes ranging from temperate rain forests to arctic tundra. Within that acreage are the four largest national parks in the United States (Wrangell–St. Elias, Denali, Gates of the Arctic, and Katmai), the 10 highest peaks in the country, the longest tidewater glacier (the 76-mile-long Hubbard Glacier in Wrangell–St. Elias), and more. Alaska’s national parks also have the distinction of being the most remote (only three of the eight are accessible by road; the other five require a boat or air taxi) and the least visited (five of the top 10 least-visited national parks are in Alaska). Where to begin in this statistically remote—but profound—wilderness? During seven years living in Alaska, I visited each of its magnificent national parks (multiple times in the case of Kenai Fjords, Denali, and Wrangell–St. Elias) to learn about what makes each unique. Read on for our guide, so you can choose the best national park in Alaska to visit on your trip. Denali National Park and Preserve - Why go: To marvel at the highest peak in North America - Nearest town: Healy, population 1,096, 12 miles away (although there are seasonal shops, dining spots, and hotels a mile from the park entrance, in an unofficial community called “the Canyon” by locals) The most remarked-upon part of Denali is the 20,310-foot mountain for which the park was named (known to the Indigenous Athabascans as the Great One), but the 6-million-acre national park encompasses much more. What makes Denali so well loved is how egalitarian its adventures are.
If your idea of a good visit involves scanning the landscape for Alaska’s Big Five (bear, moose, Dall sheep, caribou, and wolf) from the comfort and safety of a bus while a certified guide dispenses nuggets of trivia, Denali can deliver. If you’d rather spend days bushwhacking through a complex boreal forest, hunting for an inspiring place to unfurl your sleeping bag, Denali can provide that, too. Other activities include hiking, rafting, and flightseeing (it’s even possible to land on Denali). It’s important to note that there’s only one road into Denali National Park. While the Denali Park Road is 92 miles long, you can’t drive your personal vehicle past Milepost 15. Beyond there, you need to be on a narrated tour or hiker shuttle. And during the 2022 season, the road will be closed after Milepost 43 due to a landslide. Where to stay in Denali National Park and Preserve: There are six campgrounds within Denali, but given the park’s popularity, it can be challenging to get a reservation. If you have wilderness know-how, you can get a permit and wild camp most places in Denali’s vast backcountry. If you’d rather not rough it, there are several accommodations just north of the park entrance (McKinley Chalet Resort and Denali Bluffs Hotel are favorites). Alternatively, two lodges are actually within the park, although due to their locations, both are fly-in only at this time. Historic Kantishna Roadhouse is 90 miles into the park, while luxurious Sheldon Chalet is high up on the mountain (overlooking Ruth Glacier), just 10 miles from the summit. Katmai National Park and Preserve - Why go: To photograph brown bears (there are more than 2,000) trying to eat their fill of salmon before winter - Nearest town: King Salmon, population 327 Katmai, on a remote peninsula in southern Alaska, holds more than 4 million acres, but what draws the most attention is a single six-foot-tall and 250-foot-wide waterfall.
Brooks Falls is famous for two things: hungry brown bears and spawning salmon. Each summer, hundreds of thousands of salmon try (and try again) to jump the falls en route to their spawning grounds farther upstream. Consequently, large populations of brown bears gather here to snatch the fish out of the turbulent waters or paw them out of midair, all in an effort to bulk up for hibernation. In peak season (usually late June through late July), as many as 50 bears may perch on the lip of the falls at any given time, and an estimated 300 salmon may attempt the leap every minute. Visitors can watch the spectacle from raised wooden platforms nearby. Another area of interest is the Valley of 10,000 Smokes. In 1912 the Novarupta volcano erupted, turning some of what is now Katmai into a landscape of smoking valleys, steam vents, lava flows, and ash-buried mountains. The post-apocalyptic terrain is the reason Katmai became protected land—the geothermal features are an important living laboratory. Operators like Brooks Lodge and Katmailand offer hiking tours of the Valley. Where to stay in Katmai National Park: Visitors to Katmai usually come for a day (Rust’s Flying Service, Regal Air, and others offer trips from Anchorage), though it is possible to stay the night. There are two options for camping in Katmai. The first is Brooks Camp. It’s on the shores of Naknek Beach; there aren’t designated sites, but there is a 60-person limit, and reservations tend to fill up months in advance. The second option is backcountry camping. No permit is necessary, but you must have all the gear you need to overnight safely in bear country, which is a lot. For more peace of mind, there’s Brooks Lodge, the only proper hotel in the park, with 16 rooms, all of which have two bunk beds. Wrangell–St. Elias National Park - Why go: To feel like a true explorer - Nearest town: McCarthy, population 28 The biggest national park in the United States is vastly underrated. 
Its 13.2 million acres (larger than roughly 70 of the world’s independent nations!) encompass everything from glaciers and tundra to temperate rain forests and volcanoes. For adventure lovers, this is the land of promise. The opportunities for exploration—be it hiking, biking, climbing, rafting, fishing, and beyond—are limitless. Ice climbing is a big draw for adrenaline junkies. It makes sense, considering that more than a third of the park is covered by glaciers (the largest of which is bigger than Rhode Island). Most of those glaciers feed into braided rivers and streams, so packrafting—similar to kayaking, but in an inflatable boat—is wildly popular in Wrangell–St. Elias, too. Even if you aren’t an avid outdoors person, one more thing might coax you into making the trip: history. Within 35 years in the early 1900s, nearby Kennecott went from a boomtown pumping out copper to a ghost town. Today, park rangers lead guided tours of the iconic red mill buildings and the surrounding townsite. Where to stay in Wrangell–St. Elias National Park: Within the park, some local favorites include Ma Johnson’s Historic Hotel in McCarthy, Kennicott Glacier Lodge in Kennicott, and the opulent 20-bedroom backcountry lodge Ultima Thule (which means “a distant or unknown region”). The latter may offer the most opportunities for exploring the park. Every stay comes with a private pilot and a Piper Super Cub plane so guests can spend their days packrafting on an alpine lake, hiking across glaciers with guides, picnicking in abandoned gold mines, or whatever other adventure they can dream up. Kenai Fjords National Park - Why go: To witness Alaska’s glaciers while you can - Nearest town: Seward, population 2,812 Kenai Fjords is one of Alaska’s most accessible parks—it’s a 2.5-hour drive from Anchorage, Alaska’s largest city. This national park got its name for its towering fjords, but it’s better known for the slow-moving glaciers that chiseled them over the course of many millennia.
The 700-square-mile, 23,000-year-old Harding Ice Field—and the more than 40 glaciers it currently feeds—is actually what earned the area national park distinction in 1980. More than 50 percent of Kenai Fjords is under ice. Exit Glacier sees the most visitors, largely because it’s the only one you can (almost) drive to. From the parking lot, it’s a two-mile round-trip walk to the toe of the glacier. It used to be closer—there are trail markers along the route that denote where the face of the glacier once sat, showing just how much the river of ice has melted over the years due to climate change. Another popular way to experience the park is by water. Day cruises (either half- or full-day) depart from the harbor in Seward from mid-March to mid-October. It’s a lovely way to spend a day; pleasure cruisers glide past seals sprawled out on rocky outcroppings, humpback whales seem to defy gravity as they breach, and tidewater glaciers spit growlers (small chunks of ice) and icebergs (massive chunks of ice) into the water below.

Where to stay in Kenai Fjords National Park: The good news: There’s a campground at Exit Glacier that’s free. The bad news: There are only 12 spots, they’re first-come, first-served, and people can stay up to two weeks, so it’s unlikely you’ll get a space. Your best bet is to find a place to stay in Seward. Downtown, try Hotel Seward. Harbor 360 Hotel overlooks the harbor, and if you opt for a day cruise, it will likely be steps away. Seward Windsong Lodge is a little outside of Seward, but it’s easily one of the best accommodations in the area.

Gates of the Arctic National Park and Preserve
- Why go: For the bragging rights of having explored the least-visited U.S. national park
- Nearest town: Coldfoot, population 268 (though from here you’ll need to book an air taxi into the park)

Imagine an area about the combined size of Connecticut and Vermont. Now take away all infrastructure and the vast majority of the people.
Then transport it to the tundra and add wild rivers, ribbons of unnamed granite peaks more than 7,000 feet tall, vast valleys, and herds of more than 200,000 caribou (as well as a large musk ox population and more than 145 species of birds). That should give you some idea of what Gates of the Arctic is like. Even though Gates of the Arctic is the second-largest park in the nation, it sees fewer people than any other protected land—visitorship is usually between 5,000 and 12,000 people each year. That’s not because it’s not worthwhile—it’s just exceedingly remote. No roads reach Gates of the Arctic, so visiting requires multiple planes. There are no maintained trails and nary a designated campsite. This is true wilderness. Given its far north location, daylight hours vary widely throughout the year. In the summer, the sun scarcely sets. In the winter, the landscape is lit only by the Northern Lights. Most visitors come with wilderness guides—this isn’t a place where you can wing it, even if you are proficient in the outdoors.

Where to stay in Gates of the Arctic: There are no lodges, cabins, or allotted campsites in Gates of the Arctic. Where you stay depends on where you find a spot to pitch your tent. Some (relatively) nearby options that involve wooden walls instead of nylon tents include Coldfoot Camp and Bettles Lodge. The latter also offers guided backpacking trips in the Gates of the Arctic.

Kobuk Valley National Park
- Why go: Hike the jewel-toned tundra, climb the Baird Mountains, or marvel at the sand at the Great Kobuk Sand Dunes
- Nearest town: Kotzebue, population 3,283

Truly the only thing that has changed this landscape over time is nature. As with Gates of the Arctic, Kobuk Valley has no roads, trails, or infrastructure. Which is to say: The wilderness here is magnificent and unspoiled. The lack of roads means the area sees one of the last great large mammal migrations on Earth.
Every year, half a million caribou travel between their calving and summering grounds in what is now Kobuk Valley National Park. It’s a ritual that has been going on for more than 10,000 years. While that migration happens on the tundra, the tundra isn’t the only biome within the park. You may be surprised to learn that North America’s largest Arctic dune field, Great Kobuk Sand Dunes, is here. The 20,500 acres of rolling dunes were formed over thousands of years as glaciers dragged across the landscape, pulverizing the rocks beneath them into sand. It’s a landscape so otherworldly that NASA has used it for training programs (and to better understand the environment of Mars).

Where to stay in Kobuk Valley National Park: Your only option is backcountry camping—there are no designated campgrounds or lodges. The National Park Service recommends contacting the Northwest Arctic Heritage Center for tips on the best places to make camp. There are a handful of options in Kotzebue, the closest town and likely where you’ll start and end your trip, including Nullagvik Hotel and Bibber’s B&B. Another option is to stay at Kobuk River Lodge in Ambler or Bettles Lodge in Bettles. Both are a flight away from Kotzebue but can arrange day trips into Kobuk Valley with local guides.

Glacier Bay National Park and Preserve
- Why go: Jagged peaks, cerulean glaciers, and marine animals, including sea otters and humpback whales
- Nearest town: Gustavus, population 493

Glacier Bay is dreamy. Here seven tidewater glaciers flow from the mountaintops and calve millennia-old ice into the sea. While that dramatic spectacle earned the park its name, it’s hardly all it entails. The park spans 3.3 million acres, encompassing a craggy coastline, protected fjords, snow-capped peaks, emerald-green forests, a Huna Tribal House, and such wildlife as mountain goats, porpoises, and sea birds.
The park is a favorite destination for cruise ships—there’s something both exciting and haunting about creeping through a waterway filled with icebergs of various shapes and sizes to get close to the face of the glacier. Granted, it’s important to remember that this park has witnessed one of the most dramatic examples of climate change: In the past 200 years, the ice has receded more than 65 miles.

Where to stay in Glacier Bay National Park: Interestingly, most people who visit Glacier Bay do so by ship and never set foot on parkland. But it is possible. As with many other protected lands in Alaska, accommodations can be challenging to come by and aren’t often in the park itself. From May 1 to September 30, it’s possible to camp within the park, either at the Bartlett Cove Campground or in the backcountry, provided you file for a permit and go through an orientation, in person or online. Another option in the park is Glacier Bay Lodge. The 56-room hotel is set among Sitka spruce trees, just feet away from the park headquarters. For something more luxurious, you may consider Gustavus Inn. It’s 10 miles from Bartlett Cove (by road) on a beautiful homestead.

Lake Clark National Park
- Why go: Brown bears—glorious, rotund brown bears
- Nearest town: Port Alsworth, population 200

Lake Clark is an enticing cocktail of glacier-covered mountains, two active volcanoes (Mount Iliamna and Mount Redoubt), scraggly coastline, and salmon-rich rivers that have long drawn a healthy population of brown bears to their shores. Like Katmai, Lake Clark is renowned for bear viewing, and many day-trippers shoot in to fill their camera memory cards with the animals feasting on fish. But you’d be remiss not to spend at least some time at the 42-mile turquoise lake for which the park was named. It’s wreathed by towering mountains and is an excellent spot to look for moose, fox, and Dall sheep or spend an afternoon fishing. Know that the park is only accessible by plane.
Operators like Rust’s Flying Service and Regal Air offer day trips to Lake Clark from Anchorage. Itineraries usually include roughly four hours on the ground and a packed lunch. However, backcountry lodges on the park’s outskirts also offer (longer) day trips for their clients.

Where to stay in Lake Clark National Park: Lake Clark has a surprising number of lodging options, given how remote it is. As in most national parks in Alaska, campers can pitch their tents almost anywhere they please within the park. The National Park Service also operates a handful of (very rustic) public use cabins. In the nearby town of Port Alsworth, there’s Alaska Backcountry Fishing Lodge, the Farm Lodge, Tulchina Adventures, and Wilder House Bed & Breakfast, to name a few places to stay.
https://www.afar.com/magazine/essential-alaska-national-parks-to-visit
The long-awaited Birds of Loudoun: A Guide Based on the 2009-2014 Loudoun County Bird Atlas is here! Beginning with an introduction to atlasing and a brief lesson on Loudoun’s geography, the book then dives into the results of the five-year Bird Atlas where 85 volunteer atlasers reported 262 species over the span of 5,900 field hours. Rare and exciting finds are highlighted (for example, the first confirmation of breeding Hooded Mergansers in Loudoun!) and comparisons are drawn between this dataset and data from the 1985-1989 Virginia Breeding Bird Atlas. Which of Loudoun’s birds have thrived amidst all the changes the county has undergone over the past 25 years? Which need our help the most? Loudoun’s most species-rich areas are revealed, with possible explanations for these somewhat surprising findings. The bulk of the book consists of accounts for each documented atlas species, generally a page in length for breeding birds and half a page for migrants and winter birds. The accounts provide information regarding the appearance, habitat, breeding behavior (when applicable), and conservation status. Written by 10 local birders, the accounts emphasize the species’ connections to Loudoun and include a distribution map. The general occurrence of the species in Loudoun is provided, indicating how likely the bird is to be detected in appropriate habitat in the correct season. Accounts of breeding birds highlight the earliest and latest dates that breeding was confirmed during the Bird Atlas. Non-breeding (winter and migrant) accounts highlight the earliest and latest seasonal sightings or northerly/southerly migration periods as documented during the Bird Atlas. The accounts are brought to life with stunning photographs taken by 18 mostly local photographers. Whenever possible, changes from the 1985-1989 Virginia Breeding Bird Atlas are discussed and trends from over 20 years of Central Loudoun Christmas Bird Counts are noted. 
Data from other local sources such as Snickers Gap Hawkwatch, Loudoun Wildlife’s bluebird trail monitoring program, and Banshee Reeks’ banding station are also incorporated, along with larger-scale trends from the North American Breeding Bird Survey and Partners in Flight. Birds of Loudoun concludes with a list of great places to bird throughout the county, highlighting possible species at each location. Suggestions for putting the data into action are included, along with recognition of the many volunteers who donated their time and talent to this substantial endeavor. Sample species accounts can be viewed here, along with a Birds of Loudoun Checklist and other atlas information. Orders for the book can be placed here for $34.95. Any questions, including about wholesale pricing, can be directed to Michael Myers at [email protected]. Please join us on Sunday, April 28, 3 p.m. at the Stone Barn at Morven Park for the book launch. Registration preferred but walk-ins welcome. Happy reading and happy birding!
https://loudounwildlife.org/2019/05/bird-lovers-the-wait-is-over/
Last week, I was honored to facilitate a workshop on best practices in crowd funding in Pittsburgh, PA organized by The Bayer Center for Nonprofit Management at Robert Morris University in conjunction with its TechNow Conference (where I delivered the keynote). I’ve already shared a blog post about the content related to “Best Practices for Nonprofit Crowd Funding” that includes case studies, examples, tips, and resource links. Instructional design is more than just delivering the content, so this post shares some of the thinking that went into designing a learning experience where people will apply what they learned. As a trainer and subject matter expert, I fight a big battle when designing a workshop: balancing the amount of content delivery with exercises, and right-sizing both for the available time. Given how busy nonprofit professionals are, most face-to-face workshops are a half-day and sometimes a whole day. That means you have approximately 3 hours, and the whole time should not be spent lecturing with a PowerPoint deck! I believe that workshops are an opportunity for nonprofit staffers to have some “thinking time” — to reflect and think about how the content applies to their specific situation. The learning objectives for the workshop were twofold:

- Participants understand best practices and how to apply them
- Participants develop a first draft of a crowd funding strategy for their organization

While I did a pretty thorough participant assessment survey before finalizing the content, instructional design, and materials, I always like to get a group understanding of the learning goals and get people ready to learn. Here I used a classic, simple technique of asking people to share burning questions in small groups and generated a list of the themes/questions with the full group. I’m always relieved when the questions match the content, but you also have to point out where you will go deep and what may not be addressed.
(Always good to leave time for Q/A at the end for those). The content focused on telling a couple of “campfire” stories with insights about best practices. This section was followed by taking people through a worksheet that breaks down, step by step, the process of thinking through a campaign – with both examples and small-group or individual exercises. This was the first 90 minutes of the workshop, and while content and interaction kept people in active learning mode, the next step was a synthesis. This is where design thinking methods and innovation lab facilitation techniques can be helpful. I thought the most helpful synthesis task would be a “design rationale.” If crowd funding is a new idea or innovation for an organization, it can fail to become a practice simply because the participant is not able to convey the concept’s full potential. So, while participants could have taken great notes and reported back to others at their organizations, having those notes plus being able to articulate how their organization might implement a crowd funding campaign gives the idea a better chance of implementation back at the office. I opted for a human-centered design technique called “Concept Poster,” a presentation format that illustrates the main points of a new idea. The format of the concept poster was more open than the closed-ended set of exercises and questions on the worksheet. Each poster needs a tagline, an illustration, key points, and additional detail. The concept poster can help someone synthesize their ideas, but its creator also has to be able to explain it to others – both visually and verbally. So, I gave participants quiet time to work on a concept poster and time for rapid presentations of the posters. The Concept Posters are also the artifacts of learning! First, if the participant takes their poster, notes, and worksheets, they can go back to their organization and share their ideas with others. It can be the first campaign strategy session.
Second, being a data geek – these artifacts of learning can help me assess participant understanding of the concepts of crowd funding – and how to improve the content for better understanding. Since I’m still iterating on this workshop, I also incorporated a simple verbal feedback exercise at the end. While you always want to do an evaluation survey so you can quantify the participants’ assessment of their learning, I find verbal feedback is like doing a mini-focus group – getting feedback from participants right afterwards is always extremely valuable. I got some fantastic ideas for modifying the content, exercises, and timings. The technique I use for getting verbal feedback on a workshop is to ask participants what they would keep, change, or delete. You have to avoid being defensive and remain neutral, especially if you hear that something you thought was brilliant was not perceived by participants as brilliant. You have to listen to every comment and accept it as a gift with enthusiasm and a smile. That is hard to do, but it is a good professional skill in many contexts beyond training. Here are some tips. If you feel that you might have a hard time not getting defensive, or you have time for a more in-depth debrief, you can do a written exercise and ask participants to reflect on what they learned and how they will apply it. To do this analysis, you need to ask for both positive and constructive criticism. The technique is called Plus/Delta. Here are some examples of the questions you can ask adult learners to write down and then discuss in small groups. You can also hand out index cards and ask people to write “positives” on one side and “please change” on the other. Or, have them fold 8.5 x 11 paper into quarters, number the spaces, and answer the questions or draw pictures.

- What really struck you as interesting, new, provocative, or meaningful during this workshop?
- What is one change that you can make in your practice or one idea that you will put into practice as a result of this workshop?
- What part of the workshop was most useful to your work?
- What part of the workshop should be changed to improve learning?

You can ask people to share it anonymously. Sometimes I ask them to include their names and I do a drawing for a free book during the learning culmination exercise (see below). After the session, I spend a few hours doing an analysis of what people wrote on the cards and use it to review my lesson plan. Incorporating exercises and ways for participants to have some thinking and synthesis time in your work can help improve the learning and its subsequent application. Also, if you want to get better at being a trainer, you need to get feedback from your participants. While evaluation surveys are great, they are only one form of feedback. How do you incorporate ways for participants to synthesize their learning? How do you get feedback from participants to improve what you’re doing? Share your story in the comments.
https://bethkanter.org/thinking-feedback/
How to maintain the fine line between independence and over-parenting

As a parent, there are few times in life quite as emotional as sending your child to college. In addition to feeling sentimental about the fleeting moments you have left together, you’re likely worried about whether or not he or she is prepared for this next transition – or better yet, if you are. The good news is that you’re not alone. Most parents struggle to navigate this challenging time in a way that sets their child up for success while still maintaining a reasonable level of parental oversight. My wife and I sent two children to college, and while it definitely came with some ups and downs, we survived – and you will, too. There are, however, a few things you can do to help ensure a smoother ride for everyone.

Life Skills: All kids should be equipped with some basic life skills when they enter college. If you feel like you’ve missed the boat on these, there’s still time. Here’s what to focus on right away:

- Laundry – Your child’s dorm will not come with a maid. Before he or she flies the coop, it’s time for a crash course in Laundry 101.
- Banking – No matter who’s paying for your child’s education and expenses, all college students should understand how to manage money. If you haven’t already done so, set up a bank account for your child and explain how to maintain it. Develop a system of accountability regarding spending, and maybe even encourage your child to take on a part-time job.
- Cooking and cleaning – If your student doesn’t plan to eat every meal in the cafeteria, it’s time to learn how to prepare at least a couple of easy recipes. Likewise, if your child is living in a suite-style dorm, he or she will need some tools for cleaning the bathroom.
- Car maintenance – If your child will have a car on campus, be sure to review the basics, including changing the oil and fluids and buying gas.
Communication: While your son or daughter may not currently be interested in your opinion, that could change in a new environment. When he or she reaches out, resist the urge to provide a solution. Instead, exercise your new role as a listener and a sounding board. You may want to agree on a communication schedule ahead of time. Perhaps you can chat or Facetime once a week. If your child attends school close to home, be sure to respect the boundaries. Despite how lonely you may be, your child needs space to develop friendships and experience independence. Resourcefulness: A major complaint among employers of recent grads is their inability to problem solve. In a society where helicopter parenting is the norm, many children lack the ability to be resourceful. When a problem arises, resist the temptation to solve it for your child. Instead, point out the many resources at his or her disposal. Today, colleges provide students with an abundance of amenities, from medical clinics to tutoring centers and counseling services. Self Care: For the child who is used to mom and dad taking care of most things, college life can be a bit of a shock. In the beginning, you may need to do a little handholding. Unlike high school, which maintains a fairly rigid schedule, college life offers freedoms that most kids are not accustomed to. Talk with your son or daughter about time management and scheduling, especially when it comes to studying, exercise, leisure and sleep. You should also have a frank discussion about safety and responsible alcohol use. Girls, in particular, should know how to protect themselves from date-rape situations and should be well versed on the campus transportation offerings. Not sure how to talk with your child about this sensitive subject? Here’s a great resource. Although this new transition may be tough, try to focus on the positives. Be proud of your parenting efforts and continue to support, love and listen to your child during this next phase of life. 
David Lowenstein, Ph.D. is a psychologist and the clinical director of Lowenstein & Associates, Inc. in Columbus, Ohio. In addition to providing therapeutic services to individuals and families, he offers training and consultation to numerous associations, schools and agencies around the country. Additionally, he is a frequent radio and TV guest and a resource and contributing writer for numerous newspapers and magazines nationwide. Contact Dr. David Lowenstein at 691 South Fifth Street, Columbus, Ohio, 43206, or call 614.443.6155 or 614.444.0432.
http://drlowenstein.com/2018/08/21/sending-your-child-to-college/
The British Academy for Training and Development presents this training course, The Theory of Realism in International Relations and its Perspectives on Global Interests. Realism is one of the major theories of International Relations, and it remains a wide field of study for scholars wishing to complete their specialization in International Relations. Despite the constant debate surrounding the Realist approach to political analysis, the fingerprints of the Realist school remain present in both political systems and political perspectives. There are many theories of International Relations, and Classical Realism is one of the most striking and most appropriate for the new global political order. It allows scholars to properly consider and interpret comprehensive perspectives on events, with an understanding of international transactions, while defining interactions among countries. International Relations are based on solid connections and mutual interdependence between countries. Each state considers only its own strength and security because it operates in a system based on self-defense. It seeks prestige among other states in the universal political regime, and especially the independence of its decisions, based on the compatibility of interests between countries. Modern International Economic Relations have changed widely in line with the vast modern alteration of the world order, so that every economic entity is largely integrated and interconnected with others in the global economic system.

The British Academy for Training and Development offers this course to the following categories:

After completing the program, participants will be able to master the following topics:
https://www.batdacademy.co.uk/course_details/1507/The-Theory-of-Realism-in-International-Relations-and-its-Perspectives-on-Global-Interests
Our client is a global, research-driven pharmaceutical company focusing on the development of prescription-only medicines, and they are a global market-leader within their therapeutic area. The company headquarters is in Greater Copenhagen, Denmark, with subsidiaries, production facilities and distributors worldwide.

THE POSITION
Our client is looking to bring on board a high-caliber International GCP Auditor/Clinical Quality Manager (Lead Auditor) to join the GQCM team. The role will be responsible for performing investigator site audits and supplier audits as well as internal GCP audits, independently or in close collaboration with other Lead Auditors at our client. As QA partner to the Global Pharmacovigilance and Clinical Development organization, the role will take the lead on the planning, execution, reporting and follow-up on audits while ensuring effective supplier management, including qualification of external collaborators and suppliers.

POSITION PROFILE
Position title: Lead Auditor, GCP
Reporting line: Senior Director, Global Quality Clinical & Markets
Location: Greater Copenhagen
Traveling: Approx. 50 days

Responsibilities & Tasks:
• Conducting internal GCP audits of Clinical Development systems and processes.
• Conducting external GCP audits of vendors, investigational sites and CROs to ensure that our client's studies are being performed and documented in accordance with the principles of GCP.
• Coordinating and analyzing audits conducted by third parties.
• Facilitating closure of audit findings, assisting with root cause analysis (RCA) and action plan development, driving corrective actions to completion and conducting effectiveness checks.
• Representing and supporting the Global Pharmacovigilance and Clinical Development organization during regulatory inspections.
• Supplier management, including qualification of external collaborators and suppliers.
• Ensuring regulatory surveillance for GCP and its impact on the Quality and auditing area, including providing guidance/direction to company Global Clinical Development staff on pertinent GCP standards to enable development of SOPs and compliant systems that fulfil GCP requirements for International and local Regulatory Agencies as well as ethics committees.
• Assisting with the development and conduct of GCP training for in-house and clinical investigational staff.
• Assisting with the coordination and implementation of inspection readiness activities.
• Advising Global Clinical Development on the implementation of effective, compliant, standardized processes and systems (QMS) that enable the company's clinical investigation studies to be performed in compliance with relevant country Good Clinical Practice (GCP) Regulations.
• Drafting inspection readiness plans and coordinating implementation of inspection readiness activities.

Key success criteria:
• Seamless collaboration and interaction with internal and external stakeholders.
• Successful planning, conduct, reporting and follow-up of internal and external GCP audits.
• Completing tasks and responsibilities to the highest professional standards while ensuring that all work is accomplished with quality and in accordance with company values.

CANDIDATE PROFILE
Educational background: Preferably an M.Sc.-level degree within natural science or similar (not a requirement).
Language: English – fluent (verbally and written).

Ideal experience and competencies:
• 5+ years of experience within clinical development regulated areas, including experience with Quality Management Systems and tools.
• Preferably experience as Lead Auditor within clinical development and/or pharmacovigilance (internal audits of HQ and affiliates and external audits of collaborators and suppliers).
• Sound understanding of Good Clinical Practices as interpreted by the global regulatory authorities.
Personal competencies:
• Team player: Is a flexible and empathetic team player. Sees opportunities rather than limitations. Collaborates, shares information and works well with others with a view to obtaining team objectives.
• Communication: Possesses excellent oral and written communication skills in English. Communicates the central issues in a discussion in a clear, fluent and precise manner, while being able to keep the recipients' attention. Produces written material which is clear, fluent, precise and easy for the recipients to understand. Attentive to the needs of others when he/she speaks.
• Stakeholder management: Proactive mindset; navigates smoothly across multiple organizational layers of stakeholders. Is able to establish and maintain relations with people at all levels – internally and externally – and makes people feel at ease.
• Flexible/Adaptive: Ability and willingness to quickly adjust to new situations in a continuously developing environment. Navigates smoothly when faced with late/last-minute changes in customer demands, requirements etc. Finds a way forward, reallocates resources, sets new priorities, and ensures alignment across the organization.
• Quality-minded: Is concerned with reaching and maintaining quality; sets high standards for performance and execution – for oneself and for others.
• Intercultural understanding: Able to communicate with people from other cultures; relates to problems as seen from other cultures' perspectives.
• Take control/action-oriented: Makes sure employees/co-workers have a clear understanding of the direction of the tasks; takes action; organizes resources and directs others toward successful execution of the tasks; drives projects forward to reach pivotal objectives; makes things happen and follows through.
• Structure: Works systematically; organizes work and effort; is methodical in one's performance.
• Judgement: Makes rational, realistic and sound decisions based on the available facts and possibilities.
• Initiative: Is proactive, able to inspire others, and takes own initiatives without hesitation. Actively seeks influence and starts up actions on own initiative.
• Independent: Is independent and self-assured; has a realistic belief in own abilities to take suitable measures in the execution of tasks; expects success regarding own initiatives; able to maintain momentum in case of adversity.

For further information: Please contact Sebastian Brabrand, Research Associate, Albright Partners A/S at:
https://www.albrightglobal.com/executive-positions/LEAD-AUDITOR%2C-GCP
Question: Do you like to be alone or surrounded by other people?

Answer: I like to be alone. But I have a goal that is very important to me, and for its sake, I leave my room and come to people. As a result, it turns out that throughout a year, I fly around the globe.

Question: What is the use of being alone?

Answer: Alone I can develop individually: read books and articles. I absorb the sources, work on the material, and analyze the phenomena personally, so that no one interferes. But beyond that, I have to go outside to communicate with other people because there is a goal that obliges me. The more I give them and help them, the more I can absorb from them. It is written: “I learned from all my students.” In fact, to the extent that I want to convey my knowledge and all the good to them, through the same channel, I get their impressions that teach me. It turns out that mutual communication begins between us: I teach my students and I learn from them; that is, I become a student. This approach is possible with respect to any area of human life. The time has come when we must communicate with each other in such form; this is called the era of the Messiah, the last generation.

Question: What is the correct relationship between being alone and interconnected with other people?

Answer: Everyone should feel the right relationship individually, as the saying goes: “The soul of man will teach him.” He will feel that he is obliged to go outside, spread his knowledge, help others, and also receive from them. A person realizes that he needs a new kind of connection.

Question: Where will mankind of the 21st century go: to solitude or to be together?

Answer: In the 21st century, there will be both. Above all individuals there must be another layer that covers everything, where we are together. That is, there will be complete harmony: loneliness for the sake of unity and unification for the sake of individuality, one for the sake of together and together for the sake of one.
Between these concepts there will be no contradiction, only mutual support. Now we see that these concepts are in contradiction—a man secludes himself and does not want communication. That is, the one principle still collides with the principle of together. But when we get to the critical state in this matter, humanity will start looking for a way out, a method for uniting people, which nature requires of us. Otherwise, nature is out of balance and we await climatic catastrophes and wars. People will be unable to build families and live together. The face of the generation will become very cruel because it will be incapable of uniting the two principles together: individuality and unity. And then, people will want to learn the methods of unification that Kabbalah offers. Question: How can a person check what is at the point of equilibrium? Answer: Balance is a harmony between these two areas in which a person will feel how nature is revealed before him. He will open his eyes to a new dimension. Inside the combination of these two layers: individuality and complete unification, a new world is revealed to him where all opposites unite and replenish each other. Question: Will a man of the future be an individualist? Answer: The whole individuality of the future man will be designed to support one man, that is, a common system united as one man. The individuality of each will be expressed in that it will include everyone. Question: That is, he will not care about personal benefits, but about the benefits of society? Answer: The personal benefits of each will be inseparable from the benefits of society. We will all feel like one man, as the supreme power that unfolds within all of us, filling us with mutual bestowal and love and revealing the higher reality to us. I wish for us to achieve this kind of life as quickly as possible!
https://laitman.com/2017/07/alone-in-a-crowd-of-a-billion-part-7/
Across the globe, we are experiencing the unprecedented. The world we knew before COVID-19 has been permanently upended. Our lives are forever split in two: before coronavirus and after. In these extraordinary times, philanthropy must respond with extraordinary measures. The Indonesian organizations on the frontlines that champion essential workers and build the systems that will be central to an equitable recovery are at risk. Civil society is the central pillar of Indonesia’s democracy and is at the forefront of the work to reduce the inequality that threatens the future of the country. Though impacted themselves, our civil society organization (CSO) partners continue to work tirelessly to serve the communities they work with, such as people with disabilities, laborers, farmers and indigenous people, women and children. Konsorsium Pembaruan Agraria (Consortium for Agrarian Reform), for example, has initiated a solidarity-based economic action that connects small-scale food producers (farmers, indigenous communities, local farmer organizations, unions) with priority consumers. Current priority consumers include laborers, the urban poor, fishermen and informal workers who were affected by COVID-19. By connecting producers directly to priority consumers, high food distribution costs are reduced, enabling vulnerable communities to get what they need. Aside from farmers and laborers, people with disabilities in Indonesia are also among the most impacted population groups, and they are not in the same boat when it comes to accessing relief and social protection assistance programs. Based on the latest survey, in 2015 there were 21.84 million people with disabilities in Indonesia, equivalent to 8.56 percent of the Indonesian population. The absence of a unified national database has limited disabled people’s access to the US$49.63 billion COVID-19 emergency fund provided by the government. 
The National Coalition for the Implementation of the Disability Law, of which our grantee the Indonesian Association of Women with Disabilities (IAWD) is part, has decided to enhance its advocacy on the need for national disability data. IAWD will help develop a data collection system that will involve the national and district/city governments, village administrations as well as disability organizations and communities in identifying and registering people with disabilities into the population and civil registration system, which would provide them with a citizen ID number. In pursuing this work, IAWD is working together with the National Development Planning Agency (Bappenas), the Social Affairs Ministry and the Home Ministry. This collaboration will be a big step in realizing the integration of disability data at the local and national levels to ensure more effective targeting of social and economic protection programs and the exercise of civil rights for people with disabilities. Another CSO partner, Kemitraan, is working with the Cooperatives and Small and Medium Enterprises Ministry to develop a guide on how to safely run a business, while Himpunan Serikat Perempuan Indonesia (Hapsari) is working to educate the public on COVID-19 related gender-based violence against women, children and persons with disabilities. As we can see, civil society organizations in Indonesia are significant contributors to the response to social and economic inequality and the negative impacts of the pandemic. However, because of COVID-19, civil society faces an overwhelming need paired with an existential threat. Many organizations do not have cash reserves and already, furloughs and layoffs are hitting the sector — hard. The world’s vital nonprofits are in jeopardy. If we do nothing, the economic toll alone will be devastating. Losing civil society organizations would hurt Indonesia’s economy as well as its social and political fabric. 
Our challenge is not to save any particular organization; it is to save the soul of civil society itself. The Ford Foundation recognizes that this once-in-a-century crisis — and the overwhelming need to emerge from it with a more just and equitable society — requires a once-in-a-century response. For the first time in our foundation’s history, we have authorized $1 billion — financed through the sale of social justice bonds — to help stabilize and strengthen civil society in the 10 regions where we work around the globe. These funds are in addition to our annual grantmaking budget and will be disbursed over the critical years of 2020 and 2021. Together with the government and the private sector, we must assess the damage caused by inequality laid bare and exacerbated by the pandemic. And as we fund these vital recovery efforts, we must reimagine our systems, our economy and our culture for the better. We have heard from civil society organizations about their concerns around the sustainability of future funding. And in response, we are exploring how to use the newly released funds to create an endowment fund for civil society strengthening in Indonesia. The goal would be to increase the institutional capacity and financial resiliency of the organizations that are vulnerable due to the impact of COVID-19. It is our hope that the fund will support the long-term growth of civil society organizations by helping to strengthen institutional governance and build financial independence. The fund will support efforts to increase and diversify funding sources from donor agencies, develop revenue from services to third parties, create social enterprise business units and conduct public fundraising. For our vision to be successful we need the partnership of multiple actors to help create and support the fund and empower Indonesian civil society. 
We are inviting our partner philanthropic institutions, bilateral and multilateral donors, the private sector as well as the government to cocreate this critical infrastructure to develop the capacity of civil society organizations. Only together will we recover from the pandemic and establish a resilient and resourceful Indonesia. We recognize the need to act with fierce urgency to support our nonprofit partners as they support the individuals and communities hit hardest by the impacts of this pandemic. *** Darren Walker is the president of the Ford Foundation and Alexander Irwan is the regional director of the Ford Foundation in Jakarta. Disclaimer: The opinions expressed in this article are those of the author and do not reflect the official stance of The Jakarta Post.
https://www.thejakartapost.com/academia/2020/07/23/civil-society-endowment-fund-a-call-for-collaboration.html
UN chief António Guterres called for windfall taxes on oil and gas this week, arguing it is “immoral” for fossil fuel companies to reap record profits while ordinary people suffer from a cost of living squeeze. In recent weeks, oil and gas companies have reported bumper profits. BP reported profits of $8.45bn between April and June this year – more than triple the amount it made at the same time last year. Exxon Mobil, Chevron, Shell and Total reaped $51bn between them and returned $23bn to shareholders in dividends and buybacks, according to Reuters. “This grotesque greed is punishing the poorest and most vulnerable people, while destroying our only common home,” Guterres said during a media briefing on Wednesday. “I urge all governments to tax these excessive profits, and use the funds to support the most vulnerable people through these difficult times.” The IMF agreed that governments should shield the most vulnerable from price hikes but discouraged broader consumer subsidies. Living costs for European households will rise by 7% on average in 2022, the IMF projects, with the poor hardest hit. Several European governments have used price controls, tax cuts and subsidies to ease the impact of inflation – in some cases funded by windfall taxes. In a blog post, Oya Celasun, assistant director of the IMF’s European department, wrote that policymakers “should allow the full increase in fuel costs to pass to end-users” to encourage energy savings and moving away from fossil fuels. “Governments cannot prevent the loss in real national income arising from the terms-of-trade shock,” said Celasun. She added that governments should provide targeted relief for the most vulnerable groups, for example in the form of income support. Fully offsetting the cost of living increase for the bottom 20% of households would cost governments 0.4% of GDP on average for the whole of 2022. It would cost 0.9% of GDP to fully compensate the bottom 40% of households, the IMF calculates. 
“The IMF and UN are both clearly conscious of the need to protect the most vulnerable consumers but they disagree on who should bear the costs of doing so,” Olena Borodyna, a transition risk analyst at ODI, told Climate Home News. “Fundamentally, the two have a different position on how to encourage low-carbon transition – the IMF prefers market solutions and wants to incentivise consumers towards energy efficiency. The UN, on the other hand, is siding with the position of climate activists and politicians who are making a moral case for taxing fossil fuel companies amid the cost of living crisis,” Borodyna said. “The UN and IMF stances on the energy crisis are refreshingly compatible. We need both: windfall taxes on oil and gas profits; and a shift to policies that subsidise people, not energy,” Chris Beaton, the lead for sustainable energy consumption at the International Institute for Sustainable Development (IISD), told Climate Home News. “Energy subsidies are notoriously awful as a way to provide social assistance, disproportionately benefitting the people who buy most oil and gas (typically higher income groups) and locking in wasteful carbon-intensive consumption,” said Beaton. More effective and fairer policies for tackling energy poverty include swapping subsidy spending into public services such as health and education or providing low-income households with cash transfers, he said. “This empowers people to put assistance into whatever part of their budget they find most useful, and can work as a short-term crisis measure.” “The IMF might make a good economic point, and governments should not rush to give out blanket support measures to wealthy businesses and consumers that do not need them, but they are completely tone deaf in terms of implications for consumers experiencing the impact of the energy crisis,” Ipek Gençsü, senior research fellow at ODI, told Climate Home News. 
“The problem is that it’s not just the low-income that are currently suffering, it is a much bigger segment of the population. So supporting consumers is easier said than done in the current crisis,” she said.
https://www.climatechangenews.com/2022/08/04/un-imf-disagree-on-who-should-foot-the-bill-of-the-energy-crisis/?utm_campaign=Daily%20Briefing&utm_content=20220805&utm_medium=email&utm_source=Revue%20newsletter
DIR: Sarah Gavron • WRI: Abi Morgan • PRO: Alison Owen, Faye Ward • DOP: Eduard Grau • ED: Barney Pilling • MUS: Alexandre Desplat • DES: Alice Normington • CAST: Carey Mulligan, Helena Bonham Carter, Meryl Streep, Anne-Marie Duff, Brendan Gleeson The distinct lack of films depicting the Suffragette movement in cinema since the silent era is unsurprising. Despite a host of documentaries and television movies exploring one of the most pivotal events in women’s history, cinema has predominantly shied away from the subject, possibly under the (mis)conception that suffrage is now irrelevant and contemporary audiences are better placed aligning their sympathies with more pertinent, identifiable social struggles. While most of the silent era films have been lost, those that survive delineate a collective portrait of aggressive, defeminized termagants, whose abandonment of traditional gender roles created havoc within existing social structures, allowing cinema to engage in negative propaganda and persistent stereotypes. Sarah Gavron’s ambitious interpretation of British women’s suffrage follows its foot soldiers’ highly charged campaign for social change in London, circa 1912. Penned by The Iron Lady writer Abi Morgan, Suffragette, originally entitled The Fury, makes no apologies for its categorical feminist perspective, honouring the forgotten working-class women who fought to secure the right to vote and stand in political elections. Carey Mulligan stars as working-class washerwoman Maud Watts, who is persuaded to join the movement despite disapproval from her loving husband and lascivious boss. Under the encouragement of local pharmacist and seasoned activist Edith (Helena Bonham Carter), downtrodden cockney Violet (Anne-Marie Duff) and the watchful leadership of Emmeline Pankhurst (Meryl Streep), Maud finds herself engaged in a flurry of violent, illegal activity to increase media publicity for the cause. 
Soon her defiant activism compromises her family and job, and with the guileful police inspector Steed (Brendan Gleeson) determined to derail her efforts, Maud is forced to choose between her old, subordinated life and continuing the bloody fight for emancipation. A compelling and propulsive no-holds-barred interpretation, Suffragette does not shy away from accentuating the extreme subversive tactics employed by the bastions of the women’s movement in the face of frenzied, brutal opposition. Delving into the psyche and spirit of the era through a bold cinematic vision, Gavron pumps a thumping rush of furious energy into the inflammable, character-driven narrative, which steamrolls along at a ferocious pace, creating a palpable, nervous edginess that perfectly conveys the pervading social unrest of the era. Captured through a highly subjective, restless feminist lens, with many of the action sequences shot in medias res, the camera belligerently probes and taunts to heighten the claustrophobic milieu of a disordered society on the brink of immense social change. Determined to redress the balance of stereotype and negative connotations aligned with suffragette identity, Gavron welcomes a heady mix of heterogeneous characters that broadly traverse the social spectrum, ranging from impoverished skivvies to grand privileged dames, with specific emphasis on working-class women. Granting her leading ladies their own weighty biography, which stands in opposition to the commonly assumed portrait of masculine, subversive harridans or well-to-do socialites, Gavron succeeds in making visible and humanizing the unknown combatants who have been long forgotten or erased by history. Carey Mulligan, at the helm of the action, plays the reluctant activist with an understated but deeply intense emotional power, her face, persistently framed in confined close-ups, etched with invisible scars from years of oppression, abuse and interminable struggle. 
Although Maud’s dissatisfaction with her lot propels her to action rather than any informed political leanings, aligning her more with the affluent socialites of the time who turned to the cause out of boredom rather than socio-political motivation, it is her transformation from a politically ignorant subordinate to an enlightened, mettlesome mutineer that reinforces the film’s core message. Maud’s political education and her awareness of the failings of the law align the movement’s insurgent tactics to its political ambitions, rooting a more tangible comprehension of its history for contemporary audiences. By merging the political with the personal through an accessible narrative, Gavron reaches the nucleus of its ideology, redressing the manipulation of suffrage identity and situating Maud and her cohorts as more representative of the collective rather than the unfeminine disputants in over-sized hats so often assumed. While Maud’s characterisation succeeds in making visible diverse identities across the class divide, Gavron fails to delineate a balanced perspective on the movement in its entirety. Ethnic minorities, such as Indian women, were particularly active in British suffrage, and in light of the film’s overly feminist perspective, it loses some narrative weight by advocating an exclusively white agenda, which somewhat reinforces the stereotype she is fervently trying to avoid. Also noteworthy is the lack of attention to women who subscribed to an anti-suffrage ideology, largely on the basis of sexual difference, but it is the director’s incendiary polemic on her male characters that is most questionable; she appears to view them through feminist revisionism rather than suffragist revisionism, two distinctly disparate political ideologies. The women in the film may be angry but Gavron is furious. 
While the inhumane treatment and sexual humiliation experienced by the suffragettes is represented with immense emotional power, Gavron explicitly indulges in masculine stereotypes, pejoratively promoting an anti-male perspective, her all too few sympathetic male characters withdrawing support once it impinges on domestic life. Male supporters who championed the movement are also disregarded, particularly those equally subjected to discriminatory laws by failing to meet specific property requirements. To Gavron, suffrage in Britain was an elite white, female club only. The strength of Suffragette lies in its compelling portrait of British working-class women, which roots the political to the personal through an engaging narrative, impressive production values and superb performances, allowing contemporary audiences to easily identify with a more coherent suffragette ideology not previously seen in cinema. The promotion of an overly subjective, feminist narrative detracts, at times, from the perspicuous portrait of working-class women, and it is a shame that Gavron’s over-magnification of Maud’s narrative does not locate it within a wider social context nor take into account the active participation of other social groups and political supporters. Despite such narrative oversights, Suffragette’s supreme message is unequivocal, quashing the notion that suffrage is irrelevant (a detailed list of the countries that have attained suffrage and those still seeking it accompanies the closing titles) and affirming that the fight for emancipation is far from over.
http://filmireland.net/2015/10/16/review-suffragette/
Silk-Derived 2D Porous Carbon Nanosheets with Atomically-Dispersed Fe-Nx -C Sites for Highly Efficient Oxygen Reaction Catalysts. Controlled synthesis of highly efficient, stable, and cost-effective oxygen reaction electrocatalysts with atomically-dispersed Me-Nx -C active sites through an effective strategy is highly desired for high-performance energy devices. Herein, based on regenerated silk fibroin dissolved in ferric chloride and zinc chloride aqueous solution, 2D porous carbon nanosheets with atomically-dispersed Fe-Nx -C active sites and very large specific surface area (≈2105 m2 g-1 ) are prepared through a simple thermal treatment process. Owing to the 2D porous structure with large surface area and atomic dispersion of Fe-Nx -C active sites, the as-prepared silk-derived carbon nanosheets show superior electrochemical activity toward the oxygen reduction reaction with a half-wave potential (E1/2 ) of 0.853 V, remarkable stability with only 11 mV loss in E1/2 after 30 000 cycles, as well as good catalytic activity toward the oxygen evolution reaction. This work provides a practical and effective approach for the synthesis of high-performance oxygen reaction catalysts towards advanced energy materials.
BIOGRAPHY - Page 4 Shortly after Laynia's funeral, a new Darkstar appeared among the Winter Guard, wearing her classic costume and wielding the Darkforce. The Russian government saw value in the public reputation established by the heroic Darkstar and devised a means of infusing Laynia's DNA with genetically compatible candidates to create viable replacements. This process imbued the subject with a facsimile of Laynia's Darkforce powers, which they were able to wield in conjunction with her original amulet. The amulet strengthened the replacement's connection with the Darkforce aspect of Laynia's lifeforce, which apparently did not perish when Laynia was killed by Fantomex. The first candidate to take up the mantle of Darkstar was Sasha Roerich. Sasha found herself tormented by memories belonging to Laynia. When Sasha died in battle trying to fully harness the Darkforce, she was quickly replaced by Reena Stancioff, who was christened the third Darkstar by the Russians. Like her predecessor, Reena was given Laynia's power amulet, which houses remnants of the original Darkstar's psyche. [Hulk #1, Hulk: Winter Guard] Laynia made a brief return to the land of the living - sort of - when the X-Men's foe Selene used her power to resurrect many mutants and infuse them with a version of the techno-organic virus during her quest to become a true goddess. Darkstar was among Selene’s army of the living dead and part of the attack force sent to Utopia. There, she encountered Iceman, who was forced to battle the woman he once cared for. Selene‘s plot was foiled, and while some of the resurrected mutants managed to escape a return to death, Laynia was not one of them. [“Necrosha”] Reena Stancioff was doing a commendable job as the third Darkstar, despite her own insecurities. But when Fantasma mysteriously returned to Earth from Limbo, Vanguard and the surviving members of the Protectorate followed her. It was discovered that Fantasma was actually a Dire Wraith. 
The Citadel, headquarters of the Winter Guard, was a former Dire Wraith stronghold, and deep in its catacombs were Dire Wraith eggs, which Fantasma intended to hatch. To complicate matters, the Presence was being kept captive at the Citadel and, thanks to Fantasma, escaped his cell. Although Vanguard despised Reena, the Protectorate and Winter Guard were forced to unite to battle the Dire Wraith hatchlings and the Presence. Tragically, Reena was killed by a Dire Wraith, who then took over her body to enhance its power.
https://uncannyxmen.net/node/5756/page/0/4
The U.S. Supreme Court will rule next year whether some states, including Georgia, still need federal oversight of how they conduct elections. The case concerns the Voting Rights Act, which protects minority voters by monitoring election map changes in some states. The Republican National Convention in Tampa begins Monday and will bring together some of the party’s biggest political heavyweights. Political experts say Georgia is a prominent state in the delegation ‘stack-up’ but not as important as this year’s swing states. Attorneys general from Georgia and Alabama have applauded a 2-1 ruling by the U.S. Court of Appeals for the District of Columbia that overturned a regulation clamping down on power plant pollution that contributes to unhealthy air crossing state lines. A Georgia city is challenging the state's new sunshine laws in response to an open meetings lawsuit state Attorney General Sam Olens filed against its mayor. The Fulton County Daily Report reports that Olens' lawsuit is the first under Georgia's new Open Meetings and Open Records acts. The lawsuit states that Cumming Mayor Ford Gravitt and police barred Nydia Tisdale from videotaping an April 17 city council meeting. The Georgia attorney general has asked the state's highest court to expedite an appeal in the case of a death row inmate whose execution was delayed this week. The Georgia Supreme Court on Monday granted a stay of execution to Warren Lee Hill so it could consider an appeal by the inmate over Georgia's recent switch from a three-drug combination to a single-drug execution method. A landmark Supreme Court decision regarding the constitutionality of President Obama’s health care plan could be handed down any day now. Attorney General Sam Olens met with healthcare policy advisors to discuss the impact on Georgia. The state is accusing Cumming’s mayor of violating the Open Meetings and Records Act by preventing a resident from filming a city council meeting. 
In a civil complaint filed this week, Georgia attorney general Sam Olens says Mayor H. Ford Gravitt told resident Nydia Tisdale she couldn’t record an April council meeting. He then asked a policeman to forcibly remove her. Georgia's attorney general has filed a lawsuit against a suburban Atlanta city and its mayor for alleged violations of the state's open meetings law. The law permits visual and sound recording during open meetings, but the suit claims Cumming’s mayor ordered a woman to stop taping a city council meeting in April. Volunteers who work with children in Georgia will be required by law to report suspected child abuse starting next month. The new mandatory reporting requirement is part of a criminal justice reform law that Gov. Nathan Deal signed last month.
Definitions:
- Constructed Response Items - Items that typically require students to construct a written response to a question or prompt.
- Formative Assessments - Ongoing checks on student understanding to inform instruction and provide feedback to students.
- History Labs - Classroom investigations of an overarching or essential question about an event, person, or concept in history. Students use multiple primary and secondary sources to develop their interpretations. Evidence-based interpretations are presented to support conclusions and respond to the overarching question.
- Performance Assessment Tasks - The application of student understanding in the context of performing a real or authentic task.
- Selected Response Items - Items in which students select from a range of possible answers; examples include multiple choice, true/false, weighted multiple choice, multiple true/false, matching, and fill in the blank (with word bank).
- Summative Assessments - Cumulative instructional evaluations of overall student achievement to measure mastery.
- Traditional Multiple-Choice Items - Multiple-choice items that consist of a stem (typically a question or incomplete statement) and four possible responses, only one of which is correct.
- Weighted Multiple-Choice Items - Multiple-choice items in which there are several possible choices that are weighted according to accuracy and defensibility.
https://www2.umbc.edu/che/arch/assessment.php
She [Mrs. Bennet] was a woman of mean understanding, little information, and uncertain temper. (Ch. 1) Characterization-A description of Mrs. Bennet’s personality. She was an unintelligent woman and didn’t want to know anything. She was also easily upset. "She is tolerable; but not handsome enough to tempt me; I am in no humour at present to give consequence to young ladies who are slighted by other men. You had better return to your partner and enjoy her smiles, for you are wasting your time with me." (Mr. Darcy to Mr. Bingley about Elizabeth Bennet; Ch. 3) Irony-Mr. Darcy is really attracted to Elizabeth Bennet. It is ironic that he points out all her faults and ends up attracted to her. "Vanity and pride are different things, though the words are often used synonymously. A person may be proud without being vain. Pride relates more to our opinion of ourselves, vanity to what we would have others think of us." (Mary; Ch. 5) Satire and Foreshadowing-Vanity and pride are the basic premises of the book. Pride is going to cause the characters in the book to do many stupid things. "If a woman is partial to a man, and does not endeavour to conceal it, he must find it out." (Elizabeth, about Bingley; Ch. 6) Foreshadowing-Elizabeth is foreshadowing that Jane will lose Bingley if she does not let him know her feelings. "A lady's imagination is very rapid; it jumps from admiration to love, from love to matrimony, in a moment." (Darcy to Miss Bingley, Ch. 6) Characterization-Darcy is giving insight into his character by the way he speaks of women in general. "Nothing is more deceitful," said Darcy, "than the appearance of humility. It is often only carelessness of opinion, and sometimes an indirect boast." (Ch. 
10) Characterization-Darcy is defining one of the themes of the book, pride through false humility, and explaining how he really feels about people. Indirectly, he is defining his own character. "I have made no such pretension. I have faults enough, but they are not, I hope, of understanding. My temper I dare not vouch for. It is, I believe, too little yielding—certainly too little for the convenience of the world." Conflict-The quote illustrates how Elizabeth and Darcy really felt about each other. "I had not thought Mr. Darcy so bad as this—though I have never liked him. I had not thought so very ill of him. I had supposed him to be despising his fellow-creatures in general, but did not suspect him of descending to such malicious revenge, such injustice, such inhumanity as this." (Ch. 16) Characterization and Foreshadowing-Explains how people view Mr. Darcy and the rumors spread by Mr. Wickham. It also gives a clue as to the suspicion that Mr. Wickham may not be as he portrays himself. "It is your turn to say something now, Mr. Darcy. I talked about the dance, and you ought to make some kind of remark on the size of the room, or the number of couples." (Ch. 18) Conflict-The underlying tone of the conversation betrays the constant bickering between Elizabeth and Darcy. "Mr. Wickham is blessed with such happy manners as may ensure his making friends—whether he may be equally capable of retaining them, is less certain." (Ch. 18) Characterization-Mr. Bennet is sarcastically describing the personality of Mr. Wickham; reading between the lines, it is clear that Mr. Bennet does not approve of him. "I remember hearing you once say, Mr. Darcy, that you hardly ever forgave, that your resentment once created was unappeasable. You are very cautious, I suppose, as to its being created." (Ch. 18) Foreshadowing-It gives a clue into how the couple are going to have to forgive each other and let pride take a back seat. 
"There are few people whom I really love, and still fewer of whom I think well. The more I see of the world, the more am I dissatisfied with it." Mood/Tone-It gives insight into how Jane Austen feels about the world. "And is this all?" cried Elizabeth. "I expected at least that the pigs were got into the garden, and here is nothing but Lady Catherine and her daughter..." (Ch. 28) Satire-Gives us a glimpse of how silly Mr. Collins is about position and Lady Catherine de Bourgh. "I like her appearance," said Elizabeth, struck with other ideas. "She looks sickly and cross. Yes, she will do for him very well. She will make him a very proper wife." (Ch. 28) Imagery and Satire-Elizabeth humorously describes Lady Catherine de Bourgh's daughter and how well-suited she is for Mr. Darcy, because his personality is as ill-suited as his betrothed's health. "This is the estimation in which you hold me! I thank you for explaining it so fully. My faults, according to this calculation, are heavy indeed!" Conflict-The source of Elizabeth and Mr. Darcy's conflict is made evident by this conversation: pride and prejudice. "You are mistaken, Mr. Darcy, if you suppose that the mode of your declaration affected me in any other way, than as it spared the concern which I might have felt in refusing you, had you behaved in a more gentlemanlike manner." Conflict-As the conversation progresses, the conflict reaches its peak as each cuts the other in the harshest way. "You have said quite enough, madam. I perfectly comprehend your feelings, and have now only to be ashamed of what my own have been. Forgive me for having taken up so much of your time, and accept my best wishes for your health and happiness." (Ch. 34) Foreshadowing-Darcy is saying this to Elizabeth after her refusal of his marriage proposal. Little does he know that her refusing him makes them well suited for one another. Their health and happiness are intertwined. 
Her astonishment, as she reflected on what had passed, was increased by every review of it. That she should receive an offer of marriage from Mr. Darcy! That he should have been in love with her for so many months! So much in love as to wish to marry her in spite of all the objections which had made him prevent his friend's marrying her sister, and which must appear at least with equal force in his own case—was almost incredible. Irony-Darcy discourages his friend from marrying Jane, yet Darcy himself wants to marry Elizabeth. "Be not alarmed, madam, on receiving this letter, by the apprehension of its containing any repetition of those sentiments or renewal of those offers which were last night so disgusting to you." Mood/Tone-Darcy displays his hurt by the way he writes the letter. "Nobody can tell what I suffer! — But it is always so. Those who do not complain are never pitied." (Mrs. Bennet, Ch. 20) Characterization-Mrs. Bennet turns the focus on herself without thinking about her daughter's turmoil. "When I consider," she added, in a yet more agitated voice, "that I might have prevented it! — I who knew what he was. Had I but explained some part of it only — some part of what I learnt — to my own family! Had his character been known, this could not have happened. But it is all, all too late now." (Elizabeth; Ch. 46) Hyperbole-Elizabeth's reaction to Lydia's elopement. "But how little of permanent happiness could belong to a couple who were only brought together because their passions were stronger than their virtue, she could easily conjecture." (Ch. 50) Foreshadowing-the prediction that Lydia and Mr. Wickham will not be happy in their marriage. "You are too generous to trifle with me. If your feelings are still what they were last April, tell me so at once. My affections and wishes are unchanged, but one word from you will silence me on this subject for ever." (Ch. 58) Climax-The resolution of the plot is foreshadowed while we wait for Elizabeth's answer to Darcy's proposal. It was painful, exceedingly painful, to know that they were under obligations to a person who could never receive a return. (Ch. 52) Theme-Pride is once again defeated by an obligation. "You have widely mistaken my character, if you think I can be worked on by such persuasions as these. How far your nephew might approve of your interference in his affairs, I cannot tell; but you have certainly no right to concern yourself in mine. I must beg, therefore, to be importuned no farther on the subject." (Ch. 56) Characterization-It gives a clue as to how prideful and domineering Lady Catherine de Bourgh is, and how little that intimidates Elizabeth. "We will not quarrel for the greater share of blame annexed to that evening," said Elizabeth. "The conduct of neither, if strictly examined, will be irreproachable; but since then, we have both, I hope, improved in civility." Plot resolution-Elizabeth admits her part in the conflict with Darcy. "When I wrote that letter," replied Darcy, "I believed myself perfectly calm and cool, but I am since convinced that it was written in a dreadful bitterness of spirit." Plot-Darcy admits his part in the conflict with Elizabeth. "As a child I was taught what was right, but I was not taught to correct my temper. I was given good principles, but left to follow them in pride and conceit." Plot Resolution-Darcy explains his mistakes and offers an apology. "It has been coming on so gradually, that I hardly know when it began. But I believe I must date it from my first seeing his beautiful grounds at Pemberley." Satire-Elizabeth to Jane, making fun of her change in opinion of Mr. Darcy.
https://www.freezingblue.com/flashcards/print_preview.cgi?cardsetID=262646
Liv Chun’s Design. Liv Chun is a designer. She works for theaters, studios and private clients as a costume and stage designer, and she draws and paints in her spare time. Her drawings are mainly composed of portraits, plants, animals and all kinds of patterns. Besides traditional materials such as pencils and watercolors, she likes to mix different media to give form to her art: gold leaf, ink, texture pigments, technical pens, gouache and so on. The expressions of her female figures are ethereal, yet the details within the scenes are abundant. She conveys the meaning of her art through the subtle transformation of certain patterns, inviting people into deeper thought. Thanks to her background in stage design, she likes to bring the spatial flexibility of the stage into her art, creating several dimensions within one simple page. Even though some of her works are black and white, the light, tones and shades speak for themselves. Moreover, her works carry surrealist and profound meanings. Her liberal use of traditional oriental tools and motifs, such as ink, Chinese painting, calligraphy, the Japanese ensō and old kimono patterns, gives her artwork a fantastic power dotted with oriental charm. Her work does not try to reconstruct reality; it is a study of the world, human nature, emotions, sensual experiences and moods. Her inspiration comes from dreams, the natural sciences, human history, surrealism and her everyday life in Rennes, France. Different inspirations bring different themes into her works. Yet the theatre remains the mainspring of her art, its most important source of inspiration and spiritual experience, for her as both viewer and designer: the viewer's role directly influences the topics of her works, making them tell a story.
https://www.artpeoplegallery.com/liv-chuns-design/
Interculturality can be seen as the ability to interact with people from different cultural backgrounds, using authentic language appropriately and in a way that demonstrates knowledge and understanding of their cultures. It is also the capacity to experience the culture of another person and to be open-minded, interested and curious about that person and culture. To put it more clearly, we can understand the word “culture” as referring to the products, practices and perspectives of a target group of people or target culture. “Interculturality”, on the other hand, is the interaction of people from different cultures and the understanding of another culture, such that the language used is appropriate to the context and audience. In this sense, being competent in interculturality depends not only on having cultural knowledge but also on attitudes, beliefs, values and interpersonal skills. For instance, a person can know the language yet not know how to interact with a specific audience, and therefore fail to be understood. In the modern world interculturality is becoming increasingly important, as many people migrate from one country to another to escape conflict, persecution or poverty, or to reunite families. Interculturality is also a useful resource for facing problematic situations related to stereotypes, prejudice and discrimination, which can lead to episodes of racism and xenophobia. There are several social and personal benefits promoted by interculturality: 
- Promotion of cultural diversity and cultural adaptation 
- Facilitating the approximation process among cultures 
- Enhancing cultural enrichment and creativity 
- Facilitating the social and labour integration of migrants 
- Providing strategies to manage stereotypes, prejudices and discrimination 
Regarding the purpose of this handbook, it can be said that there is a skills gap in interculturality and intercultural competencies in the management of Work-Based Learning. 
At the same time, young migrants and ethnic minorities are underrepresented in WBL. To overcome this negative situation, WBL professionals need to be equipped with adequate skills and tools to support young migrants and ethnic minorities: the kind of skills and tools that are directly linked with interculturality. That is, VET and in-company teachers, trainers and mentors need to acquire the competencies and tools to manage cultural diversity, promote migrants' participation in WBL and their allocation to training places, and prevent possible situations of discrimination. The proper incorporation of interculturality in education should encompass the four pillars of education established by UNESCO: learning to know, learning to do, learning to live together, and learning to be. 
Intercultural awareness 
Intercultural awareness is defined as the ability to examine one's own culture and the other cultures involved. It can support better understanding of, and exchange with, people from other cultures. In this sense, developing intercultural awareness is about learning to recognize and deal with the differences between cultures in perceiving the world: comprehending the meaning and influence of culture and cultural identity on an intrapersonal level, developing awareness of cultural differences and links, and building a critical attitude towards intolerance. 
Becoming interculturally aware may help VET and in-company teachers, trainers and mentors to: 
- Develop and understand themselves and their own cultural background 
- Understand that other people have points of view different from their own 
- Respect other people's beliefs, values and expressions of their culture 
- Understand the meaning and influence of culture and cultural identity 
- Understand the meaning of, and differences between, the concepts of stereotype, prejudice and discrimination, and identify strategies for their management 
It can be said that intercultural awareness is the most relevant competence regarding interculturality. From a certain point of view, it is the basic competence required for the proper development of the other intercultural competences. 
Cultural and ethnic minorities 
As mentioned before, VET and in-company professionals should develop intercultural awareness as a key competence for managing interculturality in WBL. This competence will help them to comprehend the meaning and influence of cultural identity, be aware of cultural differences and maintain a critical attitude towards any form of discrimination. It must be highlighted at this point that cultural and ethnic minorities will probably be the main target groups when managing interculturality in WBL, at least from the project perspective. 
Cultural and ethnic minorities constitute the most vulnerable groups in the framework of the LINK-INC project, since they face particularly difficult situations that hamper their participation in WBL programmes: 
- strong difficulties with social and cultural adaptation, including intolerance and xenophobia 
- fewer opportunities in employment and education systems, and weak personal and professional networks 
- specific motivations and needs, not necessarily the same as those of larger groups 
There are different definitions of what constitutes a cultural/ethnic group, but a number of features can be attributed to these groups: a collective name of their own, shared myths of origin, and cultural characteristics, such as language, religion, traditions and customs, that distinguish a given group from others. What makes an ethnic group a minority, moreover, is a numerically and politically non-dominant position in a state of which its members are citizens. The majority of European countries have ethnic minority populations. These minorities can be national (Albanians in Kosovo), transnational (Roma), indigenous (Scots, Corsicans) or migrant (Maghreb migrants in France or Turkish migrants in Germany). 
Stereotypes, prejudice and discrimination 
A clear explanation of these concepts is offered by social psychology and the so-called “ABC principle”: Affect, Behaviour and Cognition, applied here to interculturality as seen in figure 4, which shows how these three components influence and feed back into each other. Figure 4. ABC principle on interculturality. Source: Social Psychology Principles, 2012. The Cognitive component in our perceptions of members of another culture is the Stereotype. A stereotype can be defined as the positive or negative beliefs that we hold about the characteristics of social groups or cultures. For instance, we may think that “Muslims are violent”, “the French are romantic”, or “old people are boring”. 
And we may use those beliefs to guide our actions toward people from those groups or cultural backgrounds. In addition to stereotypes, we may also develop Prejudice, which is the Affective component. Prejudice can be seen as an unjustifiable negative attitude toward a group or the members of a culture. It can take the form of disliking, anger, fear, disgust, discomfort, even hatred: the kind of affective states that can lead to external behaviours. In this sense, stereotypes and prejudices are problematic because they may lead us to a type of Behaviour that we call Discrimination: an unjustified negative behaviour toward members of a group or culture based on their group or culture membership. The competence of intercultural awareness should include the ability to identify stereotypes, prejudices and discriminatory behaviours, as well as the capacity to counter them by managing and applying the appropriate strategies. In this sense, WBL professionals should be aware of these concepts and be able to: 
- Analyse and evaluate the impact of stereotypes, prejudices and discriminatory behaviours in oneself 
- Identify strategies for the management of stereotypes, prejudices and discriminatory behaviours 
- Put into practice strategies that facilitate the approximation process with other cultures
http://link-inc.eu/online-center/handbook/unit-2-theoretical-context-interculturality/
Background: Cognitive decline significantly contributes to disability in older individuals. We previously demonstrated cross-sectionally that arterial stiffness [pulse wave velocity (PWV)] was associated with memory loss independently of traditional cardiovascular risk factors and of neuroimaging findings in older individuals without prior stroke. The present study aimed to evaluate PWV as a predictor of longitudinal changes in cognitive function in older individuals reporting memory problems. Participants and methods: We studied 102 older individuals (mean age 79 +/- 6 years; 31 men, 71 women) reporting memory problems. PWV was measured noninvasively by Complior. Traditional cardiovascular risk factor levels were measured. Global cognitive function was measured by the Mini-Mental State Examination (MMSE) (maximum score = 30) at baseline and at follow-up visit. Cerebral computed tomography evaluated the presence of microvascular damage or cortical atrophy. Individuals with prior stroke or atrial fibrillation were excluded. Results: The baseline MMSE was 22.9 +/- 5.5; 61% were hypertensive, 26.8% diabetic, 9.4% smokers, 10.5% taking statins, and 21.1% taking nitrates. The average PWV was 13.5 +/- 2.2 m/s. After a median follow-up of 12 months, the average per-year decline in MMSE was 2.9 points or 12.1%. Multiple regression models showed that PWV independently predicted cognitive decline (model R2 = 0.50). PWV was the single strongest predictor of cognitive decline, explaining 15.2% of the total variance (each 1 m/s increase in PWV was associated with an average 0.74 per-year decrease in MMSE score, P < 0.001). Conclusion: In older individuals, arterial stiffness (PWV) is a strong predictor of loss in cognitive function, independent of age, sex, education, and traditional cardiovascular risk factors.
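The abstract's key coefficient can be made concrete with a small worked example. This is an illustrative sketch only (not the study's data or code): it simply applies the reported association of a 0.74-point per-year MMSE decline per 1 m/s of PWV; the function name and the use of the cohort mean (13.5 m/s) as a reference point are my own assumptions.

```python
# Reported association: each 1 m/s increase in PWV corresponded to an
# average 0.74-point per-year decrease in MMSE score (P < 0.001).
COEF_MMSE_PER_MS = -0.74  # MMSE points per year, per m/s of PWV

def predicted_extra_decline(pwv: float, reference_pwv: float = 13.5) -> float:
    """Extra per-year MMSE decline relative to the cohort mean PWV (13.5 m/s)."""
    return COEF_MMSE_PER_MS * (pwv - reference_pwv)

# A hypothetical patient 2 m/s stiffer than the cohort average:
print(round(predicted_extra_decline(15.5), 2))  # → -1.48
```

Read against the baseline figures, an extra 1.48 points per year is substantial: the cohort as a whole already declined 2.9 points per year from a mean MMSE of 22.9.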
https://pubmed.ncbi.nlm.nih.gov/17414668/?dopt=Abstract
Vista Rehabilitation is a thesis project that explores the possibilities and effects that design, and its relation to nature, has on healing and the overall health of a structure's inhabitants. In times of sickness and injury, time is spent with professionals in environments that are meant to repair our health in a timely, painless, sensible manner. These environments can be designed to promote a healthy mind and body through a strong connection between the built environment and its natural surroundings. Many factors are involved in our body's biological recovery time, and many of these biological responses are induced through our emotions, which can be influenced to some degree by the spaces that surround us. The spaces we inhabit, and their exposure to natural elements such as sunlight and fresh air, contribute to a patient's recovery process. The typology chosen to explore this topic is a physical rehabilitation center, to be located in Bemidji, Minnesota. Successful completion of this project will give a better understanding of what patients need in order to recover from physical injury or surgery in the most efficient, effective manner possible. Health care facilities are often perceived as sterile, artificial environments. By designing spaces that are mentally stimulating, accomplished in part through their connections with nature, a more holistic level of well-being can be maintained by patients.
https://library.ndsu.edu/ir/handle/10365/22973
The Ancient DNA Lab provides support for researchers working with ancient, historical, forensic, or other sensitive (low DNA quantity/quality) genetic samples. Biosafety Level 2 (BSL-2) laboratory BSL-2 is suitable for work involving agents that pose moderate hazards to personnel and the environment such as Escherichia coli and MRSA. Laboratory personnel have specific training in handling pathogenic agents, access is restricted and all procedures are conducted in BSCs or other physical containment equipment. Biosafety Level 3 (BSL-3) high-containment laboratory BSL-3 is applicable to research facilities where work is performed with biological agents that have the potential to pose a severe threat to public health and safety. Laboratory personnel receive specific training in handling potentially lethal agents. All procedures must be conducted within BSCs. The CDC regulates and inspects BSL-3 laboratories that conduct research on Select Agents. Caporaso Bioinformatics Lab The Caporaso Bioinformatics Lab is an applied bioinformatics lab focused on microbiome research, software development, and bioinformatics education. Genetic Analysis Instrument Lab Real-Time PCR, developed in 1992, is used to exponentially amplify a single copy of a segment of DNA. The Real-Time instrument allows for the detection of the anticipated DNA target region during the early phases of the reaction. PCR Amplification Lab Traditional PCR was developed in 1983. It’s used to exponentially amplify a single copy of a segment of DNA, then once the reaction is complete uses agarose gel electrophoresis to check whether the reaction successfully generated the anticipated DNA target region. TGen North Pathogen and Microbiome Division TGen North focuses on diagnostic, analytic, forensic, ecologic and epidemiologic research of microbes important to medicine, public health and biodefense.
https://in.nau.edu/pmi/pathogen-microbiome-institute/facilities/
The Iron Age is the fourth age available to a player in the game of DomiNations. It is obtained by paying 16,000 gold in the Town Center. This age is succeeded by the Classical Age and preceded by the Bronze Age. This is the age in which players can choose a specific nation to gain a unique unit and nation powers. Historical Description The Iron Age is a period of history marked by major advancements and the use of iron and steel for tools. Depending on the region, it lasted from roughly 1,200-100 BC(E) to 200 BC(E)-800 AD/CE. Before this period, iron was not well understood and its properties were thought to be weaker than those of bronze. During this age, however, iron came to be recognized as the stronger material and replaced bronze (Bronze Age) in tools for agriculture and combat. Iron is a transition metal with the symbol Fe, for Ferrum (Latin), and atomic number 26. Since iron is more ductile and malleable than bronze, it came to be used in tools and weapons. Iron is smelted in specially designed furnaces and hot-worked; unlike bronze and tin, it is more difficult to smelt. The Iron Age was also a period of a changing society. Major and complex religions were established, such as Greek polytheism and ones that survive today (Space Age), such as Judaism, Hinduism and Chinese folk religion. Language also became more complex, producing some of the world's first literature, such as the Sanskrit texts of India. The chapters of the Old Testament of the Holy Bible and the famed Indian Vedas were written in this era. Empires grew, with the Persian Empire taking over Mesopotamia. In Southern Europe, cultures changed dramatically, mainly for the Greeks and Romans. The Greeks set up massive cities, established trade routes and formed their own small states; conquest was also a goal for both the Romans and the Greeks. This era is now known as the Classical Age. 
Advancements After advancing to the Iron Age, players are given a new set of buildings to build, technologies to research, and units to create. Buildings that become available in the Iron Age include caltrops, the catapult, the garrison, another tower, a set of walls, another set of caravans and farms, another set of roads, and a temple. In the blacksmith, you can upgrade the soldier to the hoplite (vandal if Germans, bushi if Japanese, legion if Romans), the bowman to the composite bowman (longbowman if British, chu ko nu if Chinese), and the horse raider to the heavy horse raider, and you can receive the horseman (chevalier if French, companion if Greeks).
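The blacksmith upgrades described above are essentially a lookup table: a default Iron Age unit, overridden by a unique unit for certain nations. A minimal sketch of that table follows; the unit and nation names come from the wiki text, but the data model and function are purely illustrative, not anything from the game's actual code.

```python
# Iron Age blacksmith upgrades: base unit -> upgraded unit, with
# nation-specific unique units replacing the default where listed.
IRON_AGE_UPGRADES = {
    "soldier": {"default": "hoplite", "Germans": "vandal",
                "Japanese": "bushi", "Romans": "legion"},
    "bowman": {"default": "composite bowman", "British": "longbowman",
               "Chinese": "chu ko nu"},
    "horse raider": {"default": "heavy horse raider"},
    "horseman": {"default": "horseman", "French": "chevalier",
                 "Greeks": "companion"},
}

def upgraded_unit(base: str, nation: str) -> str:
    """Return the Iron Age upgrade of `base` for the given nation."""
    options = IRON_AGE_UPGRADES[base]
    return options.get(nation, options["default"])

print(upgraded_unit("bowman", "Chinese"))   # → chu ko nu
print(upgraded_unit("soldier", "French"))   # → hoplite (no unique override)
```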
https://dominations.fandom.com/wiki/Iron_Age
Remote auditing is a necessity now, but even after COVID-19 eases, most accounting firms expect their engagement teams to do a significant part of their work remotely. Managing the client and the audit team, supervising the work and keeping the engagement on track is challenging in the best of times. With the client and team members isolated from each other and working from home, those challenges grow considerably. Even the best-run firms are reporting hits to realization of 10% or more caused by the remote work environment. Over time, individual managers and supervisors will learn what works and what doesn't and adjust their management and supervision techniques to be more effective. The goal of this webcast is to accelerate that learning process, enabling your managers and supervisors to more quickly become adept at managing their teams remotely. Original, illustrative video examples drive this learning event and help participants connect the lessons to real-world application. Highlights Challenges of working remotely. Best practices for remote supervision of individuals. Remote team management. Prerequisites None. Designed For Partners, managers and supervisors working in a remote auditing environment. Objectives Identify and plan for the challenges individuals face working from home. Apply best practices for supervising and providing guidance to individual team members working remotely. Identify and plan for the challenges of managing teams and client personnel working remotely. Apply best practices for managing and guiding remote teams to successfully complete a project. Preparation None. Notice None. Leader(s):
https://app.wscpa.org/cpe/036598mb:remote-auditing-management-and-supervision-webinar
I work at Badru LLC as a Marine Cargo Inspector; the job earns me enough money for my life. I have a lot of hobbies, such as reading, watching movies and listening to music. When I have free time, I love to watch movies; my favorites are The Good Guy and Percy Jackson & the Olympians: The Lightning Thief. I listen to music every day, even while working. My favorite genres are Country and Pop, and my favorite songs are "Over" by Drake and "Telephone" by Lady Gaga featuring Beyoncé. 
Fake Name: Priscilla Nazziwa 
Address: 22695 Pink Extensions Apt. 233, Mubende, CO 
Latitude & longitude: 61.94968, 10.978218 
Phone: 0759220148 
Social Security Number: 231-76-6820 
Date of Birth: 05/25/1978 (42 years old) 
Height: 70.866 inches (1.8 meters) 
Weight: 124.08 lbs (56.4 kgs) 
Gender: Female 
Hair Color: Gray 
Eye Color: Brown 
Ethnicity: Asian 
Blood Type: AB 
Financial & Banking Numbers 
Exp Date: 11/20 
CVV: 775 
Paypal: [email protected] 
Account Balance: 1437.67 
Total Consumption: 6285.87 
Preferred Payment: Credit card 
Bank Account Number: 895357 
IBAN: UGANDA65159422856342492928823384 
Internet Details 
Username: robyn.barton 
Password: T",~O| 
Email Address: [email protected] 
Unique User Identifier (UUID): 2021a404-8e73-30e5-b06f-5f0acdda1f61 
IP Address (IPv4): 232.57.236.124 
IP Address (Local): 10.60.246.70 
MAC Address: 2A:C7:83:A7:E9:A3 
IP Address (IPv6): 7ef2:d4be:7fa5:e6ab:783d:7c93:6bd4:d6d5 
User Agent: Mozilla/5.0 (compatible; MSIE 11.0; Windows 95; Trident/3.1) 
Fake Company & Employee 
Fake Company Name: Badru LLC 
Company Email: [email protected] 
Company Size: fewer than 10 employees 
Salary: USD 61,572.00 per year (USD 29.32 per hour) 
Employee Title: Marine Cargo Inspector 
More information 
Family Members: 4 
Civil Status: Married 
Favorite Color: silver 
Favorite Movies: The Good Guy; Percy Jackson & the Olympians: The Lightning Thief 
Favorite Genres: Country, Pop 
Favorite Songs: "Over" by Drake; "Telephone" by Lady Gaga featuring Beyoncé 
Favorite Books: 
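A profile page like this one is typically produced by drawing each field from a pool of candidate values with a seeded random generator, so the same profile can be regenerated on demand. The following standard-library sketch is purely illustrative: the field names mirror the page above, but the value pools and function are my own invention, not this site's actual generator.

```python
import random
import uuid

# Illustrative value pools (names borrowed from the page above).
FIRST = ["Priscilla", "Robyn", "Badru"]
LAST = ["Nazziwa", "Barton", "Chun"]
COLORS = ["silver", "gray", "brown"]

def fake_profile(seed: int) -> dict:
    """Generate a reproducible fake profile from a seed."""
    rng = random.Random(seed)  # seeded, so the same seed yields the same profile
    first, last = rng.choice(FIRST), rng.choice(LAST)
    return {
        "name": f"{first} {last}",
        "username": f"{first.lower()}.{last.lower()}",
        # Random 128 bits coerced into a valid version-4 UUID:
        "uuid": str(uuid.UUID(int=rng.getrandbits(128), version=4)),
        "ipv4": ".".join(str(rng.randint(0, 255)) for _ in range(4)),
        "favorite_color": rng.choice(COLORS),
    }

profile = fake_profile(42)
print(profile["name"], profile["ipv4"])
```

Seeding is the key design choice: it lets a site serve a stable URL per profile (regenerating it from the seed) without storing every generated record.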
https://www.ultiname.com/view-profiles/5f266bc4cb9320.95067841
A common strategy schools deploy to help students be successful is to create and maintain cross-functional teams of educators focused on ensuring the success of, often, a targeted subset of students. These teams are called a variety of things but they are often known as student support teams. Teachers, student services staff, administrators, parents, and sometimes professionals from the community come together to discuss academic and social-emotional issues that, most often, are advanced through a referral process. This convening provides a positive way to look at a student holistically and to ensure both a comprehensive and collaborative approach to student progress. As Helen Keller said, “Alone we can do so little; together we can do so much.” The agenda for these meetings tends to look the same in many schools and districts across the country. First, the data related to the concern are presented and discussed. Data may reflect a host of issues, whether academic, behavioral, social or emotional. Members of the team provide insight on the topic. An action plan to help the student is created and one of the members is charged with operationalizing the support process. In a couple of weeks, the team reconvenes to review the action plan and the student’s progress. Often the resultant “success plan” produced for a given student is predicated on an important assumption. When the robust team of cross-functional educators devises a plan, the team assumes or, at least, hopes that the student will engage in the solution presented. Success is what we want for all students; however, there are times when there is misalignment between the student’s underlying need and the planned intervention, so the likelihood of an improved performance is often compromised from the start. Frustrated, but likely unsurprised, the success team members reconvene to establish new interventions for the student, unsure if they will work. 
Perhaps there is something else in this process that teams need to consider? What if we revisited the concept of “team meeting” and restructured the approach to supporting student success? What if the notion of a student support team were to be inclusive of the student in need? Let me suggest a change towards a student-centered approach to building an effective support team. As educators, we have a lot of data on student performance and achievement. Look beyond the raw data around a student’s grades, test scores and attendance and try to identify meaningful patterns and personal realities that a student may be experiencing. These realities may include a focus on meaningful personal connections, career ambitions, or recent significant life events. By understanding these facets, we can reveal a more complete picture of the student and, consequently, a broader set of tools for responding to their needs. Consider all of these factors and others you may add to the list: - Attendance – Is the student frequently missing school and are there patterns of absenteeism that need further explanation, e.g. is the student always missing just one certain class or missing at a certain time of day? - Grades — Is the student struggling with the same content area year after year? When was the student last successful in that specific content area? - Connections — Is the student known by adults in the school and involved in clubs, athletics, fine arts activities and/or community service? Is the student employed or engaged with other organizations outside the school and does the student have significant family responsibilities? - Career — Does the student have an interest they would like to pursue after high school? Are teachers made aware of this on behalf of the students they teach? - Significant Life Events — Has the student recently experienced a significant personal change, a change at home or elsewhere outside of school? 
This information provides a more holistic view of the student and can better surface what may really be undermining achievement. The gathering of information needs to continue by engaging the student in order to do the following: - Review the holistic picture of the student’s situation, including any data that reflects prior years’ academic or social-emotional challenges. Ask the student to share previous strategies that supported them effectively. - For the current year, target one area in which the student feels they could benefit from extra support. - Encourage the student to help shape a personalized support team to help tackle the identified challenge. Even if you can’t assemble every member of the student’s “dream team,” it is helpful to know who, they feel, could be effective in supporting them. - Collaboratively, structure a joint action plan and target follow-up date. To avoid the aforementioned misalignment between student needs and planned interventions, it is important to include the student in the process of data-collection, context-setting and dialogue. Whether the student is engaged before the student success team meeting or is actually invited to the meeting, the plan is likely to be more effective if the student is a participant in the process, e.g. if the student is literally a member of the student success team. To optimize the impact of your student success team, be sure you take time to know the student’s story — and the most impactful players within that story — so that your plan will meet them where they are. Including the student as an essential member of this framework and expanding our notion of what inputs are relevant to a student’s achievement unleashes greater potential for success.
https://intellispark.com/blog/building-a-student-centered-team/
What are the Effects of Warming Up and Cooling Down?

Warm Up

A warm-up routine is essential to raise the body temperature by using all the major muscle groups, thereby increasing blood flow and the elasticity of muscle tissue and allowing more oxygen to be carried to the working muscles. This prepares the body for the activity to follow, improves performance and reduces the risk of injury. The warm up is a technique designed:

- To prepare the body for competition or conditioning exercise.
- To reduce the possibility of muscle injury or soreness.

The warm up should include exercises that prepare the muscles to be used and activate the energy system required. The warm up should also be related specifically to the activity that follows. For instance, sit-ups or push-ups are not useful as a warm up for running in a football game. Instead, jogging or run-throughs are the best preparation.

Warming up produces beneficial physiological changes:

- There is an increase in the blood flow through the muscle as the small blood vessels dilate, and therefore an increase in the local temperature of the muscle and the oxygen supply ... to the strong contracting muscles (such as the Quadriceps).

Cold antagonistic muscles relax slowly and incompletely when the agonists contract, therefore retarding free movement and accurate co-ordination. At the same time, the force of the contraction of the agonists and the momentum of the moving part exert a great strain on the unyielding antagonists. Without a warm up, this may lead to the tearing of the muscle fibres or the tendons.

Cool Down

Just as one gradually increases the amount of work prior to strenuous exercise, it also makes sense to gradually decrease the amount of work following sports training.
Following intense activity, blood has been diverted to the working muscles and has a tendency to "pool" in the extremities, especially the legs. Light rhythmical activity involving those muscle groups will help the blood return to the heart and prevent pooling and the consequent dizziness and nausea. Stretching activities during the cool-down will also help prevent muscle soreness following exercise. Gradually bringing the body back to normal also helps psychological wind-down and promotes mental relaxation at the end of the exercise session, allowing time to consider the feeling of satisfaction and benefit that exercise can bring.

Alternately contracting and relaxing leg muscles pumps extra blood through your body. When you stop suddenly after exercising vigorously, your leg muscles stop pumping and your heart has to pick up the extra work. To make your heart beat faster and stronger, your body increases production of its own natural stimulants, adrenaline and noradrenaline. This can cause the heart to beat irregularly, depriving your brain of adequate oxygen, so you feel dizzy and can even pass out. People with heart disease can develop irregular heartbeats.

Cooling down does not prevent muscle soreness. It increases circulation and helps to clear lactic acid from your muscles at a faster rate, but muscle soreness after exercise has nothing to do with lactic acid accumulation. It is due to muscle damage caused by exercise. So, you cool down to prevent dizziness, not muscle soreness.

Duration of Cool Downs

It takes your body approximately 3 minutes to realize it does not need to pump all the additional blood to your muscles. A safe cool-down period is at least 3 minutes, preferably 4-5 minutes. All cool downs should be followed by stretching of the muscles to avoid soreness and tightness.

This student written piece of work is one of many that can be found in our AS and A Level Anatomy & Physiology section.
https://www.markedbyteachers.com/as-and-a-level/physical-education-sport-and-coaching/what-are-the-affects-of-warming-up-and-cooling-down.html
Artemis - Novel Telecommunications

Title: Artemis - Novel Telecommunications
Length: 00:07:18
Language: English
Footage Type: Documentary
Copyright: ESA

Description: In the summer of 2001, ESA's next-generation telecommunications satellite, Artemis (Advanced Relay and Technology Satellite), will be launched on Ariane 5. The ESA TV Service has produced a series of three pre-event Exchanged Programmes with background footage on this satellite. The second programme provides information on what makes Artemis an advanced telecommunications satellite. Artemis will be the first satellite of its kind to combine three different payloads:

- Mobile communications
- Satellite navigation
- Direct satellite-to-satellite communications

This programme focuses on how, by using these different payloads, Artemis will play a major role in natural disaster management and also serve as an in-orbit data relay system between the International Space Station and Europe.

SCRIPT

In Europe, a new generation of satellites has been built, satellites that can do more than others before them. Artemis - in Greek mythology the daughter of Zeus, the supreme ruler on Olympus - is destined to carry out one of the European Space Agency ESA's most important missions, when it will be used to test a range of different new systems. There is a mobile telephony payload on board which will enable people to make calls even when and where mobile networks on Earth are not working. Also, Artemis will relay data from other satellites to the ground when these have no radio contact with a ground station. Such data relays make it possible to receive data at all times. Previously, communication with satellites in low Earth orbit was slowed down by the fact that ground stations have line-of-sight and radio contact with them for brief periods only, after which they have to wait for the satellite to reappear on the horizon.

Sound bite (Lo Galbo): Putting three different payloads together in