Saturday, 1 October 2011

SECRETS OF AGING


Only 40 years ago it was widely believed that if you lived long enough, you would eventually experience serious cognitive decline, particularly with respect to memory. The implication was that achieving an advanced age was effectively equivalent to becoming senile—a word that implies mental defects or illness. As a graduate student back then, I was curious why such conclusions were being drawn about the elderly. I had only to look as far as my own great-grandmother and great-aunt to begin questioning the generalization. They lived to 102 and 93, respectively, and were exceptionally active and quick-witted enough to keep us twentysomethings on our toes. A closer look at the literature didn’t give me any confidence that either the biological basis of memory or how it might change with age was well understood. Many discoveries made in the years since have given us better tools to study memory storage, resulting in a major shift away from the view of “aging as a disease” and towards the view of “aging as a risk factor” for neurological diseases. So why do some people age gracefully, exhibiting relatively minor—and at worst annoying—cognitive changes, while others manifest significant and disabling memory decline? Answers to this question are fundamental for understanding both how to prevent disease and how to promote quality of life.
A test for normal aging
Cognitive decline at older ages is marked by gradual loss in the ability to retain new information or to recall the past. Although many other changes occur during the aging process—impairment in range of motion, speed of gait, acuity of vision and hearing—none of these factors are as central to one’s personal identity as are one’s cumulative experiences. For me, it seemed that understanding memory, and how it changes with age or disease, was the key to understanding the aging brain.

When I was in graduate school, in 1973, two papers came out that would change the shape of neuroscience and memory research. Terje Lømo, Tim Bliss, and Tony Gardner-Medwin devised a method that could experimentally alter synaptic strength. Earlier theoretical ideas had implied that strengthening the connection between neurons might explain how memories were formed: the stronger the connection, the better a message could be relayed or recalled. As electrophysiologists, Lømo and colleagues tested this idea by sending electrical currents through a group of neurons in the rabbit hippocampus in patterns that mimicked normal electrical activity, and measuring how the connections between the neurons changed. By stimulating neurons, they could create a durable increase in synaptic strength, which was later dubbed long-term potentiation, or LTP. But more than just an intriguing physiological observation, the research presented a fully controllable experimental proxy for learning that made it possible to study memory across the lifespan of an animal for the first time.
Thus, with the discovery of LTP, and the idea that animal models of human memory could be developed, I sought to finish my dissertation research in a lab that was routinely conducting LTP experiments. At the time, there were only three labs in the world doing these types of experiments: Graham Goddard’s laboratory at Dalhousie University, Tim Bliss’s at University College London, and Gary Lynch’s at UC Irvine. As it turned out, my thesis advisor Peter Fried, at Carleton University in Ottawa, had been one of Goddard’s students, and introduced me to him by saying, “I don’t really want you to steal my student, but she has this idea that she thinks you might be interested in.” I explained to Goddard that I wanted to track an animal’s ability to make and keep spatial memories as it aged. Since the hippocampus—the area of the brain where LTP was first discovered—was also involved in spatial memory, I wanted to use Goddard’s setup and surgically implant electrodes that could measure LTP in awake, freely behaving rats. In an act of faith and generosity, Goddard soon purchased animals for me and began to “age” them in anticipation of my arrival the following year.
The behavioral tests used to study spatial memory at this time all used somewhat severe methods to motivate animals: either with shock—a significant stress—or by restricting eating or drinking, both of which were potentially detrimental to survival. Because the precious aged animals could not be replaced without waiting two years to grow another “old” batch, I developed a novel, milder task that is now often referred to as the Barnes maze. With rodents’ natural aversion to open, well-lit areas in mind, I designed a circular platform with many openings around its circumference, only one of which would allow the animal to escape into a dark box beneath the platform’s surface.
Aging is not equivalent to the aberrant process of Alzheimer’s disease; it is in fact a distinctly different neurological process.
At 2–3 years old (an age equivalent to about 70–80 human years), the rats were old enough to begin the behavioral and electrophysiological experiments in the latter part of 1975. The animals were permanently implanted with electrodes that could both stimulate the neurons and record their activity. The electrodes allowed us to measure baseline synaptic transmission with single stimuli, then to induce LTP, and finally to monitor its decay over several weeks. We found that LTP decayed about twice as fast in the old rats as it did in the young, with the “synaptic memory” lasting 20 days rather than 40. Most importantly, the durability of LTP was correlated with the memory of the correct hole on the circular platform task. In fact, by combining behavioral and electrophysiological techniques, we produced the first demonstration that instead of acting as a mere proxy for learning, LTP might actually represent the cellular mechanism by which memories are created.1
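The decay figures above can be captured in a minimal model. This is purely an illustrative sketch: the exponential decay form and the 5 percent detection threshold are my assumptions, chosen so that the young animals' time constant reproduces the reported 40-day persistence.

```python
import math

def ltp_duration_days(tau_days, threshold=0.05):
    """Days until potentiation, decaying as exp(-t / tau), falls below
    a detection threshold (both assumptions, not measured values)."""
    return tau_days * math.log(1.0 / threshold)

# Choose a time constant that reproduces the ~40-day persistence seen in
# young rats, then halve it, since the old rats' LTP decayed about
# twice as fast.
tau_young = 40.0 / math.log(1.0 / 0.05)

young_days = ltp_duration_days(tau_young)        # ~40 days
old_days = ltp_duration_days(tau_young / 2.0)    # ~20 days
```

Under this model, halving the time constant halves the persistence, consistent with the observed 40-day versus 20-day difference.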
Just one year before I published my work on aging rats, Bruce McNaughton, then at Dalhousie University, demonstrated that for synaptic strengthening to occur, synapses from several neurons needed to converge and be coactive on the target neuron. The finding made perfect sense, since learning often comes from the association of two or more pieces of information, each of which could be carried by an individual neuron. Later experiments also demonstrated that under some conditions, LTP can be more difficult to induce in aged rats, and conversely, that a reduction in synaptic strength—called long-term depression or LTD—is easier to induce in the hippocampus of old rats.2 This may mean that LTD participates in “erasing” LTP, or reducing its longevity, and thus possibly increases forgetting.
By the mid-1980s it was clear that even in normal aging there are subtle changes in the biological processes that underlie memory, and in the relationship between the strength of memory and the strength of the synapses. But the real message of these experiments was that old animals can and do learn. None of the older animals exhibited what could be considered behaviorally to be “dementia,” nor were their biological mechanisms that allow memory traces to be laid down completely dysfunctional.

Aging networks

While there is much to be learned about the physiology of neural systems by direct artificial stimulation, even the mildest currents do not exactly mimic the selective and distributed activity of cells in normal behavioral states. To more realistically understand the aging brain, it is necessary to monitor cell circuits driven by natural behaviors. Around the time that LTP was discovered, John O’Keefe at University College London discovered “place cells” in the hippocampus—a major breakthrough for the field. By simply placing small microwires very close to hippocampal cells, without stimulating them, John and his student Jonathan Dostrovsky noted that individual hippocampal cells discharge action potentials only when a rat is at select physical locations as it traverses a given environment.3 The unique property of these place cells is that their “receptive fields,” or the features that make each of these cells fire, are related to specific regions of space.

Infographic: Lost in Space
TAMI TOLPA (maze); CAROL BARNES (cognitive maps; source: Nature 388, 17 July 1997)
By recording from many individual place cells at once, using a multipronged electrode (the tetrode) developed by McNaughton, and determining which physical locations activated different cells in the brain, it was possible to visualize how the hippocampus constructs a “cognitive map” of the surroundings. If young rats were given one maze-running session in the morning, and another session in the afternoon, the same hippocampal map was retrieved at both time points—suggesting that the pattern of neuronal firing represented the memory of their walking through the rectangular, figure-eight maze. What surprised us was that about 30 percent of the time, when old rats went back for their second run of the day, a different place-cell distribution would appear—as if the older animals were retrieving the wrong hippocampal map.4 Certainly a rat navigating a familiar space with the wrong map is likely to appear lost, or as though it had forgotten its way. But why was the rat retrieving the wrong map?
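A standard way to ask whether the same map was retrieved is to correlate a cell's firing-rate map across the two sessions: a high correlation means the place field stayed put, while a low or negative one suggests the animal pulled up a different map. The sketch below uses made-up firing rates, not data from the study.

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length firing-rate maps."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# One cell's firing rate (spikes/s) in eight maze bins, morning session
# versus two hypothetical afternoon outcomes.
morning  = [0.1, 0.2, 5.0, 9.0, 4.0, 0.3, 0.1, 0.0]
same_map = [0.2, 0.1, 4.5, 8.5, 4.2, 0.2, 0.0, 0.1]  # field in same place
remapped = [8.0, 4.0, 0.2, 0.1, 0.0, 0.3, 5.0, 9.0]  # field moved

stable_r = pearson(morning, same_map)   # close to 1: same map retrieved
remap_r = pearson(morning, remapped)    # low or negative: different map
```

Applied over a whole recorded population, session-by-session correlations like these are what distinguish a stably retrieved map from the wrong-map retrievals seen in the old animals.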
A possible answer to this question was published in 1998 by Cliff Kentros, now at the University of Oregon. In young rats, Cliff pharmacologically blocked the NMDA receptor, a critical gateway regulating LTP and synaptic strengthening. When he and colleagues used a dose large enough to block the formation of LTP, the cognitive maps of the young rats began to degrade over time in much the same way as observed in aged rats, suggesting that faulty LTP mechanisms may be responsible for map instability in aging.5

Connecting the hippocampal dots

Even though the new multiple-tetrode recording method was a large advance over recording one or two neurons at a time, it was still constrained to sample sizes of ~100 to 200 cells.
One recent advance has allowed us to monitor cell activity on a broader scale, not with electrodes, but by tracking the expression of genes that are rapidly expressed during behavior. For example, by monitoring the expression of one such gene, Arc, during behavioral tests, John Guzowski in my lab developed a method (the ‘catFISH’ method) that could report on the activity of hundreds of thousands of cells across the brain.6 This large-scale imaging of single cells has been critical for identifying which cells, within which memory circuits, are altered during normal aging.
This technique also allowed us to tease apart which cells might be more susceptible to decline with age. Unlike electrodes that record LTP, which cannot differentiate between different types of neurons, the catFISH method allows us to distinguish between the three primary cell groups within the hippocampus—the granule cells of the dentate gyrus and the pyramidal cells of regions CA1 and CA3. We were able to show that cell-specific gene expression (and therefore cell activity) did not change with age in CA1 and CA3 cell regions, but that the number of granule cells engaged during behavior showed a continuous decline in aging rats, suggesting these cells are a weak link in the aging rat hippocampus.7 Could this also be true in primates?
Scott Small at Columbia University Medical Center helped me answer this question in young and old monkeys. Although MRI methods do not have single-cell resolution, they can provide an indirect gauge of activity over large segments of the brain. Using a variation of standard MRI that measures regional cerebral blood volume (CBV), we monitored the resting brain activity of monkeys ranging in age from 9 to 27 years (equivalent to 27 to 87 human years), and could then relate this brain activation to an individual animal’s performance in memory tests. There were no differences in resting metabolic measures in the areas of the brain that send most of the signals into or out of the hippocampus, nor were there differences in brain activity within CA1 or CA3. But the old monkeys did show reduced activity in the dentate gyrus, similar to what we had found in aging rats. Critically, we observed that the lower the activity in the dentate gyrus, the poorer the memory.7
Scott had observed a similar pattern in an earlier human aging study. However, with human studies there is always a concern that some of the people we assume are aging normally in fact have the undiagnosed beginnings of human-specific neurological disease, such as Alzheimer’s. Confirming the observation in aging animals, which do not spontaneously get this disease, provides strong evidence that the dentate gyrus is a major player in the changes that occur in normally aging mammals.
Furthermore, these data refute the contention that aging is effectively equivalent to an aberrant process like Alzheimer’s disease, revealing that it is in fact a distinctly different neurological process. In contrast to the results showing that aging primarily affects the dentate gyrus, the granule cells of this brain region in Alzheimer’s patients do not show changes (with respect to age-matched controls) until very late in the illness. Instead, it is the cells in CA1 and in the entorhinal cortex that are dramatically affected. Thus, while aging and Alzheimer’s may in some cases be “superimposed,” there is not a simple linear trajectory that leads us all to end up with the disease.

Memory genes


Infographic: The Seat of Memory
TAMI TOLPA
Today, the omics revolution is leading us closer to an understanding of individual differences in aging and cognition. One example with respect to memory is a study that used a genome-wide association approach in 351 young adults to identify the single nucleotide polymorphisms (SNPs) most strongly related to memory. The most significant SNP identified was a single cytosine-to-thymine base change at a specific location in the KIBRA gene, which encodes a neural protein that had been suspected of playing a role in memory. Those who had the thymine-thymine version of this SNP had the best memories.8 This was further confirmed in an independent elderly population,9 supporting the view that this particular SNP in the KIBRA gene predisposed people of any age to have “better memory.” This could be another clue to help guide our understanding of the molecular cascades that support good memory, as the activity of the KIBRA protein may influence mechanisms related to LTP.

Could there be a link between KIBRA and age-related memory decline? One way to test this hypothesis was to find a way to manipulate part of the molecular circuit in which KIBRA is involved. Performing just such an experiment in collaboration with my lab, Matt Huentelman at the Translational Genomics Research Institute in Arizona identified a promising compound, hydroxyfasudil, a metabolite of a drug already approved for use in treatment of chest pain, or angina. As predicted, hydroxyfasudil improved spatial learning and working memory in aged rats.10 The clinical applications of the drug’s effects on learning and memory are currently being explored.
David Sweatt at the University of Alabama at Birmingham also recently documented DNA methylation changes in response to learning, and was very interested in collaborating with us to determine whether changes in this epigenetic process during aging might explain our data on Arc expression, which is related to LTP formation and learning. Methylation of DNA typically downregulates transcription. After young and old rats performed exploratory behaviors that engaged the hippocampus, the methylation state of the Arc gene in hippocampal cells was, in fact, reduced in both age groups (which allowed more of the Arc mRNA to be transcribed). The difference between young and old animals was that certain cells of older animals, such as hippocampal granule cells, had higher overall methylation levels in the resting state, which resulted in production of less Arc protein during behavior, and the diminished ability to alter synapses via a mechanism such as LTP.11 There are many complex facets of these data, but the results have led us and others to hypothesize that there may well be a number of epigenetic marks that accumulate during aging, and that epigenetic dysregulation may be a fundamental contributor to normal age-related cognitive decline. While the details of exactly how aging affects epigenetic modifications remain to be elucidated, it is at least a reasonable guess that understanding these mechanisms will be critical for future experimental designs aimed at optimizing cognition across the life span. Luckily, these processes are also amenable to pharmacological manipulation.

Aging in the future

Looking back on the rather grim expectations concerning memory and the elderly that were held only a few decades ago, the vision today is very different and much more positive. There are many who live to very old ages with minimal cognitive decline—and certainly no dementia. One particularly interesting study in this regard followed individuals who were 100 years of age (centenarians) at the beginning of the study until the time of their death, monitoring cognitive function and other factors in the “oldest old.” Interestingly, 73 percent of these centenarians were dementia free at the end of their lives (the oldest reaching an age of 111 years).12 Watching the remarkable discoveries in biology over the past half century, one cannot help but look with excitement towards the next groundbreaking findings that are surely in the making. The future holds great promise for the once remote dream of understanding the core biological processes required for optimal cognitive health during aging—and progress in this regard should also provide the needed backdrop for understanding and preventing the complex neurological diseases that can be superimposed on the aging brain.

Q&A: Overhaul the funding system



It’s a perennial complaint among scientists: grant writing takes up too much time. “I think that there’s pretty much wide agreement that scientists have to spend a lot of time to write proposals, to review proposals, to write progress reports, final reports, and do lots of things that are not necessarily contributing to science,” said John Ioannidis, a professor at Stanford University’s School of Medicine.
But it isn’t just time that’s spent unwisely, but billions and billions of dollars that could be allocated in smarter ways, Ioannidis wrote in a comment in today’s Nature. The Scientist spoke with Ioannidis about his ideas to fix science funding in the United States.
The Scientist: What are the current problems with the way science is funded in America?
John Ioannidis: I think that the way that the funding system works, people have to promise something that is exaggerated. They have to compete against others who are promising something that would be spectacular, and therefore they need to make a huge claim that often they may not be able to deliver, just because of the odds of science.
Or they have to promise something that would be highly predictable. It’s something that they may have already done or that they know what the answer is going to be…So in a sense I think the current system is almost destroying the potential for innovative ideas and in many cases it even fosters mediocrity.
TS: What’s the best way to improve upon the current funding paradigm?
JI: What I advocated in that comment was that we need to have some studies to directly test, ideally in a randomized fashion, whether one strategy performs better than another. One could think about pilot projects where you have consenting scientists who say I’m willing to be randomized to funding scheme A versus funding scheme B. A could be lottery allocation, for example, and B could be merit-based. If we run these types of studies, in a few years we can see what these scientists have done in the short term. Maybe in the longer term we can also see whether their research really made a difference. I’m in favor of experimenting, because we’re spending billions and billions of dollars and we’re doing that with no good evidence really.

John Ioannidis, Stanford University School of MedicineJOHN IOANNIDIS
TS: In your proposal for funding scientists according to merit, you mention independent indices to measure a proposal’s worth. What are some examples?
JI: I think that publications and citations should not be undervalued. They could play a very important role in that assessment. But maybe combining indices and paying attention to aspects of quality rather than quantity could be informative. One could think of merging indices that exclude self-citation or take into account the impact of papers rather than the number of papers and adjust for co-authorship. There are indices to do that, and I think they can be objective if you combine them.
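The kind of merged index Ioannidis describes can be sketched in a few lines. The particular weighting below, non-self citations divided equally among co-authors, is illustrative only, not a published index.

```python
def adjusted_impact(papers):
    """Sum each paper's non-self citations, fractionally credited
    across its co-authors (an illustrative weighting scheme)."""
    total = 0.0
    for p in papers:
        external = p["citations"] - p["self_citations"]
        total += external / p["n_authors"]
    return total

# Hypothetical track record for one investigator.
papers = [
    {"citations": 100, "self_citations": 10, "n_authors": 3},
    {"citations": 40, "self_citations": 0, "n_authors": 2},
]

score = adjusted_impact(papers)  # (90 / 3) + (40 / 2) = 50.0
```

Real proposals along these lines combine several such corrections; the point of the sketch is only that each adjustment (excluding self-citation, fractional co-author credit) is mechanical once the underlying data are available.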
On top of this we have the opportunity to build additional information into the profile of an investigator based on scientific citizenship practices. Things like sharing of data or protocols, how often those data and protocols are used by other researchers, and how much they contribute to other aspects of improving science.
I think it’s an opportunity: if we really feel that some practices are essential for science, and we want to improve these practices, then tying them to the funding mechanism is really the prime way to make them happen.
TS: In this kind of scheme it seems like weight might be given to more established scientists. What are ways to help younger faculty who don’t have such a track record?
JI: I would think that this is actually a problem of the current system, rather than any proposal to create something new. The average age at which an investigator gets his or her first R01 in the United States is currently about 40 or 41 years old. People are going through 15 years of being active researchers, and they still don’t have independence. So I think that a system that is based on merit could really get younger scientists on board much faster than the current system.
Moreover, for example, if you use citation indices, you can always adjust for the number of years that someone has been active. And there’s some evidence that the track record of some individuals could be identified from fairly early in their careers.
I would also think that for young investigators it’s probably okay to move closer to the possibility of an egalitarian sharing, so give some opportunity to lots of people early on for a few years and see what they can do. They will build a track record within a few years, and then you can start having some impact measures or citizenship practice measures.
TS: How do we balance the interests of funding sure-bet science or translational research versus very risky or basic science?
JI: This is not an easy question. I think people have a different perception of what would be the appropriate allocation between these two strategies. It has to be a common consensus in the scientific community of what we really want to do. Currently I think innovative research is undervalued and not given enough opportunities to succeed.


Missegregation Linked to DNA Damage


Tumor formation may require fewer steps to get started than previously thought, according to a new study that shows how chromosome instability (CIN) and DNA damage—two tumorigenesis triggers typically considered independent phenomena—can arise from a single defect in how chromosomes segregate during cell division.
“This paper really provides a link between the mechanism behind CIN and the mechanism underlying chromosome damage,” said Dartmouth biochemist Duane Compton, who was not involved in the research. Prior to this study, most researchers were not investigating how these two phenomena might be related, he added.
The results, published today (September 29) in Science, hint at new avenues for developing anti-cancer therapeutics that target chromosome missegregation, a central event in the development of both abnormal chromosome number and structural DNA damage, two precursors of tumorigenesis.
Lead author Aniek Janssen, a graduate student at University Medical Center (UMC) Utrecht in the Netherlands, got the idea to study the relationship between CIN and DNA damage after her mentors, cell and molecular biologists Rene Medema and Geert Kops, noticed that most tumor cells exhibited both aneuploidy (abnormal chromosome number) and structural DNA damage. To divide normally, cells must ensure that their chromosomes properly segregate, with each daughter cell receiving a copy of the duplicated chromosomes. Aneuploidy results from missegregation of chromosomes—when one daughter cell receives two copies of a specific chromosome, and the other receives none. If a missing chromosome contained a gene encoding a protein important for DNA repair, chromosome loss might lead indirectly to DNA damage, explained Kops. In later divisions, a cell missing this protein would not be able to repair any broken DNA, which tends to accumulate with more replication and cell division. But Kops and Medema noticed that DNA damage sometimes occurred within a single division after chromosome missegregation, before the daughter cells would need to start synthesizing new DNA repair proteins, leading them to hypothesize that missegregation was leading directly to damaged DNA.
To cause aneuploidy in dividing human retinal pigment epithelial cells, Janssen treated them with a chemical that causes spindle fibers to connect a single centromere to both poles of the cell, so that the fibers wouldn’t properly split the paired chromosomes. Within hours after division, real-time videos tracking fluorescent staining for DNA damage showed that missegregation events correlated with higher levels of DNA damage.
“The strongest evidence for associating CIN with DNA damage is the live imaging,” said Compton. Janssen and her colleagues also showed that double-stranded DNA breaks occurred, suggesting that the chromosomes were breaking into smaller pieces.
Blocking cytokinesis, the physical division of cells where the membrane pinches in and daughter cells separate, reduced the occurrence of the DNA damage, suggesting that the damage occurs early in the process, Kops said. Though how this leads to DNA breaks is unclear, the damage resulted in structural aberrations including chromosomal translocation, a genetic abnormality often seen in cancer cells.
DNA damage is classically thought of as being the result of mechanisms other than missegregation, said Medema. A defect in DNA repair genes would lead to improper mending of breaks, leading to mutations that might result in cancer.
“But our study shows that cells can get translocations just by being chromosome unstable,” said Medema. The pieces of broken chromosomes could segregate into daughter cells, and be grafted onto a different chromosome via normal repair mechanisms—resulting in a translocation even with adequate DNA repair machinery.
The study’s results should prompt reevaluation of the field’s understanding of how the chromosomal aberrations that underlie tumorigenesis occur, said Medema. He explained that the work, which was supported by Top Institute Pharma, and included partners at Merck Sharp & Dohme (MSD), the Utrecht University Medical Center, and The Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, could inform strategies for designing new therapies. Enhancing genomic damage to induce cell death is one therapy currently being pursued, and this study suggests that it might be achievable by targeting the mechanism behind missegregation alone.

Longer Cosmic Ruler Based on Black Holes


Astronomers have a new gadget in their cosmic toolbox that is capable of measuring distances to very faraway objects. The method uses active galactic nuclei, the bright, violent regions at the centers of galaxies, to gauge distances farther than existing cosmic yardsticks can reach.
Having such an odometer is crucial for understanding how space, time and matter behave at the scale of the whole universe, and could help solve mysteries such as the nature of the dark energy that is accelerating the expansion of the universe.
For four decades, astronomers have been trying to turn these luminous beacons into cosmic mile markers.  Now, scientists at the University of Copenhagen’s Dark Cosmology Centre and their collaborators think they’ve got it worked out. The brightness of an active nucleus is tightly related to the radius of a region of hot gases surrounding the central black hole. When scientists determine that radius, they can predict how intrinsically bright the nucleus should be — and compare that value to how bright it appears, which depends on distance. Astronomers call objects whose properties can be used to predict their actual brightness standard candles.
“It’s the radius-luminosity relationship that allows us to assume that these active galactic nuclei are standard candles,” says postdoctoral fellow Kelly Denney of the Dark Cosmology Centre, an author of the study, which will appear in the Astrophysical Journal.
The stable of standard candles already houses type Ia supernovas and Cepheid variable stars, which have predictable luminosities but are good only for measuring distances to objects present when the universe was nearly 4 billion years old. Active galactic nuclei would extend that capability to objects at distances corresponding to when the universe was only 1.5 billion years old.
“Right now we rely so much on supernovas, it would be really nice to have independent verification of cosmological parameters,” says astrophysicist Bradley Peterson of Ohio State University.  “I’m really excited about this result.”
A technique called reverberation mapping measures how long it takes photons being kicked out of the black hole’s immediate neighborhood to reappear after they’ve traversed the hot, gassy maelstrom surrounding the black hole. Because light travels at a constant speed, astronomers can determine the gassy region’s radius. Then, the luminosity of the active galactic nucleus can be calculated.
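The logic of the method can be sketched numerically: the measured light-travel lag gives the radius of the gassy region, an assumed radius-luminosity calibration gives the intrinsic brightness, and the inverse-square law then yields the distance. The constants k and alpha below are placeholders for illustration, not the calibration the Copenhagen team derived.

```python
import math

def blr_radius_light_days(lag_days):
    """Radius of the hot-gas region: light travels one light-day per
    day, so the measured photon lag gives the radius directly."""
    return lag_days

def luminosity_from_radius(radius_ld, k=1.0e44, alpha=2.0):
    """Assumed radius-luminosity relation L = k * R**alpha (erg/s);
    k and alpha here are hypothetical placeholder values."""
    return k * radius_ld ** alpha

def luminosity_distance_cm(luminosity, flux):
    """Invert the inverse-square law F = L / (4 * pi * d**2)."""
    return math.sqrt(luminosity / (4.0 * math.pi * flux))

radius = blr_radius_light_days(30.0)          # a 30-day measured lag
lum = luminosity_from_radius(radius)          # inferred intrinsic brightness
dist = luminosity_distance_cm(lum, 1.0e-12)   # distance from observed flux
```

The chain is the same one used for any standard candle: once the intrinsic brightness is pinned down, comparing it with the apparent brightness is all that is needed to read off the distance.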
Until now, tightening the relationship between radius and luminosity has been tricky. Among other reasons, starlight from a host galaxy contaminated the brightness measurements of its active nucleus. But the team had in hand data from astrophysicist Misty Bentz of Georgia State University that corrected for the effects of surrounding starlight, in addition to Denney’s own precise measurements of radii.
“I think this paper is really clever,” Bentz says. “But it needs a few improvements before it can be comparable to supernovae and stuff like that.”
Denney says the team is planning on calibrating the method using observations of additional galaxies and — if given access to the Hubble Space Telescope — measurements produced by Cepheid variable stars.
Peterson says the method will probably take a decade to catch on within the astrophysics community. It will take that long to expand the dataset, add more distance calibrations, and reduce some of the noise in the data.
But the team is optimistic that active galactic nuclei will become an accepted distance measure, since they are more numerous and more easily observable than existing standard candles.  “It could end up being one very large rung on the distance ladder,” Bentz says. “It would put everything on the same scale, instead of using one thing to calibrate something else that’s used to calibrate something else.… There’s less potential for something to go wrong.”

VENUS


Venus is the second planet from the Sun, orbiting it every 224.7 Earth days. The planet is named after Venus, the Roman goddess of love and beauty. After the Moon, it is the brightest natural object in the night sky, reaching an apparent magnitude of −4.6, bright enough to cast shadows. Because Venus is an inferior planet, as seen from Earth it never appears to venture far from the Sun: its elongation reaches a maximum of 47.8°. Venus reaches its maximum brightness shortly before sunrise or shortly after sunset, for which reason it has been known as the Morning Star or Evening Star.
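The 47.8° figure follows from orbital geometry. Under the simplifying assumption of circular, coplanar orbits, the maximum elongation of an inferior planet satisfies sin(e_max) = a_inner / a_outer; the sketch below gives about 46.3°, and the eccentricity of the real orbits pushes the extreme value up to the quoted 47.8°.

```python
import math

# At maximum elongation the Earth-Venus line of sight is tangent to
# Venus's orbit, so (for circular, coplanar orbits):
#     sin(e_max) = a_inner / a_outer
A_VENUS_AU = 0.723   # semi-major axis of Venus's orbit
A_EARTH_AU = 1.000   # semi-major axis of Earth's orbit

e_max_deg = math.degrees(math.asin(A_VENUS_AU / A_EARTH_AU))  # ~46.3
```

The same tangent-line argument explains why Mercury, with its smaller orbit, never strays more than about 28° from the Sun.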
Venus is classified as a terrestrial planet and is sometimes called Earth's "sister planet" due to their similar size, gravity, and bulk composition. Venus is covered with an opaque layer of highly reflective clouds of sulfuric acid, preventing its surface from being seen from space in visible light. Venus has the densest atmosphere of all the terrestrial planets in the Solar System, consisting mostly of carbon dioxide. Venus has no carbon cycle to lock carbon back into rocks and surface features, nor does it seem to have any organic life to absorb it in biomass. Venus is believed to have previously possessed Earth-like oceans, but these evaporated as the temperature rose. The water has most likely dissociated, and, because of the lack of a planetary magnetic field, the hydrogen has been swept into interplanetary space by the solar wind. Venus's surface is a dusty, dry desertscape with many slab-like rocks, periodically refreshed by volcanism. The atmospheric pressure at the planet's surface is 92 times that of the Earth.

The Venusian surface was a subject of speculation until some of its secrets were revealed by planetary science in the twentieth century. It was finally mapped in detail by Project Magellan in 1990–91. The ground shows evidence of extensive volcanism, and the sulfur in the atmosphere may indicate that there have been some recent eruptions. The absence of evidence of lava flow accompanying any of the visible calderas remains an enigma. The planet has few impact craters, demonstrating that the surface is relatively young, approximately 300–600 million years old. There is no evidence for plate tectonics, possibly because its crust is too strong to subduct without water to make it less viscous. Instead, Venus may lose its internal heat in periodic major resurfacing events.
PHYSICAL CHARACTERISTICS
Venus is one of the four terrestrial planets in the Solar System, meaning that, like the Earth, it is a rocky body. In size and mass it is very similar to the Earth, and it is often described as Earth's "sister" or "twin". The diameter of Venus is only 650 km less than the Earth's, and its mass is 81.5% of the Earth's. Conditions on the Venusian surface differ radically from those on Earth, due to its dense carbon dioxide atmosphere: the atmosphere of Venus is 96.5% carbon dioxide, with most of the remaining 3.5% being nitrogen.
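A quick sketch of why these figures make Venus Earth's "twin": with surface gravity scaling as mass over radius squared, the numbers above imply near-Earth gravity (Earth's mean diameter of 12,742 km is an assumed value not given in the text):

```python
# Surface gravity ratio from the quoted figures, using g ∝ M / R^2.
EARTH_DIAMETER_KM = 12742.0                    # assumed mean Earth diameter
venus_diameter_km = EARTH_DIAMETER_KM - 650.0  # "only 650 km less"
mass_ratio = 0.815                             # Venus mass / Earth mass

radius_ratio = venus_diameter_km / EARTH_DIAMETER_KM
gravity_ratio = mass_ratio / radius_ratio**2
print(f"~{gravity_ratio:.2f} g")  # ~0.90 g
```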
INTERNAL STRUCTURE
Without seismic data or knowledge of its moment of inertia, there is little direct information about the internal structure and geochemistry of Venus. The similarity in size and density between Venus and Earth suggests that they share a similar internal structure: a core, mantle, and crust. Like that of Earth, the Venusian core is at least partially liquid because the two planets have been cooling at about the same rate. The slightly smaller size of Venus suggests that pressures are significantly lower in its deep interior than in Earth's. The principal difference between the two planets is the lack of plate tectonics on Venus, likely due to the dry surface and mantle. This results in reduced heat loss from the planet, preventing it from cooling and providing a likely explanation for its lack of an internally generated magnetic field.
GEOGRAPHY
Map of Venus, showing the elevated "continents"
in yellow: Ishtar Terra at the top and Aphrodite Terra just below
the equator to the right.
About 80% of the Venusian surface is covered by smooth volcanic plains, consisting of 70% plains with wrinkle ridges and 10% smooth or lobate plains. Two highland "continents" make up the rest of its surface area, one lying in the planet's northern hemisphere and the other just south of the equator. The northern continent is called Ishtar Terra, after Ishtar, the Babylonian goddess of love, and is about the size of Australia. Maxwell Montes, the highest mountain on Venus, lies on Ishtar Terra. Its peak is 11 km above the Venusian average surface elevation. The southern continent is called Aphrodite Terra, after the Greek goddess of love, and is the larger of the two highland regions at roughly the size of South America. A network of fractures and faults covers much of this area. As well as the impact craters, mountains, and valleys commonly found on rocky planets, Venus has a number of unique surface features. Among these are flat-topped volcanic features called farra, which look somewhat like pancakes and range in size from 20–50 km across and 100–1,000 m high; radial, star-like fracture systems called novae; features with both radial and concentric fractures resembling spider webs, known as arachnoids; and coronae, circular rings of fractures sometimes surrounded by a depression. These features are volcanic in origin.
Most Venusian surface features are named after historical and mythological women. Exceptions are Maxwell Montes, named after James Clerk Maxwell, and the highland regions Alpha Regio, Beta Regio and Ovda Regio. The former three features were named before the current system was adopted by the International Astronomical Union, the body that oversees planetary nomenclature.
The longitudes of physical features on Venus are expressed relative to its prime meridian. The original prime meridian passed through the radar-bright spot at the center of the oval feature Eve, located south of Alpha Regio. After the Venera missions were completed, the prime meridian was redefined to pass through the central peak in the crater Ariadne.

Friday, 30 September 2011

VOLCANOES

Most volcanoes occur at the edges of tectonic plates, where magma can rise to the surface. A volcano’s shape depends mainly on the viscosity of the lava, the shape of the vent, the amount of ash, and the frequency and size of the eruptions. In fissure volcanoes, lava erupts through a crack, forming lava plateaux or plains. In cone-shaped volcanoes, the more viscous the lava, the steeper the cone. Some cones collapse in on themselves or are exploded outwards, forming calderas.

Cross-section through a stratovolcano (vertical scale is exaggerated):
1. Large magma chamber
2. Bedrock
3. Conduit (pipe)
4. Base
5. Sill
6. Dike
7. Layers of ash emitted by the volcano
8. Flank
9. Layers of lava emitted by the volcano
10. Throat
11. Parasitic cone
12. Lava flow
13. Vent
14. Crater
15. Ash cloud
Conical Mount Fuji in Japan, at sunrise from
Lake Kawaguchi (2005)
Mayon, a near-perfect stratovolcano in the
Philippines
Pinatubo ash plume reaching a height of 19 km, 3 days
before the climactic eruption of 15 June 1991





VERTEBRA

Divisions of
spinal segments
There are three major types of vertebra in the human spine: lumbar, thoracic, and cervical. Lumbar vertebrae support a major part of the body’s weight and so are comparatively large and strong. Cervical vertebrae support the head and neck.


IN HUMANS
There are normally thirty-three (33) vertebrae in humans, including the five that are fused to form the sacrum (the others are separated by intervertebral discs) and the four coccygeal bones that form the tailbone. The upper three regions comprise the remaining 24, and are grouped under the names cervical (7 vertebrae), thoracic (12 vertebrae) and lumbar (5 vertebrae), according to the regions they occupy. This number is sometimes increased by an additional vertebra in one region, or it may be diminished in one region, the deficiency often being supplied by an additional vertebra in another. The number of cervical vertebrae is, however, very rarely increased or diminished.
Oblique view of cervical vertebrae
With the exception of the first and second cervical, the true or movable vertebrae (the upper three regions) present certain common characteristics that are best studied by examining one from the middle of the thoracic region.

URANUS

With a diameter of about 51,000 km, Uranus is the third largest planet in the solar system. It is a cold gas giant, believed to consist of a mixture of gas and ice around a solid core. Uranus is tilted at 97.9° and so rolls on its side along its orbital path. It has 27 known moons and 13 known rings made up of boulders, plus a broad ring of dust.
Uranus presented a nearly featureless disk to Voyager 2

Discovery 
Uranus had been observed on many occasions before its discovery as a planet, but it was generally mistaken for a star. The earliest recorded sighting was in 1690 when John Flamsteed observed the planet at least six times, cataloging it as 34 Tauri. The French astronomer Pierre Lemonnier observed Uranus at least twelve times between 1750 and 1769, including on four consecutive nights.
Sir William Herschel observed the planet on March 13, 1781 while in the garden of his house at 19 New King Street in the town of Bath, Somerset, England (now the Herschel Museum of Astronomy), but initially reported it (on April 26, 1781) as a "comet". Herschel had been "engaged in a series of observations on the parallax of the fixed stars", using a telescope of his own design.
He recorded in his journal "In the quartile near ζ Tauri … either [a] Nebulous star or perhaps a comet". On March 17, he noted, "I looked for the Comet or Nebulous Star and found that it is a Comet, for it has changed its place". When he presented his discovery to the Royal Society, he continued to assert that he had found a comet while also implicitly comparing it to a planet:
"The power I had on when I first saw the comet was 227. From experience I know that the diameters of the fixed stars are not proportionally magnified with higher powers, as planets are; therefore I now put the powers at 460 and 932, and found that the diameter of the comet increased in proportion to the power, as it ought to be, on the supposition of its not being a fixed star, while the diameters of the stars to which I compared it were not increased in the same ratio. Moreover, the comet being magnified much beyond what its light would admit of, appeared hazy and ill-defined with these great powers, while the stars preserved that lustre and distinctness which from many thousand observations I knew they would retain. The sequel has shown that my surmises were well-founded, this proving to be the Comet we have lately observed."

Herschel notified the Astronomer Royal, Nevil Maskelyne, of his discovery and received this flummoxed reply from him on April 23: "I don't know what to call it. It is as likely to be a regular planet moving in an orbit nearly circular to the sun as a Comet moving in a very eccentric ellipsis. I have not yet seen any coma or tail to it".
While Herschel continued to cautiously describe his new object as a comet, other astronomers had already begun to suspect otherwise. Russian astronomer Anders Johan Lexell was the first to compute the orbit of the new object, and its nearly circular orbit led him to the conclusion that it was a planet rather than a comet. Berlin astronomer Johann Elert Bode described Herschel's discovery as "a moving star that can be deemed a hitherto unknown planet-like object circulating beyond the orbit of Saturn". Bode concluded that its near-circular orbit was more like a planet's than a comet's.
The object was soon universally accepted as a new planet. By 1783, Herschel himself acknowledged this fact to Royal Society president Joseph Banks: "By the observation of the most eminent Astronomers in Europe it appears that the new star, which I had the honour of pointing out to them in March 1781, is a Primary Planet of our Solar System." In recognition of his achievement, King George III gave Herschel an annual stipend of £200 on the condition that he move to Windsor so that the Royal Family could have a chance to look through his telescopes.


ORBIT AND ROTATION

Uranus revolves around the Sun once every 84 Earth years. Its average distance from the Sun is roughly 3 billion km (about 20 AU). The intensity of sunlight on Uranus is about 1/400 that on Earth. Its orbital elements were first calculated in 1783 by Pierre-Simon Laplace. With time, discrepancies began to appear between the predicted and observed orbits, and in 1841, John Couch Adams first proposed that the differences might be due to the gravitational tug of an unseen planet. In 1845, Urbain Le Verrier began his own independent research into Uranus's orbit. On September 23, 1846, Johann Gottfried Galle located a new planet, later named Neptune, at nearly the position predicted by Le Verrier.
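The factor of roughly 1/400 in sunlight intensity is just the inverse-square law applied to the ~20 AU distance quoted above:

```python
# Inverse-square law: sunlight intensity at Uranus relative to Earth's 1 AU baseline.
distance_au = 20.0
relative_intensity = 1.0 / distance_au**2
print(relative_intensity)  # 0.0025, i.e. 1/400
```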
The rotational period of the interior of Uranus is 17 hours, 14 minutes. As on all giant planets, its upper atmosphere experiences very strong winds in the direction of rotation. At some latitudes, such as about two-thirds of the way from the equator to the south pole, visible features of the atmosphere move much faster, making a full rotation in as little as 14 hours.
AXIAL TILT
Uranus has an axial tilt of 97.77 degrees, so its axis of rotation is approximately parallel with the plane of the Solar System. This gives it seasonal changes completely unlike those of the other major planets. Other planets can be visualized to rotate like tilted spinning tops on the plane of the Solar System, while Uranus rotates more like a tilted rolling ball. Near the time of Uranian solstices, one pole faces the Sun continuously while the other pole faces away. Only a narrow strip around the equator experiences a rapid day-night cycle, but with the Sun very low over the horizon, as in the Earth's polar regions. At the other side of Uranus's orbit the orientation of the poles towards the Sun is reversed. Each pole gets around 42 years of continuous sunlight, followed by 42 years of darkness. Near the time of the equinoxes, the Sun faces the equator of Uranus, giving a period of day-night cycles similar to those seen on most of the other planets. Uranus reached its most recent equinox on December 7, 2007.
Northern hemisphere    Year          Southern hemisphere
Winter solstice        1902, 1986    Summer solstice
Vernal equinox         1923, 2007    Autumnal equinox
Summer solstice        1944, 2028    Winter solstice
Autumnal equinox       1965, 2049    Vernal equinox
One result of this axis orientation is that, on average during the year, the polar regions of Uranus receive a greater energy input from the Sun than its equatorial regions. Nevertheless, Uranus is hotter at its equator than at its poles. The underlying mechanism which causes this is unknown. The reason for Uranus's unusual axial tilt is also not known with certainty, but the usual speculation is that during the formation of the Solar System, an Earth-sized protoplanet collided with Uranus, causing the skewed orientation. Uranus's south pole was pointed almost directly at the Sun at the time of Voyager 2's flyby in 1986. The labeling of this pole as "south" uses the definition currently endorsed by the International Astronomical Union, namely that the north pole of a planet or satellite shall be the pole which points above the invariable plane of the Solar System, regardless of the direction the planet is spinning. A different convention is sometimes used, in which a body's north and south poles are defined according to the right-hand rule in relation to the direction of rotation. In terms of this latter coordinate system it was Uranus's north pole which was in sunlight in 1986.
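The seasonal arithmetic above can be sketched directly: each pole spends about half of one 84-year orbit in sunlight, and the solstice and equinox milestones recur every quarter-orbit, matching the years in the table.

```python
# Uranian seasons from the 84-year orbital period.
orbital_period_years = 84

polar_day_years = orbital_period_years / 2   # continuous sunlight per pole
quarter_orbit = orbital_period_years // 4    # spacing between seasonal milestones

# Counting forward from the 1986 solstice:
milestones = [1986 + k * quarter_orbit for k in range(4)]
print(polar_day_years)  # 42.0
print(milestones)       # [1986, 2007, 2028, 2049]
```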

Visibility

From 1995 to 2006, Uranus's apparent magnitude fluctuated between +5.6 and +5.9, placing it just within the limit of naked-eye visibility at +6.5. Its angular diameter is between 3.4 and 3.7 arcseconds, compared with 16 to 20 arcseconds for Saturn and 32 to 45 arcseconds for Jupiter. At opposition, Uranus is visible to the naked eye in dark skies, and becomes an easy target even in urban conditions with binoculars. In larger amateur telescopes with an objective diameter of between 15 and 23 cm, the planet appears as a pale cyan disk with distinct limb darkening. With a large telescope of 25 cm or wider, cloud patterns, as well as some of the larger satellites, such as Titania and Oberon, may be visible.
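The quoted angular sizes are consistent with the small-angle formula. A rough check, treating Uranus's mean distance of ~19.2 AU from the Sun as a stand-in for the Earth–Uranus distance and using an assumed equatorial diameter of 51,118 km (both values are assumptions, not from the text):

```python
import math

AU_KM = 1.496e8             # kilometres per astronomical unit
diameter_km = 51118.0       # equatorial diameter of Uranus (assumed value)
distance_km = 19.2 * AU_KM  # mean Sun-Uranus distance, a rough stand-in

# Small-angle approximation: angle in radians = diameter / distance.
arcsec = math.degrees(diameter_km / distance_km) * 3600.0
print(f"{arcsec:.1f} arcsec")  # ~3.7 arcsec, matching the upper end of the quoted range
```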
INTERNAL STRUCTURE
Uranus's mass is roughly 14.5 times that of the Earth, making it the least massive of the giant planets. Its diameter is slightly larger than Neptune's at roughly four times Earth's. A resulting density of 1.27 g/cm3 makes Uranus the second least dense planet, after Saturn. This value indicates that it is made primarily of various ices, such as water, ammonia, and methane. The total mass of ice in Uranus's interior is not precisely known, as different figures emerge depending on the model chosen; it must be between 9.3 and 13.5 Earth masses. Hydrogen and helium constitute only a small part of the total, with between 0.5 and 1.5 Earth masses. The remainder of the non-ice mass (0.5 to 3.7 Earth masses) is accounted for by rocky material.
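The quoted density can be checked against the mass figure above; the mean radius of 25,362 km and Earth mass of 5.972 × 10^24 kg used here are assumed values not stated in the text:

```python
import math

EARTH_MASS_KG = 5.972e24
mass_kg = 14.5 * EARTH_MASS_KG  # "roughly 14.5 times that of the Earth"
radius_m = 2.5362e7             # mean radius of Uranus (assumed value)

# Density = mass / volume of a sphere, converted from kg/m^3 to g/cm^3.
volume_m3 = (4.0 / 3.0) * math.pi * radius_m**3
density_g_cm3 = (mass_kg / volume_m3) / 1000.0
print(f"{density_g_cm3:.2f} g/cm^3")  # ~1.27
```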
Diagram of the interior of Uranus
The standard model of Uranus's structure is that it consists of three layers: a rocky (silicate/iron-nickel) core in the center, an icy mantle in the middle and an outer gaseous hydrogen/helium envelope. The core is relatively small, with a mass of only 0.55 Earth masses and a radius less than 20% of Uranus's; the mantle comprises the bulk of the planet, with around 13.4 Earth masses, while the upper atmosphere is relatively insubstantial, weighing about 0.5 Earth masses and extending for the last 20% of Uranus's radius. Uranus's core density is around 9 g/cm3, with a pressure in the center of 8 million bars (800 GPa) and a temperature of about 5000 K. The ice mantle is not in fact composed of ice in the conventional sense, but of a hot and dense fluid consisting of water, ammonia and other volatiles. This fluid, which has a high electrical conductivity, is sometimes called a water–ammonia ocean. The bulk compositions of Uranus and Neptune are very different from those of Jupiter and Saturn, with ice dominating over gases, hence justifying their separate classification as ice giants. There may be a layer of ionic water where the water molecules break down into a soup of hydrogen and oxygen ions, and deeper down superionic water in which the oxygen crystallises but the hydrogen ions move freely within the oxygen lattice.
While the model considered above is reasonably standard, it is not unique; other models also satisfy observations. For instance, if substantial amounts of hydrogen and rocky material are mixed in the ice mantle, the total mass of ices in the interior will be lower, and, correspondingly, the total mass of rocks and hydrogen will be higher. Presently available data does not allow science to determine which model is correct. The fluid interior structure of Uranus means that it has no solid surface. The gaseous atmosphere gradually transitions into the internal liquid layers. For the sake of convenience, a revolving oblate spheroid set at the point at which atmospheric pressure equals 1 bar (100 kPa) is conditionally designated as a "surface". It has equatorial and polar radii of 25 559 ± 4 and 24 973 ± 20 km, respectively. This surface will be used throughout this article as a zero point for altitudes.

Internal heat

Uranus's internal heat appears markedly lower than that of the other giant planets; in astronomical terms, it has a low thermal flux. Why Uranus's internal temperature is so low is still not understood. Neptune, which is Uranus's near twin in size and composition, radiates 2.61 times as much energy into space as it receives from the Sun. Uranus, by contrast, radiates hardly any excess heat at all. The total power radiated by Uranus in the far infrared (i.e. heat) part of the spectrum is 1.06 ± 0.08 times the solar energy absorbed in its atmosphere. In fact, Uranus's heat flux is only 0.042 ± 0.047 W/m2, which is lower than the internal heat flux of Earth of about 0.075 W/m2. The lowest temperature recorded in Uranus's tropopause is 49 K (–224 °C), making Uranus the coldest planet in the Solar System.
One of the hypotheses for this discrepancy suggests that when Uranus was hit by a supermassive impactor, which caused it to expel most of its primordial heat, it was left with a depleted core temperature. Another hypothesis is that some form of barrier exists in Uranus's upper layers which prevents the core's heat from reaching the surface. For example, convection may take place in a set of compositionally different layers, which may inhibit the upward heat transport.
PLANETARY RING
Uranus has a complicated planetary ring system, which was the second such system to be discovered in the Solar System, after Saturn's. The rings are composed of extremely dark particles, which vary in size from micrometers to a fraction of a meter. Thirteen distinct rings are presently known, the brightest being the ε ring. All except two rings of Uranus are extremely narrow—they are usually a few kilometres wide. The rings are probably quite young; dynamical considerations indicate that they did not form with Uranus. The matter in the rings may once have been part of a moon (or moons) that was shattered by high-speed impacts. Of the numerous pieces of debris that formed as a result of those impacts, only a few particles survived, in a limited number of stable zones corresponding to the present rings.
The Uranian ring system
William Herschel described a possible ring around Uranus in 1789. This sighting is generally considered doubtful, as the rings are quite faint, and in the two following centuries none were noted by other observers. Still, Herschel made an accurate description of the epsilon ring's size, its angle relative to the Earth, its red color, and its apparent changes as Uranus traveled around the Sun. The ring system was definitively discovered on March 10, 1977 by James L. Elliot, Edward W. Dunham, and Douglas J. Mink using the Kuiper Airborne Observatory. The discovery was serendipitous; they planned to use the occultation of the star SAO 158687 by Uranus to study the planet's atmosphere. When their observations were analyzed, they found that the star had disappeared briefly from view five times both before and after it disappeared behind the planet. They concluded that there must be a ring system around the planet. Later they detected four additional rings. The rings were directly imaged when Voyager 2 passed Uranus in 1986. Voyager 2 also discovered two additional faint rings, bringing the total number to eleven.
In December 2005, the Hubble Space Telescope detected a pair of previously unknown rings. The largest is located at twice the distance from the planet of the previously known rings. These new rings are so far from the planet that they are called the "outer" ring system. Hubble also spotted two small satellites, one of which, Mab, shares its orbit with the outermost newly discovered ring. The new rings bring the total number of Uranian rings to 13. In April 2006, images of the new rings with the Keck Observatory yielded the colours of the outer rings: the outermost is blue and the other red. One hypothesis concerning the outer ring's blue colour is that it is composed of minute particles of water ice from the surface of Mab that are small enough to scatter blue light. In contrast, the planet's inner rings appear grey.
MAGNETIC FIELD
Before the arrival of Voyager 2, no measurements of the Uranian magnetosphere had been taken, so its nature remained a mystery. Before 1986, astronomers had expected the magnetic field of Uranus to be in line with the solar wind, since it would then align with the planet's poles that lie in the ecliptic.
Voyager's observations revealed that the magnetic field is peculiar, both because it does not originate from the planet's geometric center, and because it is tilted at 59° from the axis of rotation. In fact the magnetic dipole is shifted from the center of the planet towards the south rotational pole by as much as one third of the planetary radius. This unusual geometry results in a highly asymmetric magnetosphere, where the magnetic field strength on the surface in the southern hemisphere can be as low as 0.1 gauss (10 µT), whereas in the northern hemisphere it can be as high as 1.1 gauss (110 µT). The average field at the surface is 0.23 gauss (23 µT). In comparison, the magnetic field of Earth is roughly as strong at either pole, and its "magnetic equator" is roughly parallel with its geographical equator. The dipole moment of Uranus is 50 times that of Earth. Neptune has a similarly displaced and tilted magnetic field, suggesting that this may be a common feature of ice giants. One hypothesis is that, unlike the magnetic fields of the terrestrial and gas giant planets, which are generated within their cores, the ice giants' magnetic fields are generated by motion at relatively shallow depths, for instance, in the water–ammonia ocean.
The magnetic field of Uranus as observed by Voyager 2 in
1986. S and N are magnetic south and north.
Despite its curious alignment, in other respects the Uranian magnetosphere is like those of other planets: it has a bow shock located at about 23 Uranian radii ahead of it, a magnetopause at 18 Uranian radii, a fully developed magnetotail and radiation belts. Overall, the structure of Uranus's magnetosphere is different from Jupiter's and more similar to Saturn's. Uranus's magnetotail trails behind the planet into space for millions of kilometers and is twisted by the planet's sideways rotation into a long corkscrew.
Uranus's magnetosphere contains charged particles: protons and electrons, with a small amount of H2+ ions. No heavier ions have been detected. Many of these particles probably derive from the hot atmospheric corona. The ion and electron energies can be as high as 4 and 1.2 megaelectronvolts, respectively. The density of low-energy (below 1 kiloelectronvolt) ions in the inner magnetosphere is about 2 cm−3. The particle population is strongly affected by the Uranian moons that sweep through the magnetosphere, leaving noticeable gaps. The particle flux is high enough to cause darkening or space weathering of the moons' surfaces on an astronomically rapid timescale of 100,000 years. This may be the cause of the uniformly dark colouration of the moons and rings. Uranus has relatively well developed aurorae, which are seen as bright arcs around both magnetic poles. Unlike Jupiter's, Uranus's aurorae seem to be insignificant for the energy balance of the planetary thermosphere.
MOONS
Uranus has 27 known natural satellites. The names for these satellites are chosen from characters in the works of Shakespeare and Alexander Pope. The five main satellites are Miranda, Ariel, Umbriel, Titania and Oberon. The Uranian satellite system is the least massive among those of the gas giants; indeed, the combined mass of the five major satellites would be less than half that of Triton alone. The largest of the satellites, Titania, has a radius of only 788.9 km, or less than half that of the Moon, but slightly more than Rhea, the second largest moon of Saturn, making Titania the eighth largest moon in the Solar System. The moons have relatively low albedos, ranging from 0.20 for Umbriel to 0.35 for Ariel (in green light). The moons are ice-rock conglomerates composed of roughly fifty percent ice and fifty percent rock. The ice may include ammonia and carbon dioxide.
The Uranus system (NACO/VLT image)
Among the satellites, Ariel appears to have the youngest surface with the fewest impact craters, while Umbriel's appears oldest. Miranda possesses fault canyons 20 kilometers deep, terraced layers, and a chaotic variation in surface ages and features. Miranda's past geologic activity is believed to have been driven by tidal heating at a time when its orbit was more eccentric than currently, probably as a result of a formerly present 3:1 orbital resonance with Umbriel. Extensional processes associated with upwelling diapirs are the likely origin of the moon's 'racetrack'-like coronae. Similarly, Ariel is believed to have once been held in a 4:1 resonance with Titania.

Major moons of Uranus in order of increasing distance (left to right), at
their proper relative sizes and albedos (collage of Voyager 2 photographs)
EXPLORATION
Crescent Uranus as imaged by Voyager 2 while
departing for Neptune
In 1986, NASA's Voyager 2 interplanetary probe encountered Uranus. This flyby remains the only investigation of the planet carried out from a short distance, and no other visits are currently planned. Launched in 1977, Voyager 2 made its closest approach to Uranus on January 24, 1986, coming within 81,500 kilometers of the planet's cloudtops, before continuing its journey to Neptune. Voyager 2 studied the structure and chemical composition of Uranus's atmosphere, including the planet's unique weather, caused by its axial tilt of 97.77°. It made the first detailed investigations of its five largest moons, and discovered 10 new moons. It examined all nine of the system's known rings, discovering two new ones. It also studied the magnetic field, its irregular structure, its tilt and its unique corkscrew magnetotail caused by Uranus's sideways orientation.
A Uranus orbiter and probe has been recommended by NASA's decadal survey; the proposal envisages a launch during 2020–2023 and a 13-year cruise to Uranus.