Saturday 1 October 2011

SECRETS OF AGING


Only 40 years ago it was widely believed that if you lived long enough, you would eventually experience serious cognitive decline, particularly with respect to memory. The implication was that achieving an advanced age was effectively equivalent to becoming senile—a word that implies mental defects or illness. As a graduate student back then, I was curious why such conclusions were being drawn about the elderly. I had only to look as far as my own great-grandmother and great-aunt to begin questioning the generalization. They lived to 102 and 93, respectively, and were exceptionally active and quick-witted enough to keep us twentysomethings on our toes. A closer look at the literature didn’t give me any confidence that either the biological basis of memory or how it might change with age was well understood. Many discoveries made in the years since have given us better tools to study memory storage, resulting in a major shift away from the view of “aging as a disease” and towards the view of “aging as a risk factor” for neurological diseases. So why do some people age gracefully, exhibiting relatively minor—and at worst annoying—cognitive changes, while others manifest significant and disabling memory decline? Answers to these questions are fundamental for understanding both how to prevent disease and how to promote quality of life.
A test for normal aging
Cognitive decline at older ages is marked by gradual loss in the ability to retain new information or to recall the past. Although many other changes occur during the aging process—impairment in range of motion, speed of gait, acuity of vision and hearing—none of these factors are as central to one’s personal identity as are one’s cumulative experiences. For me, it seemed that understanding memory, and how it changes with age or disease, was the key to understanding the aging brain.

When I was in graduate school, in 1973, two papers came out that would change the shape of neuroscience and memory research. Terje Lømo, Tim Bliss, and Tony Gardner-Medwin devised a method that could experimentally alter synaptic strength. Earlier theoretical ideas had implied that strengthening the connection between neurons might explain how memories were formed: the stronger the connection, the better a message could be relayed or recalled. As electrophysiologists, Lømo and colleagues tested this idea by sending electrical currents through a group of neurons in the rabbit hippocampus in patterns that mimicked normal electrical activity, and measuring how the connections between the neurons changed. By stimulating neurons, they could create a durable increase in synaptic strength, which was later dubbed long-term potentiation, or LTP. But more than just an intriguing physiological observation, the research presented a fully controllable experimental proxy for learning that made it possible to study memory across the lifespan of an animal for the first time.
Thus, with the discovery of LTP, and the idea that animal models of human memory could be developed, I sought to finish my dissertation research in a lab that was routinely conducting LTP experiments. At the time, there were only three labs in the world that were doing these types of experiments: Graham Goddard’s laboratory at Dalhousie University, Tim Bliss’s at University College London, and Gary Lynch’s at UC Irvine. As it turned out, my thesis advisor Peter Fried, at Carleton University in Ottawa, had been one of Goddard’s students, and introduced me to him by saying, “I don’t really want you to steal my student, but she has this idea that she thinks you might be interested in.” I explained to Goddard that I wanted to track an animal’s ability to make and keep spatial memories as it aged. Since the hippocampus—the area of the brain where LTP was first discovered—was also involved in spatial memory, I wanted to use Goddard’s setup and surgically implant electrodes that could measure LTP in awake, freely behaving rats. In an act of faith and generosity, Goddard soon purchased animals for me and began to “age” them in anticipation of my arrival the following year.
The behavioral tests used to study spatial memory at this time all used somewhat severe methods to motivate animals: either with shock—a significant stress—or by restricting eating or drinking, both of which were potentially detrimental to survival. Because the precious aged animals could not be replaced without waiting two years to grow another “old” batch, I developed a novel, milder task that is now often referred to as the Barnes maze. With rodents’ natural aversion to open, well-lit areas in mind, I designed a circular platform with many openings around its circumference, only one of which would allow the animal to escape into a dark box beneath the platform’s surface.
At 2–3 years old (an age equivalent to about 70–80 human years), the rats were old enough to begin the behavioral and electrophysiological experiments in the latter part of 1975. The animals were permanently implanted with electrodes that could both stimulate the neurons and record their activity. The electrodes allowed us to measure baseline synaptic transmission with single stimuli, then to induce LTP, and finally to monitor its decay over several weeks. We found that LTP decayed about twice as fast in the old rats as it did in the young, with the “synaptic memory” lasting 20 days rather than 40. Most importantly, the durability of LTP was correlated with the memory of the correct hole on the circular platform task. In fact, by combining behavioral and electrophysiological techniques, we produced the first demonstration that instead of acting as a mere proxy for learning, LTP might actually represent the cellular mechanism by which all memories are created.1
Just one year before I published my work on aging rats, Bruce McNaughton, then at Dalhousie University, demonstrated that for synaptic strengthening to occur, synapses from several neurons needed to converge and be coactive on the target neuron. The finding made perfect sense, since learning often comes from the association of two or more pieces of information, each of which could be carried by an individual neuron. Later experiments also demonstrated that under some conditions, LTP can be more difficult to induce in aged rats, and conversely, that a reduction in synaptic strength—called long-term depression or LTD—is easier to induce in the hippocampus of old rats.2 This may mean that LTD participates in “erasing” LTP, or reducing its longevity, and thus possibly increases forgetting.
By the mid-1980s it was clear that even in normal aging there are subtle changes in the biological processes that underlie memory, and in the relationship between the strength of memory and the strength of the synapses. But the real message of these experiments was that old animals can and do learn. None of the older animals exhibited what could behaviorally be considered “dementia,” nor had the biological mechanisms that lay down memory traces become completely dysfunctional.

Aging networks

While there is much to be learned about the physiology of neural systems by direct artificial stimulation, even the mildest currents do not exactly mimic the selective and distributed activity of cells in normal behavioral states. To more realistically understand the aging brain, it is necessary to monitor cell circuits driven by natural behaviors. Around the time that LTP was discovered, John O’Keefe at University College London discovered “place cells” in the hippocampus—a major breakthrough for the field. By simply placing small microwires very close to hippocampal cells, without stimulating them, John and his student Jonathan Dostrovsky noted that individual hippocampal cells discharge action potentials only when a rat is at select physical locations as it traverses a given environment.3 The unique property of these place cells is that their “receptive fields,” or the features that make each of these cells fire, are related to specific regions of space.

Infographic: Lost in Space
By recording from many individual place cells at once, using a multipronged electrode (the tetrode) developed by McNaughton, and determining which physical locations activated different cells in the brain, it was possible to visualize how the hippocampus constructs a “cognitive map” of the surroundings. If young rats were given one maze-running session in the morning, and another session in the afternoon, the same hippocampal map was retrieved at both time points—suggesting that the pattern of neuronal firing represented the memory of their walking through the rectangular, figure-eight maze. What surprised us was that about 30 percent of the time, when old rats went back for their second run of the day, a different place-cell distribution would appear—as if the older animals were retrieving the wrong hippocampal map.4 Certainly a rat navigating a familiar space with the wrong map is likely to appear lost, or as though it had forgotten its way. But why was the rat retrieving the wrong map?
A possible answer to this question was published in 1998 by Cliff Kentros, now at the University of Oregon. In young rats, Cliff pharmacologically blocked the NMDA receptor, a critical gateway regulating LTP and synaptic strengthening. When he and colleagues used a dose large enough to block the formation of LTP, the cognitive maps of the young rats began to degrade over time in much the same way as observed in aged rats, suggesting that faulty LTP mechanisms may be responsible for map instability in aging.5

Connecting the hippocampal dots

Even though the new multiple-tetrode recording method was a large advance over recording one or two neurons at a time, it was still constrained to sample sizes of ~100 to 200 cells.
One recent advance has allowed us to monitor cell activity on a broader scale, not with electrodes, but by tracking genes that are rapidly expressed during behavior. By monitoring the expression of one such gene, Arc, during behavioral tests, John Guzowski in my lab developed a method (the ‘catFISH’ method) that could report on activity in hundreds of thousands of cells across the brain.6 This large-scale imaging of single cells has been critical for identifying which cells, within which memory circuits, are altered during normal aging.
This technique also allowed us to tease apart which cells might be more susceptible to decline with age. Unlike electrodes that record LTP, which cannot differentiate between different types of neurons, the catFISH method allows us to distinguish between the three primary cell groups within the hippocampus—the granule cells of the dentate gyrus and the pyramidal cells of regions CA1 and CA3. We were able to show that cell-specific gene expression (and therefore cell activity) did not change with age in CA1 and CA3 cell regions, but that the number of granule cells engaged during behavior showed a continuous decline in aging rats, suggesting these cells are a weak link in the aging rat hippocampus.7 Could this also be true in primates?
Scott Small at Columbia University Medical Center helped me answer this question in young and old monkeys. Although MRI methods do not have single-cell resolution, they can provide an indirect gauge of activity over large segments of the brain. Using a variation of standard MRI that measures regional cerebral blood volume (CBV), we monitored the resting brain activity of monkeys ranging in age from 9 to 27 years (equivalent to 27 to 87 human years), and could then relate this brain activation to an individual animal’s performance in memory tests. There were no differences in resting metabolic measures in the areas of the brain that send most of the signals into or out of the hippocampus, nor were there differences in brain activity within CA1 or CA3. But the old monkeys did show reduced activity in the dentate gyrus, similar to what we had found in aging rats. Critically, we observed that the lower the activity in the dentate gyrus, the poorer the memory.7
Scott had observed a similar pattern in an earlier human aging study. However, with human studies there is always a concern that some of the people we assume are aging normally in fact have the undiagnosed beginnings of human-specific neurological disease, such as Alzheimer’s. Confirming the observation in aging animals, which do not spontaneously get this disease, provides strong evidence that the dentate gyrus is a major player in the changes that occur in normally aging mammals.
Furthermore, these data refute the contention that aging is effectively equivalent to an aberrant process like Alzheimer’s disease, revealing that it is in fact a distinctly different neurological process. In contrast to the results showing that aging primarily affects the dentate gyrus, the granule cells of this brain region in Alzheimer’s patients do not show changes (with respect to age-matched controls) until very late in the illness. Instead, it is the cells in CA1 and in the entorhinal cortex that are dramatically affected. Thus, while aging and Alzheimer’s may in some cases be “superimposed,” there is not a simple linear trajectory that leads us all to end up with the disease.

Memory genes


Infographic: The Seat of Memory
Presently, the omics revolution is leading us closer to an understanding of individual differences in aging and cognition. One example with respect to memory is a study that used a genome-wide association approach in 351 young adults to identify the single nucleotide polymorphisms (SNPs) most strongly related to memory. The most significant SNP identified was a simple nucleotide base change in the normal cytosine-thymine sequence at a specific location in the KIBRA gene, which encodes a neural protein that had been suspected of playing a role in memory. Those who had the thymine-thymine SNP had the best memories.8 This was further confirmed in an independent elderly population,9 supporting the view that this particular SNP in the KIBRA gene predisposes people of any age to have “better memory.” This could be another clue to help guide our understanding of the molecular cascades that support good memory, as the activity of the KIBRA protein may influence mechanisms related to LTP.

Could there be a link between KIBRA and age-related memory decline? One way to test this hypothesis was to find a way to manipulate part of the molecular circuit in which KIBRA is involved. Performing just such an experiment in collaboration with my lab, Matt Huentelman at the Translational Genomics Research Institute in Arizona identified a promising compound, hydroxyfasudil, a metabolite of a drug already approved for use in the treatment of chest pain, or angina. As predicted, hydroxyfasudil improved spatial learning and working memory in aged rats.10 The clinical applications of the drug’s effects on learning and memory are currently being explored.
David Sweatt at the University of Alabama at Birmingham also recently documented DNA methylation changes in response to learning, and was very interested in collaborating with us to determine whether changes in this epigenetic process during aging might explain our data on Arc expression, which is related to LTP formation and learning. Methylation of DNA typically downregulates transcription. After young and old rats performed exploratory behaviors that engaged the hippocampus, the methylation state of the Arc gene in hippocampal cells was, in fact, reduced in both age groups (which allowed more of the Arc mRNA to be transcribed). The difference between young and old animals was that certain cells of older animals, such as hippocampal granule cells, had higher overall methylation levels in the resting state, which resulted in production of less Arc protein during behavior, and the diminished ability to alter synapses via a mechanism such as LTP.11 There are many complex facets of these data, but the results have led us and others to hypothesize that there may well be a number of epigenetic marks that accumulate during aging, and that epigenetic dysregulation may be a fundamental contributor to normal age-related cognitive decline. While the details of exactly how aging affects epigenetic modifications remain to be elucidated, it is at least a reasonable guess that understanding these mechanisms will be critical for future experimental designs aimed at optimizing cognition across the life span. Luckily, these processes are also amenable to pharmacological manipulation.

Aging in the future

Looking back on the rather grim expectations concerning memory and the elderly that were held only a few decades ago, the vision today is very different and much more positive. There are many who live to very old ages with minimal cognitive decline—and certainly no dementia. One particularly interesting study in this regard followed individuals who were 100 years of age (centenarians) at the beginning of the study until the time of their death, monitoring cognitive function and other factors in the “oldest old.” Interestingly, 73 percent of these centenarians were dementia free at the end of their lives (the oldest reaching an age of 111 years).12 Watching the remarkable discoveries in biology over the past half century, one cannot help but look with excitement towards the next groundbreaking findings that are surely in the making. The future holds great promise for the once remote dream of understanding the core biological processes required for optimal cognitive health during aging—and progress in this regard should also provide the needed backdrop for understanding and preventing the complex neurological diseases that can be superimposed on the aging brain.

Q&A: Overhaul the funding system



It’s a perennial complaint among scientists: grant writing takes up too much time. “I think that there’s pretty much wide agreement that scientists have to spend a lot of time to write proposals, to review proposals, to write progress reports, final reports, and do lots of things that are not necessarily contributing to science,” said John Ioannidis, a professor at Stanford University’s School of Medicine.
But it isn’t just time that’s spent unwisely, but billions and billions of dollars that could be allocated in smarter ways, Ioannidis wrote in a comment in today’s Nature. The Scientist spoke with Ioannidis about his ideas to fix science funding in the United States.
The Scientist: What are the current problems with the way science is funded in America?
John Ioannidis: I think that the way that the funding system works, people have to promise something that is exaggerated. They have to compete against others who are promising something that would be spectacular, and therefore they need to make a huge claim that often they may not be able to deliver, just because of the odds of science.
Or they have to promise something that would be highly predictable. It’s something that they may have already done or that they know what the answer is going to be… So in a sense I think the current system is almost destroying the potential for innovative ideas and in many cases it even fosters mediocrity.
TS: What’s the best way to improve upon the current funding paradigm?
JI: What I advocated in that comment was we need to have some studies to directly test, ideally in a randomized fashion, whether one strategy performs better than another. One could think about pilot projects where you have consenting scientists who say I’m willing to be randomized to funding scheme A versus funding scheme B. A could be lottery allocation, for example, and B could be merit-based. If we run these types of studies, in a few years we can see what these scientists have done in the short term. Maybe in the longer term we can also see whether their research really made a difference. I’m in favor of experimenting, because we’re spending billions and billions of dollars and we’re doing that with no good evidence really.

John Ioannidis, Stanford University School of Medicine
TS: In your proposal for funding scientists according to merit, you mention independent indices to measure a proposal’s worth. What are some examples?
JI: I think that publications and citations should not be undervalued. They could play a very important role in that assessment. But maybe combining indices and paying attention to quality aspects other than quantity could be informative. One could think of merging indices that exclude self-citation or take into account the impact of papers rather than the number of papers and adjust for co-authorship. There are indices to do that, and I think they can be objective if you combine them.
On top of this we have the opportunity to build additional information into the profile of an investigator based on scientific citizenship practices. Things like sharing of data or protocols, how often these data and protocols are utilized by other researchers, and how much they contribute to other aspects of improving science.
I think it’s an opportunity: if we really feel that some practices are essential for science, and we want to improve these practices, then tying them to the funding mechanism is really the prime way to make them happen.
TS: In this kind of scheme it seems like weight might be given to more established scientists. What are ways to help younger faculty who don’t have such a track record?
JI: I would think that this is actually a problem of the current system, rather than any proposal to create something new. The average age for an investigator to get his first R01 currently in the States is about 40 or 41 years old. People are going through 15 years of being active researchers, and they still don’t have independence. So I think that a system that is based on merit could really get younger scientists on board much faster than the current system.
Moreover, for example, if you use citation indices, you can always adjust for the number of years that someone has been active. And there’s some evidence that the track record of some individuals could be identified from fairly early in their careers.
I would also think that for young investigators it’s probably okay to move closer to the possibility of an egalitarian sharing, so give some opportunity to lots of people early on for a few years and see what they can do. They will build a track record within a few years, and then you can start having some impact measures or citizenship practice measures.
TS: How do we balance the interests of funding sure-bet science or translational research versus very risky or basic science?
JI: This is not an easy question. I think people have a different perception of what would be the appropriate allocation between these two strategies. It has to be a common consensus in the scientific community about what we really want to do. Currently I think innovative research is undervalued and not given enough opportunities to succeed.


Missegregation linked to DNA Damage


Tumor formation may require fewer steps to get started than previously thought, according to a new study that shows how chromosome instability (CIN) and DNA damage—two tumorigenesis triggers typically considered independent phenomena—can arise from a single defect in how chromosomes segregate during cell division.
“This paper really provides a link between the mechanism behind CIN and the mechanism underlying chromosome damage,” said Dartmouth University biochemist Duane Compton, who was not involved in the research. Prior to this study, most researchers were not investigating how these two phenomena might be related, he added.
The results, published today (September 29) in Science, hint at new avenues for developing anti-cancer therapeutics that target chromosome missegregation as a central event in the development of both abnormal chromosome number and structural DNA damage, two precursors of tumorigenesis.
Lead author Aniek Janssen, a graduate student at University Medical Center (UMC) Utrecht, in The Netherlands, got the idea to study the relationship between CIN and DNA damage after her mentors, cell and molecular biologists Rene Medema and Geert Kops, noticed that most tumor cells exhibited both aneuploidy (abnormal chromosome number) and structural DNA damage. To divide normally, cells must ensure that their chromosomes properly segregate, with each daughter cell receiving a copy of the duplicated chromosomes. Aneuploidy results from missegregation of chromosomes—when one daughter cell receives two copies of a specific chromosome, and the other receives none. If a missing chromosome contained a gene encoding a protein important for DNA repair, chromosome loss might lead indirectly to DNA damage, explained Kops. In later divisions, a cell missing this protein would not be able to repair any broken DNA, which tends to accumulate with more replication and cell division. But Kops and Medema noticed that DNA damage sometimes occurred within a single division after chromosome missegregation, before the daughter cells would need to start synthesizing new DNA repair proteins, leading them to hypothesize that missegregation was leading directly to damaged DNA.
To cause aneuploidy in dividing human retinal pigment epithelial cells, Janssen treated them with a chemical that causes spindle fibers to connect a single centromere to both poles of the cell, so that the fibers wouldn’t properly split the paired chromosomes. Within hours after division occurred, real-time videos tracking fluorescent staining for DNA damage showed a missegregation event correlated with higher levels of DNA damage.
“The strongest evidence for associating CIN with DNA damage is the live imaging,” said Compton. Janssen and her colleagues also showed that double-stranded DNA breaks occurred, suggesting that the chromosomes were breaking into smaller pieces.
Blocking cytokinesis, the physical division of cells where the membrane pinches in and daughter cells separate, reduced the occurrence of the DNA damage, suggesting that the damage occurs early in the process, Kops said. Though how this leads to DNA breaks is unclear, the damage resulted in structural aberrations including chromosomal translocation, a genetic abnormality often seen in cancer cells.
DNA damage is classically thought of as being the result of mechanisms other than missegregation, said Medema. A defect in DNA repair genes would lead to improper mending of breaks, leading to mutations that might result in cancer.
“But our study shows that cells can get translocations just by being chromosome unstable,” said Medema. The pieces of broken chromosomes could segregate into daughter cells, and be grafted onto a different chromosome via normal repair mechanisms—resulting in a translocation even with adequate DNA repair machinery.
The study’s results should prompt reevaluation of the field’s understanding of how the chromosomal aberrations that underlie tumorigenesis occur, said Medema. He explained that the work, which was supported by Top Institute Pharma, and included partners at Merck Sharp & Dohme (MSD), the Utrecht University Medical Center, and The Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, could inform strategies for designing new therapies. Enhancing genomic damage to induce cell death is one therapy currently being pursued, and this study suggests that it might be achievable by targeting the mechanism behind missegregation alone.

Longer Cosmic ruler based on Black Hole


Astronomers have a new gadget in their cosmic toolbox that is capable of measuring distances to very faraway objects. The method uses active galactic nuclei, the bright, violent regions at the centers of galaxies, to gauge distances farther than existing cosmic yardsticks can reach.
Having such an odometer is crucial for understanding how space, time and matter behave at the scale of the whole universe, and could help solve mysteries such as the nature of the dark energy that is accelerating the expansion of the universe.
For four decades, astronomers have been trying to turn these luminous beacons into cosmic mile markers.  Now, scientists at the University of Copenhagen’s Dark Cosmology Centre and their collaborators think they’ve got it worked out. The brightness of an active nucleus is tightly related to the radius of a region of hot gases surrounding the central black hole. When scientists determine that radius, they can predict how intrinsically bright the nucleus should be — and compare that value to how bright it appears, which depends on distance. Astronomers call objects whose properties can be used to predict their actual brightness standard candles.
“It’s the radius-luminosity relationship that allows us to assume that these active galactic nuclei are standard candles,” says postdoctoral fellow Kelly Denney of the Dark Cosmology Centre, an author of the study, which will appear in the Astrophysical Journal.
The stable of standard candles already houses type Ia supernovas and Cepheid variable stars, which have predictable luminosities but are good only for measuring distances to objects present when the universe was nearly 4 billion years old. Active galactic nuclei would extend that capability to objects at distances corresponding to when the universe was only 1.5 billion years old.
“Right now we rely so much on supernovas, it would be really nice to have independent verification of cosmological parameters,” says astrophysicist Bradley Peterson of Ohio State University.  “I’m really excited about this result.”
A technique called reverberation mapping measures how long it takes photons being kicked out of the black hole’s immediate neighborhood to reappear after they’ve traversed the hot, gassy maelstrom surrounding the black hole. Because light travels at a constant speed, astronomers can determine the gassy region’s radius. Then, the luminosity of the active galactic nucleus can be calculated.
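In rough outline (a simplified sketch of the logic just described, writing τ for the measured light-travel delay, F for the apparent brightness, or flux, and L for the inferred luminosity), the delay gives the radius of the gassy region, the radius–luminosity relationship converts that radius into an intrinsic luminosity, and the inverse-square law then turns the comparison of L with F into a distance:

\[ R \approx c\,\tau, \qquad L \approx L(R) \ \text{(from the radius–luminosity relation)}, \qquad d \approx \sqrt{\frac{L}{4\pi F}}. \]

The actual calibration reported in the study is more involved than this back-of-the-envelope version, but the chain of reasoning is the same.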
Until now, tightening the relationship between radius and luminosity has been tricky. Among other reasons, starlight from a host galaxy contaminated the brightness measurements of its active nucleus. But the team had in hand data from astrophysicist Misty Bentz of Georgia State University that corrected for the effects of surrounding starlight, in addition to Denney’s own precise measurements of radii.
“I think this paper is really clever,” Bentz says. “But it needs a few improvements before it can be comparable to supernovae and stuff like that.”
Denney says the team is planning on calibrating the method using observations of additional galaxies and — if given access to the Hubble Space Telescope — measurements produced by Cepheid variable stars.
Peterson says the method will probably take a decade to catch on within the astrophysics community. It will take that long to expand the dataset, add more distance calibrations, and reduce some of the noise in the data.
But the team is optimistic that active galactic nuclei will become an accepted distance measure, since they are more numerous and more easily observable than existing standard candles.  “It could end up being one very large rung on the distance ladder,” Bentz says. “It would put everything on the same scale, instead of using one thing to calibrate something else that’s used to calibrate something else.… There’s less potential for something to go wrong.”

VENUS


Venus is the second planet from the Sun, orbiting it every 224.7 Earth days. The planet is named after Venus, the Roman goddess of love and beauty. After the Moon, it is the brightest natural object in the night sky, reaching an apparent magnitude of −4.6, bright enough to cast shadows. Because Venus is an inferior planet, orbiting closer to the Sun than Earth does, it never appears to venture far from the Sun: its elongation reaches a maximum of 47.8°. Venus reaches its maximum brightness shortly before sunrise or shortly after sunset, for which reason it has been known as the Morning Star or Evening Star.
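That maximum elongation can be roughly checked from the geometry of the two orbits (assuming, for simplicity, that both are circular and taking Venus's mean distance from the Sun as 0.72 AU):

\[ \sin\theta_{\max} \approx \frac{a_{\mathrm{Venus}}}{a_{\mathrm{Earth}}} \approx \frac{0.72\ \mathrm{AU}}{1\ \mathrm{AU}} \;\Rightarrow\; \theta_{\max} \approx 46^{\circ}, \]

close to the quoted 47.8°; the small difference arises because neither orbit is perfectly circular.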
Venus is classified as a terrestrial planet and it is sometimes called Earth's "sister planet" due to the similar size, gravity, and bulk composition. Venus is covered with an opaque layer of highly reflective clouds of sulfuric acid, preventing its surface from being seen from space in visible light. Venus has the densest atmosphere of all the terrestrial planets in the Solar System, consisting mostly of carbon dioxide. Venus has no carbon cycle to lock carbon back into rocks and surface features, nor does it seem to have any organic life to absorb it in biomass. Venus is believed to have previously possessed Earth-like oceans, but these evaporated as the temperature rose. Venus's surface is a dusty dry desertscape with many slab-like rocks, periodically refreshed by volcanism. The water has most likely dissociated, and, because of the lack of a planetary magnetic field, the hydrogen has been swept into interplanetary space by the solar wind. The atmospheric pressure at the planet's surface is 92 times that of the Earth.

The Venusian surface was a subject of speculation until some of its secrets were revealed by planetary science in the twentieth century. It was finally mapped in detail by Project Magellan in 1990–91. The ground shows evidence of extensive volcanism, and the sulfur in the atmosphere may indicate that there have been some recent eruptions. The absence of evidence of lava flow accompanying any of the visible calderas remains an enigma. The planet has few impact craters, demonstrating that the surface is relatively young, approximately 300–600 million years old. There is no evidence for plate tectonics, possibly because its crust is too strong to subduct without water to make it less viscous. Instead, Venus may lose its internal heat in periodic major resurfacing events.
PHYSICAL CHARACTERISTICS
Venus is one of the four terrestrial planets in the Solar System, meaning that, like the Earth, it is a rocky body. In size and mass, it is very similar to the Earth, and is often described as Earth's "sister" or "twin". The diameter of Venus is only 650 km less than the Earth's, and its mass is 81.5% of the Earth's. Conditions on the Venusian surface differ radically from those on Earth, due to its dense carbon dioxide atmosphere. The atmosphere of Venus is 96.5% carbon dioxide by mass, with most of the remaining 3.5% being nitrogen.
INTERNAL STRUCTURE
Without seismic data or knowledge of its moment of inertia, there is little direct information about the internal structure and geochemistry of Venus. The similarity in size and density between Venus and Earth suggests that they share a similar internal structure: a core, mantle, and crust. Like that of Earth, the Venusian core is at least partially liquid because the two planets have been cooling at about the same rate. The slightly smaller size of Venus suggests that pressures are significantly lower in its deep interior than in Earth's. The principal difference between the two planets is the lack of plate tectonics on Venus, likely due to the dry surface and mantle. This results in reduced heat loss from the planet, preventing it from cooling and providing a likely explanation for its lack of an internally generated magnetic field.
GEOGRAPHY
Map of Venus, showing the elevated "continents" in yellow: Ishtar Terra at the top and Aphrodite Terra just below the equator to the right.
About 80% of the Venusian surface is covered by smooth volcanic plains, consisting of 70% plains with wrinkle ridges and 10% smooth or lobate plains. Two highland "continents" make up the rest of its surface area, one lying in the planet's northern hemisphere and the other just south of the equator. The northern continent is called Ishtar Terra, after Ishtar, the Babylonian goddess of love, and is about the size of Australia. Maxwell Montes, the highest mountain on Venus, lies on Ishtar Terra. Its peak is 11 km above the Venusian average surface elevation. The southern continent is called Aphrodite Terra, after the Greek goddess of love, and is the larger of the two highland regions at roughly the size of South America. A network of fractures and faults covers much of this area. As well as the impact craters, mountains, and valleys commonly found on rocky planets, Venus has a number of unique surface features. Among these are flat-topped volcanic features called farra, which look somewhat like pancakes and range in size from 20–50 km across and 100–1,000 m high; radial, star-like fracture systems called novae; features with both radial and concentric fractures resembling spider webs, known as arachnoids; and coronae, circular rings of fractures sometimes surrounded by a depression. These features are volcanic in origin.
Most Venusian surface features are named after historical and mythological women. Exceptions are Maxwell Montes, named after James Clerk Maxwell, and the highland regions Alpha Regio, Beta Regio and Ovda Regio. The former three features were named before the current system was adopted by the International Astronomical Union, the body that oversees planetary nomenclature.
The longitudes of physical features on Venus are expressed relative to its prime meridian. The original prime meridian passed through the radar-bright spot at the center of the oval feature Eve, located south of Alpha Regio. After the Venera missions were completed, the prime meridian was redefined to pass through the central peak in the crater Ariadne.

Friday 30 September 2011

VOLCANOES

Most volcanoes occur at the edges of tectonic plates, where magma can rise to the surface. A volcano’s shape depends mainly on the viscosity of the lava, the shape of the vent, the amount of ash, and the frequency and size of the eruptions. In fissure volcanoes, fluid lava erupts through a long crack, forming lava plateaux or plains. In cone-shaped volcanoes, the more viscous the lava, the steeper the cone. Some cones fall in on themselves or are exploded outwards, forming calderas.

Cross-section through a stratovolcano (vertical scale is exaggerated):
1. Large magma chamber
2. Bedrock
3. Conduit (pipe)
4. Base
5. Sill
6. Dike
7. Layers of ash emitted by the volcano
8. Flank
9. Layers of lava emitted by the volcano
10. Throat
11. Parasitic cone
12. Lava flow
13. Vent
14. Crater
15. Ash cloud
Conical Mount Fuji in Japan, at sunrise from Lake Kawaguchi (2005)
Mayon, a near-perfect stratovolcano in the Philippines
Pinatubo ash plume reaching a height of 19 km, three days before the climactic eruption of 15 June 1991





VERTEBRA

Divisions of spinal segments
There are three major types of vertebra in the human spine: lumbar, thoracic, and cervical. Lumbar vertebrae support a major part of the body’s weight and so are comparatively large and strong. Cervical vertebrae support the head and neck.


IN HUMANS
There are normally thirty-three (33) vertebrae in humans, including the five that are fused to form the sacrum (the others are separated by intervertebral discs) and the four coccygeal bones that form the tailbone. The upper three regions comprise the remaining 24, and are grouped under the names cervical (7 vertebrae), thoracic (12 vertebrae) and lumbar (5 vertebrae), according to the regions they occupy. This number is sometimes increased by an additional vertebra in one region, or it may be diminished in one region, the deficiency often being supplied by an additional vertebra in another. The number of cervical vertebrae is, however, very rarely increased or diminished.
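As a quick tally of those counts:

\[ 7 + 12 + 5 = 24 \ \text{movable vertebrae}, \qquad 24 + 5\ (\text{sacral}) + 4\ (\text{coccygeal}) = 33. \]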
Oblique view of cervical vertebrae
With the exception of the first and second cervical, the true or movable vertebrae (the upper three regions) present certain common characteristics that are best studied by examining one from the middle of the thoracic region.