Saturday, November 21, 2009

WIND ENERGY

The Earth is unevenly heated by the sun: the poles receive less solar energy than the equator, and dry land heats up (and cools down) more quickly than the seas do. This differential heating drives a global atmospheric convection system reaching from the Earth's surface to the stratosphere, which acts as a virtual ceiling. Most of the energy stored in these wind movements is found at high altitudes, where continuous wind speeds of over 160 km/h (99 mph) occur. Eventually, the wind energy is converted through friction into diffuse heat throughout the Earth's surface and the atmosphere.
The total amount of economically extractable power available from the wind is considerably more than present human power use from all sources. An estimated 72 TW of wind power on the Earth could potentially be commercially viable, compared with about 15 TW of average global power consumption from all sources in 2005. Not all the energy of the wind flowing past a given point can be recovered (see Betz' law).
The strength of wind varies, and an average value for a given location does not alone indicate the amount of energy a wind turbine could produce there. To assess the frequency of wind speeds at a particular location, a probability distribution function is often fitted to the observed data. Different locations have different wind speed distributions. The Weibull model closely mirrors the actual distribution of hourly wind speeds at many locations. The Weibull shape parameter is often close to 2, so a Rayleigh distribution can be used as a simpler, though less accurate, model.
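To see why an average speed alone understates the resource, one can model hourly speeds with a Rayleigh distribution (a Weibull with shape parameter 2) and compare the mean of the cubed speed, with which wind power scales, against the cube of the mean. The Python sketch below assumes an illustrative 6 m/s mean speed; it is a toy model, not a site assessment.

# Toy model: Rayleigh (Weibull k=2) wind speeds at an assumed 6 m/s mean.
import numpy as np
from math import gamma

mean_speed = 6.0                          # assumed mean wind speed, m/s
k = 2.0                                   # Weibull shape parameter (Rayleigh)
scale = mean_speed / gamma(1 + 1/k)       # Weibull mean = scale * Gamma(1 + 1/k)

rng = np.random.default_rng(0)
v = scale * rng.weibull(k, size=100_000)  # simulated hourly wind speeds, m/s

print(f"mean speed     : {v.mean():.2f} m/s")
print(f"mean of v^3    : {np.mean(v**3):.0f}  (wind power scales with this)")
print(f"(mean speed)^3 : {v.mean()**3:.0f}  (a naive estimate)")

The mean of v^3 comes out roughly twice the cube of the mean speed, which is why the shape of the distribution, not just its average, matters.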
Because so much power is generated by higher wind speeds, much of the energy arrives in short bursts. The 2002 Lee Ranch sample is telling: half of the energy available arrived in just 15% of the operating time. The consequence is that wind energy from a particular turbine or wind farm does not have as consistent an output as fuel-fired power plants; utilities that use wind power must keep existing generation ready to start for times when the wind is weak, so wind power is primarily a fuel saver rather than a capacity saver. Making wind power more consistent requires that various existing technologies and methods be extended, in particular the use of stronger inter-regional transmission lines to link widely distributed wind farms. Problems of variability are also addressed by grid energy storage, batteries, pumped-storage hydroelectricity, and energy demand management.
Since wind speed is not constant, a wind farm's annual energy production is never as much as the sum of the generator nameplate ratings multiplied by the total hours in a year. The ratio of actual productivity in a year to this theoretical maximum is called the capacity factor. Typical capacity factors are 20–40%, with values at the upper end of the range in particularly favourable sites. For example, a 1 MW turbine with a capacity factor of 35% will not produce 8,760 MWh in a year (1 × 24 × 365), but only 1 × 0.35 × 24 × 365 = 3,066 MWh, averaging to 0.35 MW. Online data are available for some locations, and the capacity factor can be calculated from the yearly output.
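The arithmetic of the example is easy to check; here is a minimal Python sketch using the same figures (a 1 MW rating and a 35% capacity factor):

# Capacity-factor check for the 1 MW, 35% example above.
rated_mw = 1.0
capacity_factor = 0.35
hours_per_year = 24 * 365                       # 8,760 hours

theoretical_max_mwh = rated_mw * hours_per_year
actual_mwh = capacity_factor * theoretical_max_mwh

print(f"theoretical maximum : {theoretical_max_mwh:,.0f} MWh")       # 8,760
print(f"expected production : {actual_mwh:,.0f} MWh")                # 3,066
print(f"average output      : {actual_mwh / hours_per_year:.2f} MW") # 0.35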
Unlike fueled generating plants, wind farms have a capacity factor limited by the inherent properties of the wind. Capacity factors of other types of power plant are based mostly on fuel cost, with a small amount of downtime for maintenance. Nuclear plants have low incremental fuel cost, and so are run at full output, achieving a capacity factor around 90%. Plants with higher fuel cost are throttled back to follow load. Gas turbine plants burning natural gas may be expensive to operate, and may be run only to meet peak power demand; a gas turbine plant may therefore have an annual capacity factor of only 5–25% due to its relatively high energy production cost.
According to a 2007 Stanford University study published in the Journal of Applied Meteorology and Climatology, interconnecting ten or more wind farms can allow an average of 33% of the total energy produced to be used as reliable, baseload electric power, as long as minimum criteria are met for wind speed and turbine height.

Friday, November 20, 2009

HEALTH AND NUTRITION

Nutrition (also called nourishment or aliment) is the provision, to cells and organisms, of the materials necessary (in the form of food) to support life. Many common health problems can be prevented or alleviated with a healthy diet.
The diet of an organism is what it eats, and is largely determined by the perceived palatability of foods. Dietitians are health professionals who specialize in human nutrition, meal planning, economics, and preparation. They are trained to provide safe, evidence-based dietary advice and management to individuals (in health and disease), as well as to institutions.
A poor diet can have an injurious impact on health, causing deficiency diseases such as scurvy, beriberi, and kwashiorkor; health-threatening conditions like obesity and metabolic syndrome; and common chronic systemic diseases such as cardiovascular disease, diabetes, and osteoporosis.
There are seven major classes of nutrients: carbohydrates, fats, fiber, minerals, proteins, vitamins, and water.
These nutrient classes can be categorized as either macronutrients (needed in relatively large amounts) or micronutrients (needed in smaller quantities). The macronutrients are carbohydrates, fats, fiber, proteins, and water. The micronutrients are minerals and vitamins.
The macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can also be used to generate energy internally, and in either case it is measured in joules or kilocalories (often called "Calories" and written with a capital C to distinguish them from little-'c' calories). Carbohydrates and proteins provide approximately 17 kJ (4 kcal) of energy per gram, while fats provide 37 kJ (9 kcal) per gram,[1] though the net energy from either depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. Fiber (i.e., non-digestible material such as cellulose) seems to be required for both mechanical and biochemical reasons, though the exact reasons remain unclear.
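Those per-gram figures make the energy content of a meal simple to estimate. A small Python sketch using the values quoted above; the gram amounts are invented purely for illustration:

# Dietary energy from macronutrients, using the per-gram values quoted above.
KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}
KJ_PER_KCAL = 4.184

meal_grams = {"carbohydrate": 60, "protein": 25, "fat": 15}  # assumed example meal

kcal = sum(KCAL_PER_GRAM[n] * grams for n, grams in meal_grams.items())
print(f"{kcal} kcal (~{kcal * KJ_PER_KCAL:.0f} kJ)")
# 60*4 + 25*4 + 15*9 = 475 kcal, roughly 1,987 kJ

As noted above, these are gross figures; net energy depends on absorption and digestive effort.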
Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers bound to a glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids, some of which are essential in the sense that humans cannot make them internally. Some of the amino acids are convertible (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose is. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. This occurs normally only during prolonged starvation.
Other micronutrients include antioxidants and phytochemicals which are said to influence (or protect) some body systems. Their necessity is not as well established as in the case of, for instance, vitamins.
Most foods contain a mix of some or all of the nutrient classes, together with other substances, such as toxins of various sorts. Some nutrients can be stored internally (e.g., the fat-soluble vitamins), while others are required more or less continuously. Poor health can be caused by a lack of required nutrients or, in extreme cases, by too much of a required nutrient: for example, both salt and water (both absolutely required) cause illness or even death in too large amounts.
Heart disease, cancer, obesity, and diabetes are commonly called "Western" diseases because these maladies were once rarely seen in developing countries. One study in China found some regions had essentially no cancer or heart disease, while in other areas they reflected "up to a 100-fold increase," coincident with diets ranging from entirely plant-based to heavily animal-based, respectively.[24] In contrast, diseases of affluence like cancer and heart disease are common throughout the United States. Adjusted for age and exercise, large regional clusters of people in China rarely suffered from these "Western" diseases, possibly because their diets are rich in vegetables, fruits, and whole grains.[24]
The United Healthcare/Pacificare nutrition guideline recommends a whole plant food diet, and recommends using protein only as a condiment with meals. A National Geographic cover article from November 2005, entitled "The Secrets of Living Longer," also recommends a whole plant food diet. The article is a lifestyle survey of three populations, Sardinians, Okinawans, and Adventists, who generally display longevity and "suffer a fraction of the diseases that commonly kill people in other parts of the developed world, and enjoy more healthy years of life." In sum, they offer three sets of 'best practices' to emulate; the rest is up to you. Common to all three groups is the advice to "Eat fruits, vegetables, and whole grains."
The National Geographic article noted that an NIH-funded study of 34,000 Seventh-day Adventists between 1976 and 1988 "…found that the Adventists' habit of consuming beans, soy milk, tomatoes, and other fruits lowered their risk of developing certain cancers. It also suggested that eating whole grain bread, drinking five glasses of water a day, and, most surprisingly, consuming four servings of nuts a week reduced their risk of heart disease."

Thursday, November 19, 2009

ABOUT MAGNETISM

The term magnetism is used to describe how materials respond on the microscopic level to an applied magnetic field, and to categorize the magnetic phase of a material. For example, the best-known form of magnetism is ferromagnetism, in which some materials produce their own persistent magnetic field. However, all materials are influenced to a greater or lesser degree by the presence of a magnetic field. Some are attracted to a magnetic field (paramagnetism); others are repelled by a magnetic field (diamagnetism); others have a much more complex relationship with an applied magnetic field. Substances that are negligibly affected by magnetic fields are known as non-magnetic substances; they include copper, aluminium, water, and gases.
The magnetic state (or phase) of a material depends on temperature (and other variables such as pressure and applied magnetic field) so that a material may exhibit more than one form of magnetism depending on its temperature, etc.
In magnetic materials, the most important sources of magnetization are, more specifically, the electrons' orbital angular motion around the nucleus, and the electrons' intrinsic magnetic moment (see Electron magnetic dipole moment). The other potential sources of magnetism are much less important: For example, the nuclear magnetic moments of the nuclei in the material are typically thousands of times smaller than the electrons' magnetic moments, so they are negligible in the context of the magnetization of materials. (Nuclear magnetic moments are important in other contexts, particularly in Nuclear Magnetic Resonance (NMR) and Magnetic Resonance Imaging (MRI).)
Ordinarily, the countless electrons in a material are arranged such that their magnetic moments (both orbital and intrinsic) cancel out. This is due, to some extent, to electrons combining into pairs with opposite intrinsic magnetic moments (as a result of the Pauli exclusion principle; see Electron configuration), or combining into "filled subshells" with zero net orbital motion; in both cases, the electron arrangement is such that the magnetic moments from each electron exactly cancel. Moreover, even when the electron configuration is such that there are unpaired electrons and/or non-filled subshells, it is often the case that the various electrons in the solid will contribute magnetic moments that point in different, random directions, so that the material will not be magnetic.
However, sometimes (either spontaneously, or owing to an applied external magnetic field) each of the electron magnetic moments will be, on average, lined up. Then the material can produce a net total magnetic field, which can potentially be quite strong.
The magnetic behavior of a material depends on its structure (particularly its electron configuration, for the reasons mentioned above), and also on the temperature (at high temperatures, random thermal motion makes it more difficult for the electrons to maintain alignment).
As a consequence of Einstein's theory of special relativity, electricity and magnetism are understood to be fundamentally interlinked. Both magnetism lacking electricity, and electricity without magnetism, are inconsistent with special relativity, due to such effects as length contraction, time dilation, and the fact that the magnetic force is velocity-dependent. However, when both electricity and magnetism are taken into account, the resulting theory (electromagnetism) is fully consistent with special relativity.[6][10] In particular, a phenomenon that appears purely electric to one observer may be purely magnetic to another; more generally, the relative contributions of electricity and magnetism depend on the frame of reference. Thus, special relativity "mixes" electricity and magnetism into a single, inseparable phenomenon called electromagnetism (analogous to how relativity "mixes" space and time into spacetime).

ELECTRIC CURRENT AND CHARGE

The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current.
By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or as flowing from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the direction opposite to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation.

An electric arc provides an energetic demonstration of electric current
The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as a metal, and electrolysis, where ions (charged atoms) flow through liquids. While the particles themselves can move quite slowly, sometimes with an average drift velocity of only fractions of a millimetre per second, the electric field that drives them propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.
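The contrast between slow carriers and fast signals can be made concrete with the standard relation I = nqAv, where n is the carrier density, q the charge per carrier, and A the wire's cross-section. A Python sketch for copper; the current and wire radius are assumed example values:

# Electron drift velocity from I = n * q * A * v.
import math

I = 1.0           # current, amperes (assumed example)
radius = 1e-3     # wire radius, metres (assumed: 1 mm)
n = 8.5e28        # free-electron density of copper, per m^3
q = 1.602e-19     # elementary charge, coulombs

A = math.pi * radius ** 2                 # cross-sectional area, m^2
v = I / (n * q * A)                       # drift velocity, m/s
print(f"drift velocity: about {v * 1000:.3f} mm/s")

At one ampere through a millimetre-radius copper wire this gives a few hundredths of a millimetre per second, exactly the "fractions of a millimetre per second" described above.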
Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840. One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass. He had discovered electromagnetism, a fundamental interaction between electricity and magnetism.
In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced for example by a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sinusoidal wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady-state direct current, such as inductance and capacitance. These properties, however, can become important when circuitry is subjected to transients, such as when it is first energised.
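The claim that an alternating current averages to zero yet still delivers energy can be checked numerically: over a full cycle the mean of a sinusoid vanishes, while the mean of its square, which governs the power dissipated in a resistance, does not. A minimal Python sketch:

# A sinusoidal current averages to zero over a cycle, but its square does not.
import numpy as np

t = np.linspace(0.0, 1.0, 10_000, endpoint=False)  # one full period
i = np.sin(2 * np.pi * t)                          # unit-amplitude AC current

print(f"time-averaged current: {i.mean():+.6f}")                 # effectively 0
print(f"RMS current          : {np.sqrt(np.mean(i ** 2)):.4f}")  # 1/sqrt(2), ~0.7071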
An electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task.
The components in an electric circuit can take many forms, which can include elements such as resistors, capacitors, switches, transformers and electronics. Electronic circuits contain active components, usually semiconductors, and typically exhibit non-linear behaviour, requiring complex analysis. The simplest electric components are those that are termed passive and linear: while they may temporarily store energy, they contain no sources of it, and exhibit linear responses to stimuli.
The resistor is perhaps the simplest of passive circuit elements: as its name suggests, it resists the current through it, dissipating its energy as heat. The resistance is a consequence of the motion of charge through a conductor: in metals, for example, resistance is primarily due to collisions between electrons and ions. Ohm's law is a basic law of circuit theory, stating that the current passing through a resistance is directly proportional to the potential difference across it. The resistance of most materials is relatively constant over a range of temperatures and currents; materials under these conditions are known as 'ohmic'. The ohm, the unit of resistance, was named in honour of Georg Ohm, and is symbolised by the Greek letter Ω. 1 Ω is the resistance that will produce a potential difference of one volt in response to a current of one ampere.
The capacitor is a device capable of storing charge, and thereby storing electrical energy in the resulting field. Conceptually, it consists of two conducting plates separated by a thin insulating layer; in practice, thin metal foils are coiled together, increasing the surface area per unit volume and therefore the capacitance. The unit of capacitance is the farad, named after Michael Faraday, and given the symbol F: one farad is the capacitance that develops a potential difference of one volt when it stores a charge of one coulomb. A capacitor connected to a voltage supply initially causes a current as it accumulates charge; this current will however decay in time as the capacitor fills, eventually falling to zero. A capacitor will therefore not permit a steady-state current, but instead blocks it.
The inductor is a conductor, usually a coil of wire, that stores energy in a magnetic field in response to the current through it. When the current changes, the magnetic field does too, inducing a voltage between the ends of the conductor. The induced voltage is proportional to the time rate of change of the current; the constant of proportionality is termed the inductance. The unit of inductance is the henry, named after Joseph Henry, a contemporary of Faraday. One henry is the inductance that will induce a potential difference of one volt if the current through it changes at a rate of one ampere per second. The inductor's behaviour is in some regards converse to that of the capacitor: it will freely allow an unchanging current, but opposes a rapidly changing one.
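The defining relations of these three passive elements (V = IR for the resistor, Q = CV for the capacitor, and V = L dI/dt for the inductor) are easy to exercise numerically. A minimal Python sketch; all component values are made up for illustration:

# The three passive elements' defining relations, with assumed values.
R = 100.0    # resistance, ohms (assumed)
C = 1e-6     # capacitance, farads (assumed)
L = 0.5      # inductance, henries (assumed)
V = 5.0      # applied potential difference, volts (assumed)

print(f"resistor : I = V/R     = {V / R * 1000:.1f} mA")   # Ohm's law
print(f"capacitor: Q = C*V     = {C * V * 1e6:.1f} uC")    # stored charge
dI_dt = 2.0  # assumed rate of change of current, A/s
print(f"inductor : V = L*dI/dt = {L * dI_dt:.1f} V")       # induced voltage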

WATER AND ITS ANOMALOUS EXPANSION PROPERTY

A molecule is an aggregation of atomic nuclei and electrons that is sufficiently stable to possess observable properties, and there are few molecules more stable and difficult to decompose than H2O. In water, each hydrogen nucleus is bound to the central oxygen atom by a pair of electrons that are shared between them; chemists call this shared electron pair a covalent chemical bond. In H2O, only two of the six outer-shell electrons of oxygen are used for this purpose, leaving four electrons organized into two non-bonding pairs. The four electron pairs surrounding the oxygen tend to arrange themselves as far from each other as possible in order to minimize repulsions between these clouds of negative charge. This would ordinarily result in a tetrahedral geometry in which the angle between electron pairs (and therefore the H–O–H bond angle) is 109.5°. However, because the two non-bonding pairs remain closer to the oxygen atom, they exert a stronger repulsion against the two covalent bonding pairs, effectively pushing the two hydrogen atoms closer together. The result is a distorted tetrahedral arrangement in which the H–O–H angle is 104.5°.
Because molecules are smaller than light waves, they cannot be observed directly, and must be "visualized" by alternative means. Computer-generated models of the electron distribution in the H2O molecule show the effective "surface" of the molecule as defined by the extent of the cloud of negative electric charge created by its ten electrons.
Water has long been known to exhibit many physical properties that distinguish it from other small molecules of comparable mass. Chemists refer to these as the "anomalous" properties of water, but they are by no means mysterious; all are entirely predictable consequences of the way the size and nuclear charge of the oxygen atom conspire to distort the electronic charge clouds of the atoms of other elements when these are chemically bonded to the oxygen.
Water is one of the few known substances whose solid form is less dense than the liquid. A plot of water's volume against temperature shows a large increase (about 9%) on freezing, which is why ice floats on water and why pipes burst when they freeze. The expansion between 4° and 0° is due to the formation of larger hydrogen-bonded aggregates. Above 4°, ordinary thermal expansion sets in as vibrations of the O–H bonds become more vigorous, tending to shove the molecules farther apart.
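The roughly 9% figure follows directly from the handbook densities of liquid water and ice near 0 °C; a quick check in Python:

# Expansion of water on freezing, from standard handbook densities.
rho_water_0C = 999.84   # density of liquid water at 0 degrees C, kg/m^3
rho_ice_0C = 916.7      # density of ice at 0 degrees C, kg/m^3

expansion = rho_water_0C / rho_ice_0C - 1
print(f"volume increase on freezing: {expansion:.1%}")   # about 9%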
The nature of liquid water and how the H2O molecules within it are organized and interact are questions that have attracted the interest of chemists for many years. There is probably no liquid that has received more intensive study, and there is now a huge literature on this subject.
The following facts are well established:
H2O molecules attract each other through the special type of dipole-dipole interaction known as hydrogen bonding
a hydrogen-bonded cluster in which four H2Os are located at the corners of an imaginary tetrahedron is an especially favorable (low-potential energy) configuration, but...
the molecules undergo rapid thermal motions on a time scale of picoseconds (10^−12 second), so the lifetime of any specific clustered configuration will be fleetingly brief.
A variety of techniques including infrared absorption, neutron scattering, and nuclear magnetic resonance have been used to probe the microscopic structure of water. The information garnered from these experiments and from theoretical calculations has led to the development of around twenty "models" that attempt to explain the structure and behavior of water. More recently, computer simulations of various kinds have been employed to explore how well these models are able to predict the observed physical properties of water.

Wednesday, November 18, 2009

THE ATMOSPHERE AND ITS LAYERS

The Earth's atmosphere is a layer of gases surrounding the planet Earth that is retained by Earth's gravity. The atmosphere protects life on Earth by absorbing ultraviolet solar radiation, warming the surface through heat retention (greenhouse effect), and reducing temperature extremes between day and night. Dry air contains roughly (by volume) 78.08% nitrogen, 20.95% oxygen, 0.93% argon, 0.038% carbon dioxide, and trace amounts of other gases. Air also contains a variable amount of water vapor, on average around 1%.
The atmosphere has a mass of about five quintillion (5×10^18) kg, three quarters of which is within about 11 km (6.8 mi; 36,000 ft) of the surface. The atmosphere becomes thinner and thinner with increasing altitude, with no definite boundary between the atmosphere and outer space. An altitude of 120 km (75 mi) is where atmospheric effects become noticeable during atmospheric reentry of spacecraft. The Kármán line, at 100 km (62 mi), is also often regarded as the boundary between atmosphere and outer space.
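The three-quarters claim can be checked against the simple isothermal barometric model, in which pressure falls as p(h) = p0·exp(−h/H) with a scale height H of roughly 8.5 km, so the fraction of the atmosphere's mass below altitude h is 1 − exp(−h/H). A Python sketch of this admittedly crude model:

# Isothermal barometric model: p(h) = p0 * exp(-h / H).
import math

H = 8.5   # scale height, km (approximate; the real atmosphere is not isothermal)

frac_below_11km = 1 - math.exp(-11 / H)
print(f"fraction of atmospheric mass below 11 km: {frac_below_11km:.0%}")  # ~73%

h_thousandth = H * math.log(1000)   # altitude where pressure is 1/1000 of sea level
print(f"pressure reaches 1/1000 of sea level near {h_thousandth:.0f} km")  # ~59 km

The model reproduces the three-quarters figure, and puts the 1/1000-of-sea-level pressure mark at the same order of altitude as the stratopause described below.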
Air is mainly composed of nitrogen, oxygen, and argon, which together constitute the "major gases" of the atmosphere. The remaining gases often are referred to as "trace gases," among which are the greenhouse gases such as water vapor, carbon dioxide, methane, nitrous oxide, and ozone. Filtered air includes trace amounts of many other chemical compounds. Many natural substances may be present in tiny amounts in an unfiltered air sample, including dust, pollen and spores, sea spray, volcanic ash, and meteoroids. Various industrial pollutants also may be present, such as chlorine (elemental or in compounds), fluorine (in compounds), elemental mercury, and sulfur (in compounds such as sulfur dioxide [SO2]).

Layers of the atmosphere

Earth's atmosphere can be divided into five main layers. These layers are mainly determined by whether temperature increases or decreases with altitude. From lowest to highest, these layers are:
1> Troposphere
The troposphere begins at the surface and extends to between 7 km (23,000 ft) at the poles and 17 km (56,000 ft) at the equator, with some variation due to weather. The troposphere is mostly heated by transfer of energy from the surface, so on average the lowest part of the troposphere is warmest and temperature decreases with altitude. This promotes vertical mixing (hence the origin of its name in the Greek word "τροπή", trope, meaning turn or overturn). The troposphere contains roughly 80% of the mass of the atmosphere. The tropopause is the boundary between the troposphere and stratosphere.
2> Stratosphere
The stratosphere extends from the tropopause to about 51 km (32 mi; 170,000 ft). Temperature increases with height, which restricts turbulence and mixing. The stratopause, which is the boundary between the stratosphere and mesosphere, typically is at 50 to 55 km (31 to 34 mi; 160,000 to 180,000 ft). The pressure here is about 1/1000th of the pressure at sea level.
3> Mesosphere
The mesosphere extends from the stratopause to 80–85 km (50–53 mi; 260,000–280,000 ft). It is the layer where most meteors burn up upon entering the atmosphere. Temperature decreases with height in the mesosphere. The mesopause, the temperature minimum that marks the top of the mesosphere, is the coldest place on Earth and has an average temperature around −100 °C (−148.0 °F; 173.1 K).
4> Thermosphere
Temperature increases with height in the thermosphere from the mesopause up to the thermopause, then is constant with height. The temperature of this layer can rise to 1,500 °C (2,730 °F), though the gas molecules are so far apart that temperature in the usual sense is not well defined. The International Space Station orbits in this layer, between 320 and 380 km (200 and 240 mi). The top of the thermosphere is the bottom of the exosphere, called the exobase. Its height varies with solar activity and ranges from about 350–800 km (220–500 mi; 1,100,000–2,600,000 ft).
5> Exosphere
The outermost layer of Earth's atmosphere extends from the exobase upward. Here the particles are so far apart that they can travel hundreds of km without colliding with one another. Since the particles rarely collide, the atmosphere no longer behaves like a fluid. These free-moving particles follow ballistic trajectories and may migrate into and out of the magnetosphere or the solar wind. The exosphere is mainly composed of hydrogen and helium.

POLYMERIZATION PROCESS

In polymer chemistry, polymerization is a process of reacting monomer molecules together in a chemical reaction to form three-dimensional networks or polymer chains. There are many forms of polymerization, and different systems exist to categorize them. In chemical compounds, polymerization occurs via a variety of reaction mechanisms that vary in complexity according to the functional groups present in the reacting compounds and their inherent steric effects, explained by VSEPR theory. In more straightforward polymerizations, alkenes, which are relatively stable due to σ bonding between carbon atoms, form polymers through relatively simple radical reactions; in contrast, more complex reactions, such as those that involve substitution at the carbonyl group, require more complex synthesis due to the way in which the reacting molecules polymerize.
As alkenes can be polymerized in somewhat straightforward reaction mechanisms, they form useful compounds such as polyethylene and polyvinyl chloride (PVC) when undergoing radical reactions, and these are produced in high tonnages each year due to their usefulness in manufacturing processes for commercial products, such as piping, insulation and packaging. Polymers such as PVC are generally referred to as "homopolymers," as they consist of repeated long chains of the same monomer unit, whereas polymers that consist of more than one type of monomer unit are referred to as "copolymers."
Other monomer units, such as formaldehyde hydrates or simple aldehydes, are able to polymerize at quite low temperatures (around −80 °C) to form trimers, molecules consisting of three monomer units, which can cyclize to form ring structures, or undergo further reactions to form tetramers, compounds of four monomer units; such smaller polymers are referred to as oligomers. Generally, because formaldehyde is an exceptionally reactive electrophile, it allows nucleophilic addition to form hemiacetal intermediates, which are generally short-lived and relatively unstable "mid-stage" compounds that react with other molecules present to form more stable polymeric compounds.
Polymerization that is not sufficiently moderated and proceeds at a fast rate can be very hazardous. This phenomenon is known as hazardous polymerization, and can cause fires and explosions.
Chain-growth polymerization (or addition polymerization) involves the linking together of molecules incorporating double or triple chemical bonds. These unsaturated monomers (the identical molecules that make up the polymers) have extra internal bonds that are able to break and link up with other monomers to form the repeating chain. Chain-growth polymerization is involved in the manufacture of polymers such as polyethylene, polypropylene, and polyvinyl chloride (PVC). A special case of chain-growth polymerization leads to living polymerization.
In the radical polymerization of ethylene, its pi bond is broken, and the two electrons rearrange to create a new propagating center like the one that attacked it. The form this propagating center takes depends on the specific type of addition mechanism. There are several mechanisms through which this can be initiated. The free radical mechanism was one of the first methods to be used. Free radicals are very reactive atoms or molecules that have unpaired electrons. Taking the polymerization of ethylene as an example, the free radical mechanism can be divided into three stages: chain initiation, chain propagation, and chain termination.
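Those three stages can be caricatured in a toy Monte Carlo model: each initiated chain keeps adding monomers (propagation) until a random termination event stops it, so the chain-length distribution is set by the ratio of propagation to termination probabilities. The Python sketch below uses an invented propagation probability and ignores real kinetics entirely:

# Toy model of chain-growth polymerization: initiation, propagation, termination.
import random

random.seed(0)
P_PROPAGATE = 0.999   # invented probability that a growing chain adds a monomer

def grow_chain():
    length = 1                          # initiation: chain starts with one monomer
    while random.random() < P_PROPAGATE:
        length += 1                     # propagation: another monomer adds on
    return length                       # termination: the chain stops growing

chains = [grow_chain() for _ in range(10_000)]
print(f"mean chain length: {sum(chains) / len(chains):.0f} monomers")  # ~1,000

Because termination strikes at random, individual chain lengths scatter widely around the mean, echoing the point below about the difficulty of controlling chain length in free radical polymerization.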
Free radical addition polymerization of ethylene must take place at high temperatures and pressures, approximately 300 °C and 2000 atm. While most other free radical polymerizations do not require such extreme temperatures and pressures, they do tend to lack control. One effect of this lack of control is a high degree of branching. Also, as termination occurs randomly when two chains collide, it is impossible to control the length of individual chains. A newer method of polymerization, similar to free radical polymerization but allowing more control, especially with respect to branching, involves the Ziegler–Natta catalyst.
Other forms of chain-growth polymerization include cationic addition polymerization and anionic addition polymerization. While not yet used to a large extent in industry, due to stringent reaction conditions such as the need to exclude water and oxygen, these methods provide ways to polymerize some monomers, such as propylene, that cannot be polymerized by free radical methods. Cationic and anionic mechanisms are also more ideally suited for living polymerizations, although free radical living polymerizations have also been developed.

ABOUT BLACK HOLES

In general relativity, a black hole is a region of space in which the gravity well is so deep that gravitational time dilation halts time completely, forming an event horizon: a one-way surface into which objects can fall, but out of which nothing can come. It is called "black" because it absorbs all the light that hits it, reflecting nothing, just like a perfect black body in thermodynamics. Quantum analysis of black holes shows them to possess a temperature and Hawking radiation.
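For a non-rotating, uncharged black hole, general relativity places the event horizon at the Schwarzschild radius, r = 2GM/c² (a standard result, though not quoted in the text above). A quick Python calculation for one solar mass:

# Schwarzschild radius of the event horizon: r_s = 2 * G * M / c^2.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the sun, kg

r_s = 2 * G * M_sun / c ** 2
print(f"event horizon radius for one solar mass: {r_s / 1000:.1f} km")  # ~3 km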
Despite its invisible interior, a black hole can reveal its presence through interaction with other matter. A black hole can be inferred by tracking the movement of a group of stars that orbit a region in space which looks empty. Alternatively, one can see gas falling into a relatively small black hole, from a companion star. This gas spirals inward, heating up to very high temperatures and emitting large amounts of radiation that can be detected from earthbound and earth-orbiting telescopes. Such observations have resulted in the scientific consensus that, barring a breakdown in our understanding of nature, black holes exist in our universe.
The no-hair theorem states that, once it settles down, a black hole has only three independent physical properties: mass, charge, and angular momentum. Any two black holes that share the same values for these properties, or parameters, are classically indistinguishable.
These properties are special because they are visible from outside the black hole. For example, a charged black hole repels other like charges just like any other charged object, despite the fact that photons, the particles responsible for electric and magnetic forces, cannot escape from the interior region. The reason is Gauss's law: the total electric flux going out of a large sphere always stays the same, and measures the total charge inside the sphere. When charge falls into a black hole, electric field lines remain, poking out of the horizon, and these field lines preserve the total charge of all the infalling matter. The electric field lines eventually spread out evenly over the surface of the black hole, settling down to a uniform field-line density on the surface. The black hole acts in this regard like a classical conducting sphere with a definite resistivity.
Similarly, the total mass inside a sphere containing a black hole can be found by using the gravitational analog of Gauss's law, far away from the black hole. Likewise, the angular momentum can be measured from far away using frame dragging by the gravitomagnetic field.
When a black hole swallows any form of matter, its horizon oscillates like a stretchy membrane with friction, a dissipative system, until it settles down to a simple final state. This is different from other field theories like electromagnetism or gauge theory, which never have any friction or resistivity, because they are time reversible. Because the black hole eventually settles down into a final state with only three parameters, there is no way to avoid losing information about the initial conditions: the gravitational and electric fields of the black hole give very little information about what went in. The information that is lost includes every quantity that cannot be measured far away from the black hole horizon, including the total baryon number, lepton number, and all the other nearly conserved pseudo-charges of particle physics. This behavior is so puzzling that it has been called the black hole information loss paradox.
The loss of information in black holes is puzzling even classically, because general relativity is a Lagrangian theory, which superficially appears to be time reversible and Hamiltonian. But because of the horizon, a black hole is not time reversible: matter can enter but it cannot escape. The time reverse of a classical black hole has been called a white hole, although entropy considerations and quantum mechanics suggest that white holes are just the same as black holes.
The no-hair theorem makes some assumptions about the nature of our universe and the matter it contains, and other assumptions lead to different conclusions. For example, if magnetic monopoles exist, as predicted by some theories, the magnetic charge would be a fourth parameter for a classical black hole.

Tuesday, November 17, 2009

ABOUT GLOBAL WARMING

Global warming is the increase in the average temperature of the Earth's near-surface air and oceans since the mid-20th century, and its projected continuation. Global surface temperature increased 0.74 ± 0.18 °C (1.33 ± 0.32 °F) during the last century. The Intergovernmental Panel on Climate Change (IPCC) concludes that most of the observed temperature increase since the middle of the 20th century was caused by increasing concentrations of greenhouse gases resulting from human activity such as fossil fuel burning and deforestation. The IPCC also concludes that variations in natural phenomena such as solar radiation and volcanoes produced most of the warming from pre-industrial times to 1950 and had a small cooling effect afterward. These basic conclusions have been endorsed by more than 40 scientific societies and academies of science, including all of the national academies of science of the major industrialized countries.
Climate model projections summarized in the latest IPCC report indicate that the global surface temperature will probably rise a further 1.1 to 6.4 °C (2.0 to 11.5 °F) during the twenty-first century. The uncertainty in this estimate arises from the use of models with differing sensitivity to greenhouse gas concentrations and the use of differing estimates of future greenhouse gas emissions. Some other uncertainties include how warming and related changes will vary from region to region around the globe. Most studies focus on the period up to the year 2100. However, warming is expected to continue beyond 2100 even if emissions stop, because of the large heat capacity of the oceans and the long lifetime of carbon dioxide in the atmosphere.
An increase in global temperature will cause sea levels to rise and will change the amount and pattern of precipitation, probably including expansion of subtropical deserts. The continuing retreat of glaciers, permafrost and sea ice is expected, with warming being strongest in the Arctic. Other likely effects include increases in the intensity of extreme weather events, species extinctions, and changes in agricultural yields.
Political and public debate continues regarding climate change, and what actions (if any) to take in response. The available options are mitigation to reduce further emissions; adaptation to reduce the damage caused by warming; and, more speculatively, geoengineering to reverse global warming. Most national governments have signed and ratified the Kyoto Protocol aimed at reducing greenhouse gas emissions.
It usually is impossible to connect specific weather events to global warming. Instead, global warming is expected to cause changes in the overall distribution and intensity of events, such as changes to the frequency and intensity of heavy precipitation. Broader effects are expected to include glacial retreat, Arctic shrinkage, and worldwide sea level rise. Some effects on both the natural environment and human life are, at least in part, already being attributed to global warming. A 2001 report by the IPCC suggests that glacier retreat, ice shelf disruption such as that of the Larsen Ice Shelf, sea level rise, changes in rainfall patterns, and increased intensity and frequency of extreme weather events are attributable in part to global warming. Other expected effects include water scarcity in some regions and increased precipitation in others, changes in mountain snowpack, and some adverse health effects from warmer temperatures.
Social and economic effects of global warming may be exacerbated by growing population densities in affected areas. Temperate regions are projected to experience some benefits, such as fewer cold-related deaths. A summary of probable effects and recent understanding can be found in the report made for the IPCC Third Assessment Report by Working Group II. The newer IPCC Fourth Assessment Report summary reports that there is observational evidence for an increase in intense tropical cyclone activity in the North Atlantic Ocean since about 1970, in correlation with the increase in sea surface temperature (see Atlantic Multidecadal Oscillation), but that the detection of long-term trends is complicated by the quality of records prior to routine satellite observations. The summary also states that there is no clear trend in the annual worldwide number of tropical cyclones.

ABOUT ARCHAEOLOGY

Archaeology (sometimes written archæology) or archeology is the science that studies human cultures through the recovery, documentation, analysis, and interpretation of material culture and environmental data, including architecture, artifacts, biofacts, and landscapes. Archaeology aims to understand humankind through these humanistic endeavors. In the United States it is commonly considered to be a subset of anthropology, along with physical anthropology, cultural anthropology, and linguistic anthropology, whereas in British and European universities, archaeology is considered an entirely separate discipline.
Methodology, theory, and philosophy are central to archaeological (and anthropological) debate. Research, survey, excavation, analysis, and preservation are the tools of the archaeological process.
'Archaeological goals' are debatable. Some goals include the documentation and explanation of the origins and development of human cultures, understanding culture history, chronicling cultural evolution, and studying human behavior and ecology, for both prehistoric and historic societies.
In broad scope, archaeology relies on cross-disciplinary research. It draws upon anthropology, history, art history, classics, ethnology, geography, geology, linguistics, physics, information sciences, chemistry, statistics, paleoecology, paleontology, paleozoology, paleoethnobotany, and paleobotany.
Often archaeology provides the only means to learn of the existence and behaviors of people of the past. Across the millennia, many thousands of cultures and societies and billions of people have come and gone, of which there is little or no written record, or whose existing records are misrepresentative or incomplete. Writing as it is known today did not exist in human civilization until the 4th millennium BC, and then only in a relatively small number of technologically advanced civilizations. These civilizations are, not coincidentally, the best known; they have been open to the inquiry of historians for centuries, while the study of prehistoric cultures has arisen only recently. In contrast, Homo sapiens has existed for at least 200,000 years, and other species of Homo for millions of years (see Human evolution). Even within a literate civilization many events and important human practices are not officially recorded. Any knowledge of the early years of human civilization – the development of agriculture, cult practices of folk religion, the rise of the first cities – must come from archaeology.

Ten Indus glyphs discovered near the northern gate of Dholavira (ca. 2500–1900 BC)
Even where written records do exist, they are often incomplete and invariably biased to some extent. In many societies, literacy was restricted to the elite classes, such as the clergy or the bureaucracy of court or temple. The literacy even of aristocrats has sometimes been restricted to deeds and contracts. The interests and world-view of elites are often quite different from the lives and interests of the populace. Writings that were produced by people more representative of the general population were unlikely to find their way into libraries and be preserved there for posterity. Thus, written records tend to reflect the biases, assumptions, cultural values and possibly deceptions of a limited range of individuals, usually a small fraction of the larger population. Hence, written records cannot be trusted as a sole source. The material record is closer to a fair representation of society, though it is subject to its own inaccuracies, such as sampling bias and differential preservation.
In addition to their scientific importance, archaeological remains sometimes have political or cultural significance to descendants of the people who produced them, monetary value to collectors, or simply strong aesthetic appeal. Many people identify archaeology with the recovery of such aesthetic, religious, political, or economic treasures rather than with the reconstruction of past societies.
Archaeometry is a field of study that aims to systematize archaeological measurement. It emphasizes the application of analytical techniques from physics, chemistry, and engineering. It is a lively field of research that frequently focuses on determining the chemical composition of archaeological remains for source analysis. A relatively nascent subfield is that of archaeological materials, designed to enhance understanding of prehistoric and non-industrial culture through scientific analysis of the structure and properties of materials associated with human activity.

RADIUM AND RADIOACTIVITY

The discovery of the phenomena of radioactivity adds a new group to the great number of invisible radiations now known, and once more we are forced to recognize how limited is our direct perception of the world which surrounds us, and how numerous and varied may be the phenomena which we pass without a suspicion of their existence until the day when a fortunate hazard reveals them.
The radiations longest known to us are those capable of acting directly upon our senses; such are the rays of sound and light. But it has also long been recognized that, besides light itself, warm bodies emit rays in every respect analogous to luminous rays, though they do not possess the power of directly impressing our retina. Among such radiations, some, the infra-red, announce themselves to us by producing a measurable rise of temperature in the bodies which receive them, while others, the ultra-violet, act with specially great intensity upon photographic plates. We have here a first example of rays only indirectly accessible to us.
Yet further surprises in this domain of invisible radiations were reserved for us. The researches of two great physicists, Maxwell and Hertz, showed that electric and magnetic effects are propagated in the same manner as light, and that there exist “electromagnetic radiations,” similar to luminous radiations, which are to the infra-red rays what these latter are to light. These are the electromagnetic radiations which are used for the transmission of messages in wireless telegraphy. They are present in the space around us whenever an electric phenomenon is produced, especially a lightning discharge. Their presence may be established by the use of special apparatus, and here again the testimony of our senses appears only in an indirect manner. If we consider these radiations in their entirety - the ultra-violet, the luminous, the infra-red, and the electromagnetic - we find that the radiations we see constitute but an insignificant fraction of those that exist in space. But it is human nature to believe that the phenomena we know are the only ones that exist, and whenever some chance discovery extends the limits of our knowledge we are filled with amazement. We cannot become accustomed to the idea that we live in a world that is revealed to us only in a restricted portion of its manifestations.
Among recent scientific achievements which have attracted most attention must be placed the discovery of cathode rays, and in even greater measure that of Roentgen rays. These rays are produced in vacuum-tubes when an electric discharge is passed through the rarefied gas. The prevalent opinion among physicists is that cathode rays are formed by extremely small material particles, charged with negative electricity, and thrown off with great velocity from the cathode, or negative electrode, of the tube. When the cathode rays meet the glass wall of the tube they render it vividly fluorescent. These rays can be deflected from their straight path by the action of a magnet. Whenever they encounter a solid obstacle, the emission of Roentgen rays is the result. These latter can traverse the glass and propagate themselves through the outside air. They differ from cathode rays in that they carry no electric charge and are not deflected from their course by the action of a magnet. Everyone knows the effect of Roentgen rays upon photographic plates and upon fluorescent screens, the radiographs obtainable from them, and their application in surgery.
The discovery of Becquerel rays dates from a few years after that of Roentgen rays. At first they were much less noticed. The world, attracted by the sensational discovery of Roentgen rays, was less inclined to astonishment. On all sides a search was instituted by similar processes for new rays, and announcements of phenomena were made that have not always been confirmed. It has been only gradually that the positive existence of a new radiation has been established. The merit of this discovery belongs to M. Becquerel, who succeeded in demonstrating that uranium and its compounds spontaneously emit rays that are able to traverse opaque bodies and to affect photographic plates.
It was at the close of the year 1897 that I began to study the compounds of uranium, the properties of which had greatly attracted my interest. Here was a substance emitting spontaneously and continuously radiations similar to Roentgen rays, whereas ordinarily Roentgen rays can be produced only in a vacuum-tube with the expenditure of energy. By what process can uranium furnish the same rays without expenditure of energy and without undergoing apparent modification? Is uranium the only body whose compounds emit similar rays? Such were the questions I asked myself, and it was while seeking to answer them that I entered into the researches which have led to the discovery of radium.
First of all, I studied the radiation of the compounds of uranium. Instead of making these bodies act upon photographic plates, I preferred to determine the intensity of their radiation by measuring the conductivity of the air exposed to the action of the rays. To make this measurement, one can determine the speed with which the rays discharge an electroscope, and thus obtain data for a comparison. I found in this way that the radiation of uranium is very constant, varying neither with the temperature nor with the illumination. I likewise observed that all the compounds of uranium are active, and that they are more active the greater the proportion of this metal which they contain. Thus I reached the conviction that the emission of rays by the compounds of uranium is a property of the metal itself—that it is an atomic property of the element uranium independent of its chemical or physical state. I then began to investigate the different known chemical elements, to determine whether there exist others, besides uranium, that are endowed with atomic radioactivity—that is to say, all the compounds of which emit Becquerel rays. It was easy for me to procure samples of all the ordinary substances—the common metals and metalloids, oxides and salts. But as I desired to make a very thorough investigation, I had recourse to different chemists, who put at my disposal specimens—in some cases the only ones in existence—containing very rare elements. I thus was enabled to pass in review all the chemical elements and to examine them in the state of one or more of their compounds. I found but one element undoubtedly possessing atomic radioactivity in measurable degree. This element is thorium. All the compounds of thorium are radioactive, and with about the same intensity as the similar compounds of uranium. As to the other substances, they showed no appreciable radioactivity under the conditions of the test.
I likewise examined certain minerals. I found, as I expected, that the minerals of uranium and thorium are radioactive; but to my great astonishment I discovered that some are much more active than the oxides of uranium and of thorium which they contain. Thus a specimen of pitch-blende (oxide of uranium ore) was found to be four times more active than oxide of uranium itself. This observation astonished me greatly. What explanation could there be for it? How could an ore, containing many substances which I had proved inactive, be more active than the active substances of which it was formed? The answer came to me immediately: The ore must contain a substance more radioactive than uranium and thorium, and this substance must necessarily be a chemical element as yet unknown; moreover, it can exist in the pitch-blende only in small quantities, else it would not have escaped the many analyses of this ore; but, on the other hand, it must possess intense radioactivity, since, although present in small amount, it produces such remarkable effects. I tried to verify my hypothesis by treating pitch-blende by the ordinary processes of chemical analysis, thinking it probable that the new substance would be concentrated in passing through certain stages of the process. I performed several experiments of this nature, and found that the ore could in fact be separated into portions some of which were much more radioactive than others.
To try to isolate the supposed new element was a great temptation. I did not know whether this undertaking would be difficult. Of the new element I knew nothing except that it was radioactive. What were its chemical properties? In what quantity did it appear in pitch-blende? I began the analysis of pitch-blende by separating it into its constituent elements, which are very numerous. This task I undertook in conjunction with M. Curie. We expected that perhaps a few weeks would suffice to solve the problem. We did not suspect that we had begun a work which was to occupy years and which was brought to a successful issue only after considerable expenditure.
We readily proved that pitch-blende contains very radioactive substances, and that there were at least three. That which accompanies the bismuth extracted from pitch-blende we named Polonium; that which accompanies barium from the same source we named Radium; finally, M. Debierne gave the name of Actinium to a substance which is found in the rare earths obtained from the same ore.
Radium was to us from the beginning of our work a source of much satisfaction. Demarçay, who examined the spectrum of our radioactive barium, found in it new rays and confirmed us in our belief that we had indeed discovered a new element.
The question now was to separate the polonium from the bismuth, the radium from the barium. This is the task that has occupied us for years, and as yet we have succeeded only in the case of radium. The research has been a most difficult one. We found that by crystallizing out the chloride of radioactive barium from a solution we obtained crystals that were more radioactive, and consequently richer in radium, than the chloride that remained dissolved. It was only necessary to make repeated crystallizations to obtain finally a pure chloride of radium.
But although we treated as much as fifty kilograms of primary substance, and crystallized the chloride of radiferous barium thus obtained until the activity was concentrated in a few minute crystals, these crystals still contained chiefly chloride of barium; as yet radium was present only in traces, and we saw that we could not finish our experiments with the means at hand in our laboratory. At the same time the desire to succeed grew stronger; for it became evident that radium must possess most intense radioactivity, and that the isolation of this body was therefore an object of the highest interest.
Fortunately for us, the curious properties of these radium-bearing compounds had already attracted general attention and we were assisted in our search.
A chemical factory in Paris consented to undertake the extraction of radium on a large scale. We also received certain pecuniary assistance, which allowed us to treat a large quantity of ore. The most important of these grants was one of twenty thousand francs, for which we are indebted to the Institute of France.
We were thus enabled to treat successively about seven tons of a primary substance which was the residue of pitch-blende after the extraction of uranium. Today we know that a ton of this residue contains from two to three decigrams (from four to seven ten-thousandths of a pound) of radium. During this treatment, and as soon as I had in my possession a decigram of chloride of radium recognized as pure by the spectroscope, I determined the atomic weight of this new element, finding it to be 225, while that of barium is 137.
The properties of radium are extremely curious. This body emits with great intensity all of the different rays that are produced in a vacuum-tube. The radiation, measured by means of an electroscope, is at least a million times more powerful than that from an equal quantity of uranium. A charged electroscope placed at a distance of several meters can be discharged by a few centigrams of a radium salt. One can also discharge an electroscope through a screen of glass or lead five or six centimeters thick. Photographic plates placed in the vicinity of radium are also instantly affected if no screen intercepts the rays; with screens, the action is slower, but it still takes place through very thick ones if the exposure is sufficiently long. Radium can therefore be used in the production of radiographs.
The compounds of radium are spontaneously luminous. The chloride and bromide, freshly prepared and free from water, emit a light which resembles that of a glow-worm. This light diminishes rapidly in moist air; if the salt is in a sealed tube, it diminishes slowly by reason of the transformation of the white salt, which becomes colored, but the light never completely disappears. By redissolving the salt and drying it anew, its original luminosity is restored.
A glass vessel containing radium spontaneously charges itself with electricity. If the glass has a weak spot,—for example, if it is scratched by a file,—an electric spark is produced at that point, the vessel crumbles like a Leiden jar when overcharged, and the electric shock of the rupture is felt by the fingers holding the glass.
Radium possesses the remarkable property of liberating heat spontaneously and continuously. A solid salt of radium develops a quantity of heat such that for each gram of radium contained in the salt there is an emission of one hundred calories per hour. Expressed differently, radium can melt in an hour its own weight of ice. When we reflect that radium acts in this manner continuously, we are amazed at the amount of heat produced, for it can be explained by no known chemical reaction. The radium remains apparently unchanged. If we assume that it undergoes a transformation, we must conclude that the change is extremely slow; in an hour it is impossible to detect a change by any known method.
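The figure of one hundred calories per hour per gram is easy to check against the ice-melting claim. A minimal sketch in Python, assuming the usual latent heat of fusion of ice (about 80 calories per gram):

```python
# Check: can 1 g of radium melt its own weight of ice in an hour?
heat_output = 100.0      # calories emitted per gram of radium per hour (from the text)
latent_heat_ice = 80.0   # calories needed to melt 1 g of ice (standard value)

ice_melted = heat_output / latent_heat_ice   # grams of ice melted per hour
print(f"1 g of radium melts about {ice_melted:.2f} g of ice per hour")
# ~1.25 g/hour, consistent with "its own weight of ice" in an hour
```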
As a result of its emission of heat, radium always possesses a higher temperature than its surroundings. This fact may be established by means of a thermometer, if care is taken to prevent the radium from losing heat.
Radium has the power of communicating its radioactivity to surrounding bodies. This is a property possessed by solutions of radium salts even more than by the solid salts. When a solution of a radium salt is placed in a closed vessel, the radioactivity in part leaves the solution and distributes itself through the vessel, the walls of which become radioactive and luminous. The radiation is therefore in part exteriorized. We may assume, with Mr. Rutherford, that radium emits a radioactive gas and that this spreads through the surrounding air and over the surface of neighboring objects. This gas has received the name emanation. It differs from ordinary gas in the fact that it gradually disappears. [The modern name for this element is radon.]
Certain bodies—bismuth, for instance—may also be rendered active by keeping them in solution with the salts of radium. These bodies then become atomically active, and keep this radioactivity even after chemical transformations. Little by little, however, they lose it, while the activity of radium persists.
The nature of radium radiations is very complex. They may be divided into three distinct groups, according to their properties. One group is composed of radiations absolutely analogous to cathode rays, composed of material particles called electrons, much smaller than atoms, negatively charged, and projected from the radium with great velocity—a velocity which for some of these rays is very little inferior to that of light.
The second group is composed of radiations which are believed to be formed by material particles the mass of which is comparable to that of atoms, charged with positive electricity, and set in motion by radium with a great velocity, but one that is inferior to that of the electrons. Being larger than electrons and possessing at the same time a smaller velocity, these particles have more difficulty in traversing obstacles and form rays that are less penetrating.
Finally, the radiations of the third group are analogous to Roentgen rays and do not behave like projectiles.
The radiations of the first group are easily deflected by a magnet; those of the second group, less easily and in the opposite direction; those of the third group are not deflected. From its power of emitting these three kinds of rays, radium may be likened to a complete little Crookes tube acting spontaneously.
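In modern terms these different behaviors follow from one relation: a particle of mass m, speed v, and charge q moving across a magnetic field of strength B travels in a circle of radius

r = mv / (qB).

The rays of the first group (today's beta rays) are light, negatively charged electrons and so are bent sharply; those of the second group (alpha rays) carry a positive charge and far greater momentum, so they curve only slightly and in the opposite sense; those of the third group (gamma rays) carry no charge and are not bent at all. These modern names, like the bracketed notes above, are a later gloss and do not appear in the original article.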
Radium is a body which gives out energy continuously and spontaneously. This liberation of energy is manifested in the different effects of its radiation and emanation, and especially in the development of heat. Now, according to the most fundamental principles of modern science, the universe contains a certain definite provision of energy, which can appear under various forms, but cannot be increased.
Without renouncing this conception, we cannot believe that radium creates the energy which it emits; it must either absorb energy continuously from without, or possess in itself a reserve of energy sufficient to act during a period of years without visible modification. The first theory we may develop by supposing that space is traversed by radiations that are as yet unknown to us, and that radium is able to absorb these radiations and transform their energy into the energy of radioactivity. Thus in a vacuum-tube the electric energy is utilized to produce cathode rays, and the energy of the latter is partly transformed, by the bodies which absorb them, into the energy of Roentgen rays. It is true that we have no proof of the existence of radiations which produce radioactivity; but, as indicated at the beginning of this article, there is nothing improbable in supposing that such radiations exist about us without our suspecting it.
If we assume that radium contains a supply of energy which it gives out little by little, we are led to believe that this body does not remain unchanged, as it appears to, but that it undergoes an extremely slow change. Several reasons speak in favor of this view. First, the emission of heat, which makes it seem probable that a chemical reaction is taking place in the radium. But this can be no ordinary chemical reaction, affecting the combination of atoms in the molecule. No chemical reaction can explain the emission of heat due to radium. Furthermore, radioactivity is a property of the atom of radium; if, then, it is due to a transformation this transformation must take place in the atom itself. Consequently, from this point of view, the atom of radium would be in a process of evolution, and we should be forced to abandon the theory of the invariability of atoms, which is at the foundation of modern chemistry.
Moreover, we have seen that radium acts as though it shot out into space a shower of projectiles, some of which have the dimensions of atoms, while others can only be very small fractions of atoms. If this image corresponds to a reality, it follows necessarily that the atom of radium breaks up into subatoms of different sizes, unless these projectiles come from the atoms of the surrounding gas, disintegrated by the action of radium; but this view would likewise lead us to believe that the stability of atoms is not absolute.
Radium emits continuously a radioactive emanation which, from many points of view, possesses the properties of a gas. Mr. Rutherford considers the emanation as one of the results of the disintegration of the atom of radium, and believes it to be an unstable gas which is itself slowly decomposed.
Professor Ramsay has announced that radium emits helium gas continuously. If this very important fact is confirmed, it will show that a transformation is occurring either in the atom of radium or in the neighboring atoms, and a proof will exist that the transmutation of the elements is possible. [In fact radium does emit helium, as alpha particles.]
When a body that has remained in solution with radium becomes radioactive, the chemical properties of this body are modified, and here again it seems as though we have to deal with a modification of the atom. It would be very interesting to see whether, by thus giving radioactivity to bodies, we can succeed in causing an appreciable change in their atoms. We should thus have a means of producing certain transformations of elements at will. [These observations were misleading. True artificial radioactivity was not produced until the work of Irène and Frédéric Joliot-Curie in 1934.]
It is seen that the study of the properties of radium is of great interest. This is true also of the other strongly radioactive substances, polonium and actinium, which are less known because their preparation is still more difficult. All are found in the ores of uranium and thorium, and this fact is certainly not the result of chance, but must have some connection with the manner of formation of these elements. Polonium, when it has just been extracted from pitch-blende, is as active as radium, but its radioactivity slowly disappears; actinium has a persistent activity. These two bodies differ from radium in many ways; their study should therefore be fertile in new results. Actinium lends itself readily to the study of the emanation and of the radioactivity produced in inactive bodies, since it gives out emanation in great quantity. It would also be interesting, from the chemical point of view, to prove that polonium and actinium contain new elements. Finally, one might seek out still other strongly radioactive substances and study them.
But all these investigations are exceedingly difficult because of the obstacles encountered in the preparation of strongly radioactive substances. At the present time we possess only about a gram of pure salts of radium. Research in all branches of experimental science—physics, chemistry, physiology, medicine—is impeded, and a whole evolution in science is retarded, by the lack of this precious and unique material, which can now be obtained only at great expense. We must now look to individual initiative to come to the aid of science, as it has so often done in the past, and to facilitate and expedite by generous gifts the success of researches the influence of which may be far-reaching.

Saturday, November 14, 2009

COMPUTER SCIENCE AND TECHNOLOGY

Computer science (or computing science) is the study of the theoretical foundations of information and computation, and of practical techniques for their implementation and application in computer systems. It is frequently described as the systematic study of algorithmic processes that create, describe and transform information. According to Peter J. Denning, the fundamental question underlying computer science is, "What can be (efficiently) automated?" Computer science has many sub-fields; some, such as computer graphics, emphasize the computation of specific results, while others, such as computational complexity theory, study the properties of computational problems. Still others focus on the challenges in implementing computations. For example, programming language theory studies approaches to describing computations, while computer programming applies specific programming languages to solve specific computational problems, and human-computer interaction focuses on the challenges in making computers and computations useful, usable, and universally accessible to people.
The general public sometimes confuses computer science with vocational areas that deal with computers (such as information technology), or thinks that it relates to their own experience of computers, which typically involves activities such as gaming, web-browsing, and word-processing. However, the focus of computer science is more on understanding the properties of the programs used to implement software such as games and web-browsers, and using that understanding to create new programs or improve existing ones.

As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software. The Computer Sciences Accreditation Board (CSAB) – which is made up of representatives of the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers Computer Society, and the Association for Information Systems – identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, computer-human interaction, computer graphics, operating systems, and numerical and symbolic computation as important areas of computer science.

Some universities teach computer science as a theoretical study of computation and algorithmic reasoning. These programs often feature the theory of computation, analysis of algorithms, formal methods, concurrency theory, databases, computer graphics, and systems analysis, among others. They typically also teach computer programming, but treat it as a vessel for the support of other fields of computer science rather than a central focus of high-level study.
Other colleges and universities, as well as secondary schools and vocational programs that teach computer science, emphasize the practice of advanced programming rather than the theory of algorithms and computation in their computer science curricula. Such curricula tend to focus on those skills that are important to workers entering the software industry. The practical aspects of computer programming are often referred to as software engineering. However, there is a lot of disagreement over the meaning of the term, and whether or not it is the same thing as programming.
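To make the contrast between studying algorithms and merely using computers concrete, here is a minimal sketch in Python (an illustrative example, not drawn from the text above): binary search, an algorithm whose interest to computer science lies less in the code itself than in the provable guarantee that it needs only O(log n) comparisons on a sorted list of n items.

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent.

    Each comparison halves the remaining search range, so at most
    O(log n) comparisons are needed -- the kind of property that the
    analysis of algorithms studies independently of any one program.
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))   # -> 3
print(binary_search([2, 3, 5, 7, 11, 13], 6))   # -> -1
```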

Friday, November 13, 2009

THOMAS ALVA EDISON

Thomas Alva Edison was born on February 11, 1847 in Milan, Ohio. His parents, Samuel Ogden Edison, Jr. and Nancy Matthews Elliott, had seven children, of whom Thomas was the youngest; three of his siblings died in childhood. Edison was one of the greatest inventors in history. Some of his best-known inventions are the light bulb, the mimeograph, and the phonograph. His most important invention was the central power station that provided electricity to multiple users, and it became the foundation of his company, now known as General Electric. He patented over 1,000 inventions in his lifetime.
[Image: Model of the Pearl Street generating plant]

Edison's development of central-station lighting systems for cities was one of the great achievements in world history because it brought electricity out of the laboratory into actual commercial use. His Pearl Street (New York City) generating station introduced four key elements of a modern electric utility system: reliable central generation, efficient distribution, a successful end use (in 1882, the light bulb), and a competitive price. It was a model of efficiency for its time. At first it served 59 customers at about 24 cents per kilowatt-hour. By the late 1880s, power demand for electric motors (especially for elevators and streetcars) moved the industry from mainly nighttime lighting to 24-hour service and dramatically raised electricity demand for transportation and industry. By the end of the 1880s, small central stations dotted many U.S. cities, each limited to an area of a few blocks because of the transmission inefficiencies of direct current (DC). Edison fought a mighty battle with George Westinghouse, who developed a competing system based on alternating current (AC).

ALBERT EINSTEIN AND HIS THEORIES

Albert Einstein (Ulm, March 14, 1879 – Princeton, April 18, 1955) was a German-born theoretical physicist. Popularly regarded as the most important scientist of the 20th century, he formulated the special and general theories of relativity and made important contributions to both quantum theory and statistical mechanics. While best known for the theory of relativity (and specifically mass-energy equivalence, E = mc²), he was awarded the 1921 Nobel Prize in Physics for his explanation of the photoelectric effect, published in 1905 (his "miraculous year"), and "for his services to Theoretical Physics".
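Mass-energy equivalence is easy to illustrate numerically. A minimal sketch in Python (the one-gram figure is an illustrative choice, not from the text):

```python
# E = m * c**2: the energy equivalent of a small mass
c = 2.998e8    # speed of light in m/s
m = 0.001      # mass in kg (one gram, chosen for illustration)

E = m * c**2   # energy in joules
print(f"1 g of mass is equivalent to about {E:.2e} J")
# ~9.0e13 J, roughly the energy released by 21 kilotons of TNT
```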
His world fame in physics began in the early 20th century when indisputable evidence supporting his ideas became available. Analysis of the May-1919 British solar-eclipse expeditions confirmed that light rays from distant stars were deflected by the Sun's gravitation, just as predicted by the field equations of general relativity. Once the results of the analysis were accepted, The London Times ran the headline (on November 7, 1919): "Revolution in science – New theory of the Universe – Newtonian ideas overthrown" and Albert Einstein became a global celebrity, an unusual designation for a scientist. In popular culture, the name "Einstein" has become synonymous with great intelligence and genius.
Einstein could not find a teaching post upon graduation, mostly because his brashness as a young man had apparently irritated most of his professors. The father of a classmate helped him obtain employment as a technical assistant examiner at the Swiss Patent Office in 1902. There, Einstein judged the worth of inventors' patent applications for devices that required a knowledge of physics to understand — in particular he was chiefly charged to evaluate patents relating to electromagnetic devices. He also learned how to discern the essence of applications despite sometimes poor descriptions, and was taught by the director how "to express [him]self correctly". He occasionally rectified their design errors while evaluating the practicality of their work.
Einstein married Mileva Marić on January 6, 1903. Einstein's marriage to Marić, who was a mathematician, was both a personal and intellectual partnership: Einstein referred to Mileva as "a creature who is my equal and who is as strong and independent as I am". Ronald W. Clark, a biographer of Einstein, claimed that Einstein depended on the distance that existed in his marriage to Mileva in order to have the solitude necessary to accomplish his work; he required intellectual isolation. Abram Joffe, a Soviet physicist who knew Einstein, wrote in an obituary of him, "The author of [the papers of 1905] was… a bureaucrat at the Patent Office in Bern, Einstein-Marić", and this has recently been taken as evidence of a collaborative relationship. However, according to Alberto A. Martínez of the Center for Einstein Studies at Boston University, Joffe only ascribed authorship to Einstein, as he believed that it was a Swiss custom at the time to append the spouse's last name to the husband's name. Whatever the truth, the extent of her influence on Einstein's work remains a highly controversial and debated question.
In 1903, Einstein's position at the Swiss Patent Office had been made permanent, though he was passed over for promotion until he had "fully mastered machine technology". He obtained his doctorate under Alfred Kleiner at the University of Zurich after submitting his thesis "A new determination of molecular dimensions" ("Eine neue Bestimmung der Moleküldimensionen") in 1905.

SIR ISAAC NEWTON AND HIS THEORIES

Sir Isaac Newton FRS (4 January 1643 – 31 March 1727 [OS: 25 December 1642 – 20 March 1727]) was an English physicist, mathematician, astronomer, natural philosopher, alchemist, and theologian, widely regarded by scholars and the general public as one of the most influential figures in history. His 1687 publication of the Philosophiæ Naturalis Principia Mathematica (usually called the Principia) is considered to be among the most influential books in the history of science, laying the groundwork for most of classical mechanics. In this work, Newton described universal gravitation and the three laws of motion which dominated the scientific view of the physical universe for the next three centuries. Newton showed that the motions of objects on Earth and of celestial bodies are governed by the same set of natural laws: by demonstrating the consistency between Kepler's laws of planetary motion and his theory of gravitation, he removed the last doubts about heliocentrism and advanced the scientific revolution.
In mechanics, Newton enunciated the conservation principles of momentum and angular momentum. In optics, he built the first practical reflecting telescope and developed a theory of colour based on the observation that a prism decomposes white light into the many colours that form the visible spectrum. He also formulated an empirical law of cooling and studied the speed of sound.
In mathematics, Newton shares the credit with Gottfried Leibniz for the development of the differential and integral calculus. He also demonstrated the generalised binomial theorem, developed the so-called "Newton's method" for approximating the zeroes of a function, and contributed to the study of power series.
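Newton's method is simple enough to state in a few lines: starting from a guess x, repeatedly replace x with x − f(x)/f′(x), which converges very rapidly near a simple zero. A minimal sketch in Python (the square-root example is an illustrative choice):

```python
def newtons_method(f, df, x, tol=1e-12, max_iter=50):
    """Approximate a zero of f using Newton's iteration x -> x - f(x)/df(x)."""
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:   # stop once the correction is negligible
            break
    return x

# Example: the positive zero of f(x) = x**2 - 2 is sqrt(2)
root = newtons_method(lambda x: x**2 - 2, lambda x: 2 * x, x=1.0)
print(root)   # -> 1.4142135623730951
```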
Newton remains influential among scientists, as demonstrated by a 2005 survey conducted by Britain's Royal Society, which asked scientists and the general public who had the greater effect on the history of science, Newton or Albert Einstein. Newton was deemed to have made the greater overall contribution to science, although the two men were closer when it came to contributions to humanity. Newton was also highly religious, though an unorthodox Christian, writing more on Biblical hermeneutics than on the natural science for which he is remembered today.

Later in life, as holder of the Cambridge Lucasian Chair of Mathematics, Newton worked out his initial ideas into a set of mechanical laws, with his second and most important law: force is mass times acceleration (F = ma). Newton was the first to understand the concept of inertial forces, notably the centrifugal force, although Christiaan Huygens came close to understanding this effect. In 1684 Newton proved that Kepler's laws follow from his own second law in conjunction with his gravitational law. This proof completed the astronomical revolution initiated by Copernicus.
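The flavor of that proof can be seen in the special case of a circular orbit (Newton's own argument handles ellipses; this is only the simplest illustration). Setting the gravitational force equal to the centripetal force required by the second law,

GMm / r² = mv² / r,

gives v² = GM / r. Since the orbital period is T = 2πr / v, it follows that

T² = 4π² r³ / (GM),

which is Kepler's third law: the square of the period is proportional to the cube of the orbital radius.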

CARBON AND ITS COMPOUNDS

Carbon is the sixth most abundant element in the universe and is unique in its dominant role in the chemistry of life and in the human economy. It is a nonmetallic element with the symbol C, the atomic number 6, an atomic weight of 12.01115, and a melting point of about 3,550ºC. There are four known allotropes of carbon: amorphous, graphite, diamond, and fullerene. A fifth allotrope of carbon was recently produced: a spongy solid called a magnetic carbon "nanofoam" that is extremely lightweight and attracted to magnets. The name derives from the Latin carbo, for "charcoal". Carbon was known in prehistoric times in the form of charcoal and soot. In 1797, the English chemist Smithson Tennant proved that diamond is pure carbon. Carbon is found in abundance in the sun, stars, comets, and the atmospheres of most planets. Carbon in the form of microscopic diamonds is found in some meteorites.
Natural diamonds are found in kimberlite of ancient volcanic "pipes," found in South Africa, Arkansas, and elsewhere. Diamonds are now also being recovered from the ocean floor off the Cape of Good Hope. About 30% of all industrial diamonds used in the U.S. are now made synthetically.
The energy of the sun and stars can be attributed at least in part to the well-known carbon-nitrogen cycle. Due to carbon's unusual chemical property of being able to bond with itself and a wide variety of other elements, it forms nearly ten million known compounds. Carbon is present as carbon dioxide in the atmosphere and dissolved in all natural waters. It is a component of rocks as carbonates of calcium (limestone), magnesium, and iron.
The fossil fuels (coal, crude oil, natural gas, oil sands, and shale oils) are chiefly hydrocarbons. Carbon is the active element of photosynthesis and the key structural component of all living matter. The isotope carbon-12 is used as the basis for atomic weights. Carbon-14, a radioactive isotope with a half-life of 5,730 years, is used to date such materials as wood and archaeological specimens. In 1960, W.F. Libby was awarded the Nobel Prize in Chemistry for developing the carbon dating method.
Organic chemistry, a major subfield of chemistry, is the study of carbon and its compounds. Because carbon dioxide is a principal greenhouse gas, the global carbon cycle has become a focus of scientific inquiry in relation to global warming, and the management of carbon dioxide emissions from the combustion of fossil fuels is a central technological, economic, and political concern.
Carbon is unique among the elements in the vast number and variety of compounds it can form. With hydrogen, oxygen, nitrogen, and other elements, it forms a very large number of compounds, with carbon atoms often linked to one another in chains and rings. There are close to ten million known carbon compounds, many thousands of which are vital to organic and life processes.
Without carbon, the basis for life would be impossible. While it has been suggested that silicon might take the place of carbon in forming a host of similar compounds, it now appears that silicon cannot form stable compounds with very long chains of silicon atoms. The atmosphere of Mars contains 96.2% CO2. Some of the most important compounds of carbon are carbon dioxide (CO2), carbon monoxide (CO), carbon disulfide (CS2), chloroform (CHCl3), carbon tetrachloride (CCl4), methane (CH4), ethylene (C2H4), acetylene (C2H2), benzene (C6H6), acetic acid (CH3COOH), and their derivatives. Carbon has many isotopes, but just three exist in detectable amounts in nature. Carbon-12, a stable (non-radioactive) isotope, comprises nearly 99% of all carbon on Earth. In 1961 the International Union of Pure and Applied Chemistry adopted the isotope carbon-12 as the basis for atomic weights. Carbon-13, also a stable isotope, is the next most abundant, comprising slightly more than 1% of all carbon on Earth. Carbon-14 is the most abundant radioactive isotope of carbon, at about one part per trillion. It has a half-life of 5,730 years and has been widely used, through radiocarbon dating, to date materials such as wood and archaeological specimens. All other isotopes of carbon are highly unstable and extremely rare.
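Radiocarbon dating rests on simple exponential decay: if a sample retains a fraction f of its original carbon-14, its age is t = T · log2(1/f), where T is the half-life. A minimal sketch in Python (the 25% figure is an illustrative choice):

```python
import math

HALF_LIFE = 5730.0   # half-life of carbon-14 in years

def radiocarbon_age(fraction_remaining):
    """Age in years of a sample retaining this fraction of its original C-14."""
    return HALF_LIFE * math.log2(1.0 / fraction_remaining)

# A sample with 25% of its original carbon-14 is two half-lives old:
print(radiocarbon_age(0.25))   # -> 11460.0 years
```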

Thursday, November 12, 2009

ELECTRIC CURRENT

The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current.
By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively-charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation.

[Image: An electric arc provides an energetic demonstration of electric current]
The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second, the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.
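The gap between slow charges and fast signals can be made quantitative. The average drift velocity of the electrons is v = I / (nqA), where I is the current, n the density of free electrons, q the electron charge, and A the wire's cross-sectional area. A minimal sketch in Python, assuming typical textbook values for a copper wire (the 1 A current and 1 mm² cross-section are illustrative choices):

```python
# Drift velocity of electrons in a copper wire: v = I / (n * q * A)
I = 1.0         # current in amperes (illustrative)
n = 8.5e28      # free electrons per cubic metre in copper (typical value)
q = 1.602e-19   # electron charge in coulombs
A = 1.0e-6      # cross-sectional area in square metres (1 mm^2, illustrative)

v = I / (n * q * A)
print(f"drift velocity: {v * 1000:.3f} mm/s")
# ~0.073 mm/s -- a tiny fraction of a millimetre per second, as the text says
```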
Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840. One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass. He had discovered electromagnetism, a fundamental interaction between electricity and magnetism.
In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced for example by a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sinusoidal wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction and then the reverse. Alternating current is affected by electrical properties that are not observed under steady-state direct current, such as inductance and capacitance; these properties also become important when circuitry is subjected to transients, such as when first energised.
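The statement that an alternating current averages to zero yet still delivers energy is easy to verify numerically: power depends on the square of the current, so the meaningful average is the root-mean-square (RMS) value, which for a sine wave is the peak value divided by √2. A minimal sketch in Python (the 1 A peak amplitude is an illustrative choice):

```python
import math

# Sample one full cycle of a sinusoidal current with 1 A peak amplitude
N = 100000
samples = [math.sin(2 * math.pi * k / N) for k in range(N)]

mean = sum(samples) / N
rms = math.sqrt(sum(s * s for s in samples) / N)

print(f"mean: {mean:.6f} A")   # ~0.0 -> no net charge transport
print(f"rms:  {rms:.6f} A")    # ~0.707107 = 1/sqrt(2) -> what heats a resistor
```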