Saturday, November 21, 2009

WIND ENERGY

The Earth is unevenly heated by the sun: the poles receive less solar energy than the equator, and dry land heats up (and cools down) more quickly than the seas do. This differential heating drives a global atmospheric convection system reaching from the Earth's surface to the stratosphere, which acts as a virtual ceiling. Most of the energy stored in these wind movements is found at high altitudes, where continuous wind speeds of over 160 km/h (99 mph) occur. Eventually, the wind energy is converted through friction into diffuse heat throughout the Earth's surface and the atmosphere.
The total amount of economically extractable power available from the wind is considerably more than present human power use from all sources. An estimated 72 TW of wind power on the Earth potentially can be commercially viable, compared to about 15 TW average global power consumption from all sources in 2005. Not all the energy of the wind flowing past a given point can be recovered (see Betz' law).
The strength of wind varies, and an average value for a given location does not alone indicate the amount of energy a wind turbine could produce there. To assess the frequency of wind speeds at a particular location, a probability distribution function is often fitted to the observed data. Different locations have different wind speed distributions. The Weibull model closely mirrors the actual distribution of hourly wind speeds at many locations. The Weibull shape factor is often close to 2, so the Rayleigh distribution (the special case of the Weibull distribution with a shape factor of exactly 2) can be used as a less accurate but simpler model.
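To make the Weibull/Rayleigh relationship concrete, here is a minimal Python sketch that evaluates the Weibull density for wind speed. The 6 m/s mean wind speed is a made-up illustrative value, not data for any real site.

```python
import numpy as np

def weibull_pdf(v, k, c):
    """Weibull probability density for wind speed v (m/s),
    with shape factor k and scale factor c (m/s)."""
    return (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)

# The Rayleigh distribution is the Weibull special case k = 2.
# For k = 2 the scale factor relates to the mean wind speed by
# c = 2 * v_mean / sqrt(pi).
v_mean = 6.0                        # hypothetical site mean wind speed, m/s
c = 2.0 * v_mean / np.sqrt(np.pi)

for v in np.linspace(0.0, 25.0, 6):
    print(f"{v:5.1f} m/s -> pdf {weibull_pdf(v, 2.0, c):.4f}")
```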
Because so much power is generated at higher wind speeds, much of the energy comes in short bursts. The 2002 Lee Ranch sample is telling: half of the energy available arrived in just 15% of the operating time. The consequence is that wind energy from a particular turbine or wind farm does not have as consistent an output as fuel-fired power plants; utilities that use wind power must keep existing generation ready to start up for times when the wind is weak, so wind power is primarily a fuel saver rather than a capacity saver. Making wind power more consistent requires that various existing technologies and methods be extended, in particular the use of stronger inter-regional transmission lines to link widely distributed wind farms. Problems of variability are also addressed by grid energy storage, batteries, pumped-storage hydroelectricity and energy demand management.
Since wind speed is not constant, a wind farm's annual energy production is never as much as the sum of the generator nameplate ratings multiplied by the total hours in a year. The ratio of actual productivity in a year to this theoretical maximum is called the capacity factor. Typical capacity factors are 20–40%, with values at the upper end of the range in particularly favourable sites. For example, a 1 MW turbine with a capacity factor of 35% will not produce 8,760 MWh in a year (1 MW × 24 h × 365 days), but only 1 × 0.35 × 24 × 365 = 3,066 MWh, averaging 0.35 MW. Online data is available for some locations, and the capacity factor can be calculated from the yearly output.
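The capacity-factor arithmetic is simple enough to check directly; this short Python sketch just reproduces the 1 MW example above.

```python
# Capacity factor = actual annual output / (nameplate power * hours in a year).
nameplate_mw = 1.0
hours_per_year = 24 * 365            # 8,760 h
capacity_factor = 0.35

theoretical_max_mwh = nameplate_mw * hours_per_year
actual_mwh = capacity_factor * theoretical_max_mwh

print(f"Theoretical maximum: {theoretical_max_mwh:,.0f} MWh")        # 8,760 MWh
print(f"Expected output:     {actual_mwh:,.0f} MWh")                 # 3,066 MWh
print(f"Average power:       {actual_mwh / hours_per_year:.2f} MW")  # 0.35 MW
```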
Unlike fueled generating plants, wind farms have a capacity factor limited by the inherent properties of wind. Capacity factors of other types of power plant are based mostly on fuel cost, with a small amount of downtime for maintenance. Nuclear plants have low incremental fuel cost, and so are run at full output, achieving a capacity factor of around 90%. Plants with higher fuel cost are throttled back to follow load. Gas turbine plants using natural gas as fuel may be very expensive to operate, and may be run only to meet peak power demand; a gas turbine plant may have an annual capacity factor of only 5–25% due to its relatively high energy production cost.
According to a 2007 Stanford University study published in the Journal of Applied Meteorology and Climatology, interconnecting ten or more wind farms can allow an average of 33% of the total energy produced to be used as reliable, baseload electric power, as long as minimum criteria are met for wind speed and turbine height.

Friday, November 20, 2009

HEALTH AND NUTRITION

Nutrition (also called nourishment or aliment) is the provision, to cells and organisms, of the materials necessary (in the form of food) to support life. Many common health problems can be prevented or alleviated with a healthy diet.
The diet of an organism is what it eats, and is largely determined by the perceived palatability of foods. Dietitians are health professionals who specialize in human nutrition, meal planning, economics, and preparation. They are trained to provide safe, evidence-based dietary advice and management to individuals (in health and disease), as well as to institutions.
A poor diet can have an injurious impact on health, causing deficiency diseases such as scurvy, beriberi, and kwashiorkor; health-threatening conditions like obesity and metabolic syndrome; and such common chronic systemic diseases as cardiovascular disease, diabetes, and osteoporosis.
There are seven major classes of nutrients: carbohydrates, fats, fiber, minerals, proteins, vitamins, and water.
These nutrient classes can be categorized as either macronutrients (needed in relatively large amounts) or micronutrients (needed in smaller quantities). The macronutrients are carbohydrates, fats, fiber, proteins, and water. The micronutrients are minerals and vitamins.
The macronutrients (excluding fiber and water) provide both structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can also be used to generate energy internally, and in either case it is measured in joules or kilocalories (often called "Calories" and written with a capital C to distinguish them from little-'c' calories). Carbohydrates and proteins provide approximately 17 kJ (4 kcal) of energy per gram, while fats provide 37 kJ (9 kcal) per gram,[1] though the net energy from either depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. Fiber (i.e., non-digestible material such as cellulose) seems to be required for both mechanical and biochemical reasons, though the exact reasons remain unclear.
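As a worked example of the energy factors quoted above, the following Python sketch totals the energy of a meal. The gram quantities are invented for illustration, and the gross 17/37 kJ per gram factors ignore the absorption effects just mentioned.

```python
# Approximate energy factors from the text: ~17 kJ/g (4 kcal/g) for
# carbohydrate and protein, ~37 kJ/g (9 kcal/g) for fat.
KJ_PER_GRAM = {"carbohydrate": 17.0, "protein": 17.0, "fat": 37.0}

meal = {"carbohydrate": 60.0, "protein": 25.0, "fat": 15.0}  # grams (illustrative)

total_kj = sum(KJ_PER_GRAM[n] * g for n, g in meal.items())
total_kcal = total_kj / 4.184        # 1 kcal (one "Calorie") = 4.184 kJ

print(f"{total_kj:.0f} kJ  ~=  {total_kcal:.0f} Calories")
```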
Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers bound to a glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids, some of which are essential in the sense that humans cannot make them internally. Some of the amino acids are convertible (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose is. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. This occurs normally only during prolonged starvation.
Other micronutrients include antioxidants and phytochemicals which are said to influence (or protect) some body systems. Their necessity is not as well established as in the case of, for instance, vitamins.
Most foods contain a mix of some or all of the nutrient classes, together with other substances such as toxins of various sorts. Some nutrients can be stored internally (e.g., the fat-soluble vitamins), while others are required more or less continuously. Poor health can be caused by a lack of required nutrients or, in extreme cases, by too much of a required nutrient: both salt and water, for example, though absolutely required, will cause illness or even death in too large amounts.
Heart disease, cancer, obesity, and diabetes are commonly called "Western" diseases because these maladies were once rarely seen in developing countries. One study in China found some regions had essentially no cancer or heart disease, while in other areas they reflected "up to a 100-fold increase," coincident with diets ranging from entirely plant-based to heavily animal-based, respectively.[24] In contrast, diseases of affluence like cancer and heart disease are common throughout the United States. Adjusted for age and exercise, large regional clusters of people in China rarely suffered from these "Western" diseases, possibly because their diets are rich in vegetables, fruits and whole grains.[24]
The United Healthcare/Pacificare nutrition guideline recommends a whole plant food diet, and recommends using protein only as a condiment with meals. A National Geographic cover article from November 2005, entitled "The Secrets of Living Longer", also recommends a whole plant food diet. The article is a lifestyle survey of three populations, Sardinians, Okinawans, and Adventists, who generally display longevity and "suffer a fraction of the diseases that commonly kill people in other parts of the developed world, and enjoy more healthy years of life." In sum, they offer three sets of 'best practices' to emulate; the rest is up to you. Common to all three groups is the advice to "Eat fruits, vegetables, and whole grains."
The National Geographic article noted that an NIH-funded study of 34,000 Seventh-day Adventists between 1976 and 1988 "…found that the Adventists' habit of consuming beans, soy milk, tomatoes, and other fruits lowered their risk of developing certain cancers. It also suggested that eating whole grain bread, drinking five glasses of water a day, and, most surprisingly, consuming four servings of nuts a week reduced their risk of heart disease."

Thursday, November 19, 2009

ABOUT MAGNETISM

The term magnetism describes how materials respond on the microscopic level to an applied magnetic field, and is used to categorize the magnetic phase of a material. The most well known form of magnetism is ferromagnetism, in which some materials produce their own persistent magnetic field. However, all materials are influenced to a greater or lesser degree by the presence of a magnetic field. Some are attracted to a magnetic field (paramagnetism); others are repelled by it (diamagnetism); still others have a much more complex relationship with an applied magnetic field. Substances that are negligibly affected by magnetic fields are known as non-magnetic substances; they include copper, aluminium, water, and gases.
The magnetic state (or phase) of a material depends on temperature (and other variables such as pressure and applied magnetic field) so that a material may exhibit more than one form of magnetism depending on its temperature, etc.
In magnetic materials, the most important sources of magnetization are, more specifically, the electrons' orbital angular motion around the nucleus, and the electrons' intrinsic magnetic moment (see Electron magnetic dipole moment). The other potential sources of magnetism are much less important: For example, the nuclear magnetic moments of the nuclei in the material are typically thousands of times smaller than the electrons' magnetic moments, so they are negligible in the context of the magnetization of materials. (Nuclear magnetic moments are important in other contexts, particularly in Nuclear Magnetic Resonance (NMR) and Magnetic Resonance Imaging (MRI).)
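The "thousands of times smaller" claim can be made quantitative: the natural unit for electronic moments (the Bohr magneton) exceeds the unit for nuclear moments (the nuclear magneton) by the proton-to-electron mass ratio, roughly 1836. A one-line check in Python:

```python
# mu_B / mu_N = (e*hbar / 2*m_e) / (e*hbar / 2*m_p) = m_p / m_e
m_p = 1.67262e-27   # proton mass, kg
m_e = 9.10938e-31   # electron mass, kg
print(f"Bohr magneton / nuclear magneton = {m_p / m_e:.0f}")   # ~1836
```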
Ordinarily, the countless electrons in a material are arranged such that their magnetic moments (both orbital and intrinsic) cancel out. This is due, to some extent, to electrons combining into pairs with opposite intrinsic magnetic moments (as a result of the Pauli exclusion principle; see Electron configuration), or combining into "filled subshells" with zero net orbital motion; in both cases, the electrons are arranged so that the magnetic moments from each electron exactly cancel. Moreover, even when the electron configuration is such that there are unpaired electrons or non-filled subshells, it is often the case that the various electrons in the solid will contribute magnetic moments that point in different, random directions, so that the material will not be magnetic.
However, sometimes (either spontaneously, or owing to an applied external magnetic field) each of the electron magnetic moments will be, on average, lined up. Then the material can produce a net total magnetic field, which can potentially be quite strong.
The magnetic behavior of a material depends on its structure (particularly its electron configuration, for the reasons mentioned above), and also on the temperature (at high temperatures, random thermal motion makes it more difficult for the electrons to maintain alignment).

As a consequence of Einstein's theory of special relativity, electricity and magnetism are understood to be fundamentally interlinked. Both magnetism lacking electricity, and electricity without magnetism, are inconsistent with special relativity, due to such effects as length contraction, time dilation, and the fact that the magnetic force is velocity-dependent. However, when both electricity and magnetism are taken into account, the resulting theory (electromagnetism) is fully consistent with special relativity.[6][10] In particular, a phenomenon that appears purely electric to one observer may be purely magnetic to another; more generally, the relative contributions of electricity and magnetism depend on the frame of reference. Thus, special relativity "mixes" electricity and magnetism into a single, inseparable phenomenon called electromagnetism (analogous to how relativity "mixes" space and time into spacetime).

ELECTRIC CURRENT AND CHARGE

The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current.
By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation.

An electric arc provides an energetic demonstration of electric current.
The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, in which electrons flow through a conductor such as a metal, and electrolysis, in which ions (charged atoms) flow through liquids. While the particles themselves can move quite slowly, sometimes with an average drift velocity of only fractions of a millimetre per second, the electric field that drives them propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.
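The slow drift of the charge carriers can be estimated from v = I / (nAq). The sketch below uses a standard handbook value for copper's free-electron density; the 1 A current and 1 mm wire diameter are arbitrary illustrative choices.

```python
import math

I = 1.0                 # current, amperes (illustrative)
d = 1.0e-3              # wire diameter, metres (1 mm, illustrative)
n = 8.5e28              # free-electron density of copper, m^-3 (handbook value)
q = 1.602e-19           # elementary charge, coulombs

A = math.pi * (d / 2) ** 2          # cross-sectional area, m^2
v_drift = I / (n * A * q)           # drift velocity, metres per second

print(f"Drift velocity: {v_drift * 1000:.3f} mm/s")   # ~0.094 mm/s
```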
Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840. One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820 when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass. He had discovered electromagnetism, a fundamental interaction between electricity and magnetism.
In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced for example by a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sinusoidal wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady-state direct current, such as inductance and capacitance. These properties, however, can also become important when circuitry is subjected to transients, such as when it is first energised.
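A quick numerical check of the claim that an alternating current averages to zero while still delivering energy: this sketch samples one cycle of a sinusoid and compares the mean current with the mean I²R power. The peak current and resistance are arbitrary illustrative values.

```python
import numpy as np

I_peak = 1.0            # peak current, amperes (illustrative)
R = 10.0                # resistance, ohms (illustrative)

t = np.linspace(0.0, 1.0, 10_000, endpoint=False)   # one full cycle (period 1 s)
i = I_peak * np.sin(2 * np.pi * t)

print(f"Mean current: {i.mean():+.6f} A")           # ~0: no net charge transport
print(f"Mean power:   {(i**2 * R).mean():.3f} W")   # I_peak^2 * R / 2 = 5 W
```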
An electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task.
The components in an electric circuit can take many forms, which can include elements such as resistors, capacitors, switches, transformers and electronics. Electronic circuits contain active components, usually semiconductors, and typically exhibit non-linear behaviour, requiring complex analysis. The simplest electric components are those that are termed passive and linear: while they may temporarily store energy, they contain no sources of it, and exhibit linear responses to stimuli.

The resistor is perhaps the simplest of passive circuit elements: as its name suggests, it resists the current through it, dissipating its energy as heat. The resistance is a consequence of the motion of charge through a conductor: in metals, for example, resistance is primarily due to collisions between electrons and ions. Ohm's law is a basic law of circuit theory, stating that the current passing through a resistance is directly proportional to the potential difference across it. The resistance of most materials is relatively constant over a range of temperatures and currents; materials under these conditions are known as 'ohmic'. The ohm, the unit of resistance, was named in honour of Georg Ohm, and is symbolised by the Greek letter Ω. 1 Ω is the resistance that will produce a potential difference of one volt in response to a current of one ampere.

The capacitor is a device capable of storing charge, and thereby storing electrical energy in the resulting field. Conceptually, it consists of two conducting plates separated by a thin insulating layer; in practice, thin metal foils are coiled together, increasing the surface area per unit volume and therefore the capacitance. The unit of capacitance is the farad, named after Michael Faraday, and given the symbol F: one farad is the capacitance that develops a potential difference of one volt when it stores a charge of one coulomb. A capacitor connected to a voltage supply initially causes a current as it accumulates charge; this current will however decay in time as the capacitor fills, eventually falling to zero. A capacitor will therefore not permit a steady-state current, but instead blocks it.
The inductor is a conductor, usually a coil of wire, that stores energy in a magnetic field in response to the current through it. When the current changes, the magnetic field does too, inducing a voltage between the ends of the conductor. The induced voltage is proportional to the time rate of change of the current. The constant of proportionality is termed the inductance. The unit of inductance is the henry, named after Joseph Henry, a contemporary of Faraday. One henry is the inductance that will induce a potential difference of one volt if the current through it changes at a rate of one ampere per second. The inductor's behaviour is in some regards converse to that of the capacitor: it will freely allow an unchanging current, but opposes a rapidly changing one.
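The defining relations of the three passive elements just described can be evaluated directly. The component values below are arbitrary illustrative choices, not recommendations.

```python
# Resistor: V = I * R.  Capacitor: Q = C * V.  Inductor: V = L * dI/dt.
R = 100.0        # ohms
C = 1.0e-6       # farads (1 microfarad)
L = 0.5          # henries

# Resistor: 50 mA through 100 ohms drops 5 V.
print(f"V_R = {0.05 * R:.1f} V")

# Capacitor: charged to 12 V, it stores Q = C * V coulombs.
print(f"Q   = {C * 12.0:.1e} C")

# Inductor: current ramping at 2 A/s induces V = L * dI/dt.
print(f"V_L = {L * 2.0:.1f} V")
```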

WATER AND ITS ANOMALOUS EXPANSION PROPERTY

A molecule is an aggregation of atomic nuclei and electrons that is sufficiently stable to possess observable properties, and there are few molecules that are more stable and difficult to decompose than H2O. In water, each hydrogen nucleus is bound to the central oxygen atom by a pair of electrons that are shared between them; chemists call this shared electron pair a covalent chemical bond. In H2O, only two of the six outer-shell electrons of oxygen are used for this purpose, leaving four electrons which are organized into two non-bonding pairs. The four electron pairs surrounding the oxygen tend to arrange themselves as far from each other as possible in order to minimize repulsions between these clouds of negative charge. This would ordinarily result in a tetrahedral geometry in which the angle between electron pairs (and therefore the H–O–H bond angle) is 109.5°. However, because the two non-bonding pairs remain closer to the oxygen atom, they exert a stronger repulsion against the two covalent bonding pairs, effectively pushing the two hydrogen atoms closer together. The result is a distorted tetrahedral arrangement in which the H–O–H angle is 104.5°.
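The 109.5° figure is not an empirical constant but pure geometry: it is the angle between lines drawn from the centre of a regular tetrahedron to its vertices, arccos(−1/3). A one-line verification in Python:

```python
import math

# Ideal tetrahedral angle: arccos(-1/3)
print(f"{math.degrees(math.acos(-1.0 / 3.0)):.1f} degrees")   # 109.5
```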
Because molecules are smaller than light waves, they cannot be observed directly, and must be "visualized" by alternative means. Computer-generated images of H2O, for example, come from calculations that model the electron distribution in the molecule: the outer envelope shows the effective "surface" of the molecule as defined by the extent of the cloud of negative electric charge created by the ten electrons.
Water has long been known to exhibit many physical properties that distinguish it from other small molecules of comparable mass. Chemists refer to these as the "anomalous" properties of water, but they are by no means mysterious; all are entirely predictable consequences of the way the size and nuclear charge of the oxygen atom conspire to distort the electronic charge clouds of the atoms of other elements when these are chemically bonded to the oxygen.
Water is one of the few known substances whose solid form is less dense than the liquid. A plot of the volume of water against temperature shows a large increase (about 9%) on freezing, which is why ice floats on water and why pipes burst when they freeze. The expansion between –4° and 0° is due to the formation of larger hydrogen-bonded aggregates. Above 4°, ordinary thermal expansion sets in as vibrations of the O–H bonds become more vigorous, tending to shove the molecules farther apart.
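The anomaly is easy to see in rounded handbook density figures, tabulated in the sketch below; the roughly 9% volume increase on freezing quoted above falls straight out of the ice and cold-water values.

```python
# Approximate handbook densities (kg/m^3), rounded reference figures.
density = {
    "ice, 0 C":     916.7,
    "liquid, 0 C":  999.84,
    "liquid, 4 C":  999.97,   # density maximum near 4 degrees C
    "liquid, 10 C": 999.70,
    "liquid, 25 C": 997.05,
}

for state, rho in density.items():
    print(f"{state:>12}: {rho:7.2f} kg/m^3")

expansion = (density["liquid, 0 C"] / density["ice, 0 C"] - 1) * 100
print(f"Volume increase on freezing: ~{expansion:.0f}%")   # ~9%
```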
The nature of liquid water and how the H2O molecules within it are organized and interact are questions that have attracted the interest of chemists for many years. There is probably no liquid that has received more intensive study, and there is now a huge literature on this subject.
The following facts are well established:
H2O molecules attract each other through the special type of dipole-dipole interaction known as hydrogen bonding
a hydrogen-bonded cluster in which four H2Os are located at the corners of an imaginary tetrahedron is an especially favorable (low-potential energy) configuration, but...
the molecules undergo rapid thermal motions on a time scale of picoseconds (10⁻¹² second), so the lifetime of any specific clustered configuration will be fleetingly brief.
A variety of techniques including infrared absorption, neutron scattering, and nuclear magnetic resonance have been used to probe the microscopic structure of water. The information garnered from these experiments and from theoretical calculations has led to the development of around twenty "models" that attempt to explain the structure and behavior of water. More recently, computer simulations of various kinds have been employed to explore how well these models are able to predict the observed physical properties of water.

Wednesday, November 18, 2009

THE ATMOSPHERE AND ITS LAYERS

The Earth's atmosphere is a layer of gases surrounding the planet Earth that is retained by Earth's gravity. The atmosphere protects life on Earth by absorbing ultraviolet solar radiation, warming the surface through heat retention (greenhouse effect), and reducing temperature extremes between day and night. Dry air contains roughly (by volume) 78.08% nitrogen, 20.95% oxygen, 0.93% argon, 0.038% carbon dioxide, and trace amounts of other gases. Air also contains a variable amount of water vapor, on average around 1%.
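One quick consequence of these volume fractions: the mean molar mass of dry air, computed below from the quoted composition and standard molar masses (the tiny remainder of trace gases is neglected).

```python
# Volume (mole) fraction and molar mass (g/mol) for the major gases.
composition = {
    "N2":  (0.7808,  28.013),
    "O2":  (0.2095,  31.999),
    "Ar":  (0.0093,  39.948),
    "CO2": (0.00038, 44.010),
}

mean_molar_mass = sum(x * m for x, m in composition.values())
print(f"Mean molar mass of dry air: {mean_molar_mass:.2f} g/mol")  # ~28.96
```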
The atmosphere has a mass of about five quintillion (5×10¹⁸) kg, three quarters of which is within about 11 km (6.8 mi; 36,000 ft) of the surface. The atmosphere becomes thinner and thinner with increasing altitude, with no definite boundary between the atmosphere and outer space. An altitude of 120 km (75 mi) is where atmospheric effects become noticeable during atmospheric reentry of spacecraft. The Kármán line, at 100 km (62 mi), also is often regarded as the boundary between atmosphere and outer space.
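The thinning with altitude is roughly exponential. A crude isothermal model with a scale height of about 8.5 km (an assumed round figure, not a measured profile) already shows why most of the mass sits in the lowest ~11 km:

```python
import math

# Rough isothermal-atmosphere model: p(h) ~ p0 * exp(-h / H).
P0 = 101_325.0     # sea-level pressure, Pa
H = 8_500.0        # approximate scale height, metres (assumed)

for h_km in (0, 11, 50, 100):
    p = P0 * math.exp(-h_km * 1000.0 / H)
    print(f"{h_km:>4} km: {p / P0 * 100:6.2f}% of sea-level pressure")
```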
Air is mainly composed of nitrogen, oxygen, and argon, which together constitute the "major gases" of the atmosphere. The remaining gases often are referred to as "trace gases," among which are the greenhouse gases such as water vapor, carbon dioxide, methane, nitrous oxide, and ozone. Filtered air includes trace amounts of many other chemical compounds. Many natural substances may be present in tiny amounts in an unfiltered air sample, including dust, pollen and spores, sea spray, volcanic ash, and meteoroids. Various industrial pollutants also may be present, such as chlorine (elementary or in compounds), fluorine (in compounds), elemental mercury, and sulfur (in compounds such as sulfur dioxide [SO2]).

Layers of the atmosphere

Earth's atmosphere can be divided into five main layers. These layers are mainly determined by whether temperature increases or decreases with altitude. From lowest to highest, these layers are:
1> Troposphere
The troposphere begins at the surface and extends to between 7 km (23,000 ft) at the poles and 17 km (56,000 ft) at the equator, with some variation due to weather. The troposphere is mostly heated by transfer of energy from the surface, so on average the lowest part of the troposphere is warmest and temperature decreases with altitude. This promotes vertical mixing (hence the origin of its name in the Greek word "τροπή", trope, meaning turn or overturn). The troposphere contains roughly 80% of the mass of the atmosphere. The tropopause is the boundary between the troposphere and stratosphere.
2> Stratosphere
The stratosphere extends from the tropopause to about 51 km (32 mi; 170,000 ft). Temperature increases with height, which restricts turbulence and mixing. The stratopause, which is the boundary between the stratosphere and mesosphere, typically is at 50 to 55 km (31 to 34 mi; 160,000 to 180,000 ft). The pressure here is about 1/1000th of that at sea level.
3> Mesosphere
The mesosphere extends from the stratopause to 80–85 km (50–53 mi; 260,000–280,000 ft). It is the layer where most meteors burn up upon entering the atmosphere. Temperature decreases with height in the mesosphere. The mesopause, the temperature minimum that marks the top of the mesosphere, is the coldest part of Earth's atmosphere and has an average temperature around −100 °C (−148.0 °F; 173.1 K).
4> Thermosphere
Temperature increases with height in the thermosphere from the mesopause up to the thermopause, then is constant with height. The temperature of this layer can rise to 1,500 °C (2,730 °F), though the gas molecules are so far apart that temperature in the usual sense is not well defined. The International Space Station orbits in this layer, between 320 and 380 km (200 and 240 mi). The top of the thermosphere is the bottom of the exosphere, called the exobase. Its height varies with solar activity and ranges from about 350–800 km (220–500 mi; 1,100,000–2,600,000 ft).
5> Exosphere
The outermost layer of Earth's atmosphere extends from the exobase upward. Here the particles are so far apart that they can travel hundreds of km without colliding with one another. Since the particles rarely collide, the atmosphere no longer behaves like a fluid. These free-moving particles follow ballistic trajectories and may migrate into and out of the magnetosphere or the solar wind. The exosphere is mainly composed of hydrogen and helium.

POLYMERIZATION PROCESS

In polymer chemistry, polymerization is a process of reacting monomer molecules together in a chemical reaction to form three-dimensional networks or polymer chains. There are many forms of polymerization, and different systems exist to categorize them. In chemical compounds, polymerization occurs via a variety of reaction mechanisms that vary in complexity due to the functional groups present in the reacting compounds and their inherent steric effects (described by VSEPR theory). In more straightforward polymerizations, alkenes, which are relatively stable due to σ bonding between carbon atoms, form polymers through relatively simple radical reactions; in contrast, more complex reactions, such as those that involve substitution at the carbonyl group, require more complex synthesis due to the way in which the reacting molecules polymerize.
As alkenes can be polymerized in somewhat straightforward radical reactions, they form useful compounds such as polyethylene and polyvinyl chloride (PVC), which are produced in high tonnages each year due to their usefulness in manufacturing commercial products such as piping, insulation and packaging. Polymers such as PVC are generally referred to as "homopolymers", as they consist of repeated long chains of the same monomer unit, whereas polymers that consist of more than one type of monomer unit are referred to as "copolymers".
Other monomer units, such as formaldehyde hydrates or simple aldehydes, are able to polymerize at quite low temperatures (around −80 °C) to form trimers, molecules consisting of three monomer units, which can cyclize to form ring structures, or undergo further reactions to form tetramers, compounds of four monomer units. Such small polymers are referred to as oligomers. Generally, because formaldehyde is an exceptionally reactive electrophile, it allows nucleophilic addition of hemiacetal intermediates, which are generally short-lived and relatively unstable "mid-stage" compounds that react with other molecules present to form more stable polymeric compounds.
Polymerization that is not sufficiently moderated and proceeds at a fast rate can be very hazardous. This phenomenon is known as hazardous polymerization, and can cause fires and explosions.
Chain-growth polymerization (or addition polymerization) involves the linking together of molecules incorporating double or triple chemical bonds. These unsaturated monomers (the identical molecules that make up the polymers) have extra internal bonds that are able to break and link up with other monomers to form the repeating chain. Chain-growth polymerization is involved in the manufacture of polymers such as polyethylene, polypropylene, and polyvinyl chloride (PVC). A special case of chain-growth polymerization leads to living polymerization.
In the radical polymerization of ethylene, its pi bond is broken, and the two electrons rearrange to create a new propagating center like the one that attacked it. The form this propagating center takes depends on the specific type of addition mechanism. There are several mechanisms through which this can be initiated. The free radical mechanism was one of the first methods to be used. Free radicals are very reactive atoms or molecules that have unpaired electrons. Taking the polymerization of ethylene as an example, the free radical mechanism can be divided into three stages: chain initiation, chain propagation, and chain termination.
Free radical addition polymerization of ethylene must take place at high temperatures and pressures, approximately 300 °C and 2000 atm. While most other free radical polymerizations do not require such extreme temperatures and pressures, they do tend to lack control. One effect of this lack of control is a high degree of branching. Also, as termination occurs randomly when two chains collide, it is impossible to control the length of individual chains. A newer method of polymerization, similar to free radical polymerization but allowing more control, involves the Ziegler–Natta catalyst, which offers particular control over polymer branching.
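To illustrate why random termination makes individual chain lengths uncontrollable, here is a toy Monte Carlo sketch, not a chemical model: each chain adds monomers until a random per-step termination event, using an arbitrary assumed termination probability.

```python
import random

random.seed(42)   # reproducible illustrative run

def grow_chain(p_termination=0.001):
    """Return the length of one chain grown until random termination.
    p_termination is an arbitrary assumed per-step probability."""
    length = 1                        # chain starts at the initiated monomer
    while random.random() > p_termination:
        length += 1                   # propagation: add one monomer
    return length                     # termination: chain stops growing

lengths = [grow_chain() for _ in range(10_000)]
print(f"Mean chain length:  {sum(lengths) / len(lengths):.0f}")   # ~1000
print(f"Shortest / longest: {min(lengths)} / {max(lengths)}")     # wide spread
```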
Other forms of chain growth polymerization include cationic addition polymerization and anionic addition polymerization. While not yet used to a large extent in industry, due to stringent reaction conditions such as the exclusion of water and oxygen, these methods provide ways to polymerize some monomers, such as propylene, that cannot be polymerized by free radical methods. Cationic and anionic mechanisms are also more ideally suited for living polymerizations, although free radical living polymerizations have also been developed.