Biochemistry and Biophysics



                                           PAPER – IV – Biochemistry and Biophysics





Syllabus
Unit - I

Amino Acids: Structure, Classification of amino acids and properties.  Proteins:  Classification of Proteins based on the chemical structure, properties

Unit - II

Carbohydrates: Structure, classification and properties of functional groups.  Lipids: Classification, properties – Saturated and unsaturated fatty acids – Cholesterol.

Unit - III

Enzymes: Classification, properties of enzymes, Mode of enzyme action, enzyme substrate compounds. Nucleic acids: DNA structure and properties, DNA synthesis- Mechanism of replication- nucleotides – Different types of RNA – mRNA and rRNA, tRNA.


Unit - IV

Principle and application of Chromatography (Paper, thin-layer, column and GLC), Centrifugation (RPM and G, Ultra centrifugation), Spectroscopic techniques (UV, visible spectroscopy, X-ray crystallography, NMR, IR, fluorescence & atomic absorption),


Unit - V

Biomedical Instrumentation:  Electrophoresis – Principle, Instrumentation, Applications; PCR Technique; Applications. Isotopes and their importance (GM counters & Scintillation counting).






UNIT – I
1.    Amino acids
 Amino acids are molecules containing an amine group, a carboxylic acid group and a side chain that varies between different amino acids. These molecules contain the key elements of carbon, hydrogen, oxygen, and nitrogen. These molecules are particularly important in biochemistry, where this term refers to alpha-amino acids with the general formula H2NCHRCOOH, where R is an organic substituent. In an alpha amino acid, the amino and carboxylate groups are attached to the same carbon atom, which is called the α–carbon. The various alpha amino acids differ in which side chain (R group) is attached to their alpha carbon. These side chains can vary in size from just a hydrogen atom in glycine, to a methyl group in alanine, through to a large heterocyclic group in tryptophan.
Amino acids are critical to life, and have many functions in metabolism. One particularly important function is as the building blocks of proteins, which are linear chains of amino acids. Every protein is chemically defined by this primary structure, its unique sequence of amino acid residues, which in turn define the three-dimensional structure of the protein. Just as the letters of the alphabet can be combined to form an almost endless variety of words, amino acids can be linked together in varying sequences to form a vast variety of proteins. Amino acids are also important in many other biological molecules, such as forming parts of coenzymes, as in S-adenosylmethionine, or as precursors for the biosynthesis of molecules such as heme. Due to this central role in biochemistry, amino acids are very important in nutrition. Amino acids are commonly used in food technology and industry.
1.1 Structure of amino acids
In the structure shown, R represents a side chain specific to each amino acid. The carbon atom next to the carbonyl group is called the α–carbon, and amino acids with a side chain bonded to this carbon are referred to as alpha amino acids. These are the most common form found in nature. In the alpha amino acids, the α–carbon is a chiral carbon atom, with the exception of glycine. In amino acids that have a carbon chain attached to the α–carbon (such as lysine), the carbons are labeled in order as α, β, γ, δ, and so on. In some amino acids, the amine group is attached to the β or γ-carbon, and these are therefore referred to as beta or gamma amino acids.

The general structure of an alpha amino acid.
Amino acids are usually classified by the properties of their side chain into four groups. The side chain can make an amino acid a weak acid or a weak base, and a hydrophile if the side chain is polar or a hydrophobe if it is nonpolar. The chemical structures of the twenty-two standard amino acids, along with their chemical properties, are described more fully in the article on these proteinogenic amino acids.
The phrase "branched-chain amino acids" or BCAA refers to the amino acids having aliphatic side chains that are non-linear; these are leucine, isoleucine, and valine. Proline is the only proteinogenic amino acid whose side group links to the α-amino group and, thus, is also the only proteinogenic amino acid containing a secondary amine at this position. Chemically, proline is therefore an imino acid, since it lacks a primary amino group, although it is still classed as an amino acid in the current biochemical nomenclature, and may also be called an "N-alkylated alpha-amino acid".

Isomerism
Of the standard α-amino acids, all but glycine can exist in either of two optical isomers, called L- or D-amino acids, which are mirror images of each other. While L-amino acids represent all of the amino acids found in proteins during translation in the ribosome, D-amino acids are found in some proteins produced by enzymatic posttranslational modification after translation and translocation to the endoplasmic reticulum, as in exotic sea-dwelling organisms such as cone snails. They are also abundant components of the peptidoglycan cell walls of bacteria, and D-serine may act as a neurotransmitter in the brain. The L and D convention for amino acid configuration refers not to the optical activity of the amino acid itself, but rather to the optical activity of the isomer of glyceraldehyde from which that amino acid can theoretically be synthesized (D-glyceraldehyde is dextrorotatory; L-glyceraldehyde is levorotatory). Alternatively, the (S) and (R) designators are used to indicate the absolute stereochemistry. Almost all of the amino acids in proteins are (S) at the α-carbon, with cysteine being (R) and glycine non-chiral. Cysteine is unusual since it has a sulfur atom at the first position in its side chain, which has a larger atomic mass than the groups attached to the α-carbon in the other standard amino acids, hence the (R) instead of (S).

An amino acid in its (1) unionized and (2) zwitterionic forms
Zwitterions
Amino acids have both amine and carboxylic acid functional groups and are therefore both an acid and a base at the same time. At a certain pH, known as the isoelectric point, an amino acid has no overall charge, since the number of protonated ammonium groups (positive charges) and deprotonated carboxylate groups (negative charges) is equal. The amino acids all have different isoelectric points. The ion produced at the isoelectric point carries both a positive and a negative charge and is known as a zwitterion, from the German word Zwitter meaning "hermaphrodite" or "hybrid". Amino acids can exist as zwitterions in solids and in polar solutions such as water, but not in the gas phase. Zwitterions have minimal solubility at their isoelectric point, so an amino acid can be isolated by precipitating it from water after adjusting the pH to its particular isoelectric point.
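The net-charge behavior around the isoelectric point can be sketched numerically. A minimal example, assuming the commonly cited pKa values for glycine (about 2.34 for the carboxyl group and 9.60 for the amino group) and applying the Henderson–Hasselbalch relation to each ionizable group:

```python
# Sketch of how the isoelectric point arises for a simple amino acid
# with no ionizable side chain. The pKa values are the textbook ones
# for glycine and are illustrative assumptions.

def net_charge(pH, pKa_carboxyl=2.34, pKa_amino=9.60):
    """Average net charge of the amino acid at a given pH."""
    # -COOH -> -COO-  contributes between 0 and -1
    carboxyl = -1.0 / (1.0 + 10 ** (pKa_carboxyl - pH))
    # -NH3+ -> -NH2   contributes between +1 and 0
    amino = +1.0 / (1.0 + 10 ** (pH - pKa_amino))
    return carboxyl + amino

# With only two ionizable groups, the isoelectric point is the
# average of the two pKa values:
pI = (2.34 + 9.60) / 2          # 5.97 for glycine

print(round(net_charge(1.0), 2))   # strongly acidic: net positive (~ +0.96)
print(round(net_charge(pI), 2))    # at the pI: net charge 0 (zwitterion)
print(round(net_charge(12.0), 2))  # strongly basic: net negative (~ -1)
```

Adjusting the pH of a solution to the pI in this way is exactly why precipitation at the isoelectric point works: with zero average charge, electrostatic repulsion between molecules is minimal.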
1.2 Classification of amino acids
Amino acids can be classified in different ways: based on polarity, structure, nutritional requirement, metabolic fate, etc. The most commonly used classification is based on polarity.
Based on polarity, amino acids are classified into four groups.
Non-polar amino acids
They have an equal number of amino and carboxyl groups and are neutral. These amino acids are hydrophobic and have no charge on the 'R' group. The amino acids in this group are alanine, valine, leucine, isoleucine, phenylalanine, glycine, tryptophan, methionine and proline.



Polar amino acids with no charge
These amino acids do not have any charge on the 'R' group. They participate in the hydrogen bonding of protein structure. The amino acids in this group are serine, threonine, tyrosine, cysteine, glutamine and asparagine.


Polar amino acids with positive charge
Polar amino acids with a positive charge have more amino groups than carboxyl groups, making them basic. The amino acids which have a positive charge on the 'R' group are placed in this category. They are lysine, arginine and histidine.



Polar amino acids with negative charge
Polar amino acids with a negative charge have more carboxyl groups than amino groups, making them acidic. The amino acids which have a negative charge on the 'R' group are placed in this category. They are called dicarboxylic monoamino acids. They are aspartic acid and glutamic acid.
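The four polarity groups described above can be collected into a small lookup table; a sketch in Python, with the grouping taken directly from the text and written in standard three-letter residue codes:

```python
# The four polarity groups from the classification above.
POLARITY_GROUPS = {
    "non-polar": ["Ala", "Val", "Leu", "Ile", "Phe", "Gly", "Trp", "Met", "Pro"],
    "polar, uncharged": ["Ser", "Thr", "Tyr", "Cys", "Gln", "Asn"],
    "polar, positive (basic)": ["Lys", "Arg", "His"],
    "polar, negative (acidic)": ["Asp", "Glu"],
}

def classify(residue):
    """Return the polarity group of a three-letter residue code."""
    for group, members in POLARITY_GROUPS.items():
        if residue in members:
            return group
    raise ValueError(f"unknown residue: {residue}")

print(classify("Leu"))   # non-polar
print(classify("Asp"))   # polar, negative (acidic)

# Sanity check: the four groups together cover all 20 standard residues.
assert sum(len(v) for v in POLARITY_GROUPS.values()) == 20
```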

1.3 Physico-chemical properties of Amino acids
The 20 naturally occurring amino acids can be divided into several groups based on their properties. Important factors are charge, hydrophilicity or hydrophobicity, size and functional groups. These properties are important for protein structure and protein–protein interactions. Water-soluble proteins tend to have their hydrophobic residues (Leu, Ile, Val, Phe and Trp) buried in the middle of the protein, whereas hydrophilic side chains are exposed to the aqueous solvent. Integral membrane proteins tend to have outer rings of exposed hydrophobic amino acids that anchor them into the lipid bilayer. In cases part-way between these two extremes, some peripheral membrane proteins have a patch of hydrophobic amino acids on their surface that locks onto the membrane. Similarly, proteins that have to bind to positively charged molecules have surfaces rich in negatively charged amino acids like glutamate and aspartate, while proteins binding to negatively charged molecules have surfaces rich in positively charged side chains like lysine and arginine. There are different hydrophobicity scales of amino acid residues.
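Hydrophobicity scales are typically applied by averaging residue scores over a sliding window along the sequence, which is how buried or membrane-anchoring stretches are spotted. A sketch using the widely cited Kyte–Doolittle values; the toy sequence and the window length are illustrative assumptions:

```python
# Kyte-Doolittle hydropathy values: positive = hydrophobic,
# negative = hydrophilic.
KYTE_DOOLITTLE = {
    "I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
    "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
    "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9, "R": -4.5,
}

def hydropathy_profile(sequence, window=5):
    """Mean hydropathy over a sliding window along the sequence."""
    scores = [KYTE_DOOLITTLE[aa] for aa in sequence]
    return [
        sum(scores[i:i + window]) / window
        for i in range(len(scores) - window + 1)
    ]

# A made-up toy sequence: hydrophilic ends, hydrophobic core.
profile = hydropathy_profile("DKESLLIVVLAFGSDKE", window=5)
peak = max(profile)
print(round(peak, 2))  # ~4.1: the hydrophobic core gives the highest windows
```

In practice, longer windows (around 19 residues) and a threshold on the window average are used to flag candidate transmembrane segments.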
Some amino acids have special properties such as cysteine, that can form covalent disulfide bonds to other cysteine residues, proline that forms a cycle to the polypeptide backbone, and glycine that is more flexible than other amino acids.
Many proteins undergo a range of posttranslational modifications, in which additional chemical groups are attached to the amino acids in proteins. Some modifications can produce hydrophobic lipoproteins or hydrophilic glycoproteins. These types of modification allow the reversible targeting of a protein to a membrane. For example, the addition and removal of the fatty acid palmitic acid to cysteine residues in some signaling proteins causes the proteins to attach to and then detach from cell membranes.
1.4. Proteins
Proteins (also known as polypeptides) are organic compounds made of amino acids arranged in a linear chain and folded into a globular form. The amino acids in a polymer are joined together by the peptide bonds between the carboxyl and amino groups of adjacent amino acid residues. The sequence of amino acids in a protein is defined by the sequence of a gene, which is encoded in the genetic code. Like other biological macromolecules such as polysaccharides and nucleic acids, proteins are essential parts of organisms and participate in virtually every process within cells. Many proteins are enzymes that catalyze biochemical reactions and are vital to metabolism. Proteins also have structural or mechanical functions, such as actin and myosin in muscle and the proteins in the cytoskeleton, which form a system of scaffolding that maintains cell shape. Other proteins are important in cell signaling, immune responses, cell adhesion, and the cell cycle. Proteins are also necessary in animals' diets, since animals cannot synthesize all the amino acids they need and must obtain essential amino acids from food. Through the process of digestion, animals break down ingested protein into free amino acids that are then used in metabolism.
Proteins were first described by the Dutch chemist Gerardus Johannes Mulder and named by the Swedish chemist Jöns Jacob Berzelius in 1838. The first protein to be sequenced was insulin, by Frederick Sanger, who won the Nobel Prize for this achievement in 1958.
Protein classification
A. Simple Proteins
1.    Albumins: blood (serum albumin); milk (lactalbumin); egg white (ovalbumin); lentils (legumelin); kidney beans (phaseolin); wheat (leucosin). Globular protein; soluble in water and dilute salt solution; precipitated by saturation with ammonium sulfate solution; coagulated by heat; found in plant and animal tissues.
2.    Globulins: blood (serum globulins); muscle (myosin); potato (tuberin); Brazil nuts (excelsin); hemp (edestin); lentils (legumin). Globular protein; sparingly soluble in water; soluble in neutral solutions; precipitated by dilute ammonium sulfate and coagulated by heat; distributed in both plant and animal tissues.
3.    Glutelins: wheat (glutenin); rice (oryzenin). Insoluble in water and dilute salt solutions; soluble in dilute acids; found in grains and cereals.
4.    Prolamines: wheat and rye (gliadin); corn (zein); rye (secalin); barley (hordein). Insoluble in water and absolute alcohol; soluble in 70% alcohol; high in amide nitrogen and proline; occur in grain seeds.
5.    Protamines: sturgeon (sturine); mackerel (scombrine); salmon (salmine); herring (clupeine). Soluble in water; not coagulated by heat; strongly basic; high in arginine; associated with DNA; occur in sperm cells.
6.    Histones: Thymus gland; pancreas; nucleoproteins (nucleohistone). Soluble in water, salt solutions, and dilute acids; insoluble in ammonium hydroxide; yields large amounts of lysine and arginine; combined with nucleic acids within cells.
7.    Scleroproteins: Connective tissues and hard tissues. Fibrous protein; insoluble in all solvents and resistant to digestion.
a.    Collagen: connective tissues, bones, cartilage, and gelatin. Resistant to digestive enzymes but converted to digestible gelatin by boiling with water, acid, or alkali; high in hydroxyproline.
b.    Elastin: Ligaments, tendons, and arteries. Similar to collagen but cannot be converted to gelatin.
c.    Keratin: Hair, nails, hooves, horns, and feathers. Partially resistant to digestive enzymes; contains large amounts of sulfur, as cystine.
B.    Conjugated Proteins
1.    Nucleoproteins: cytoplasm of cells (ribonucleoprotein); nuclei of cells (deoxyribonucleoprotein); viruses and bacteriophages. Contain nucleic acids, nitrogen, and phosphorus. Present in chromosomes and in all living forms as a combination of protein with either RNA or DNA.
2.    Mucoprotein: saliva (mucin); egg white (ovomucoid). Proteins combined with amino sugars, sugar acids, and sulfates.
3.    Glycoprotein: bone (osseomucoid); tendons (tendomucoid); cartilage (chondromucoid). If containing more than 4% hexosamine, classed as mucoproteins; if less than 4%, glycoproteins.
4.    Phosphoproteins: milk (casein); egg yolk (ovovitellin). Phosphoric acid joined in ester linkage to protein.
5.    Chromoproteins: hemoglobin; myoglobin; flavoproteins; respiratory pigments; cytochromes. Protein compounds with such nonprotein pigments as heme; colored proteins.
6.    Lipoproteins: serum lipoprotein; brain, nerve tissues, milk, and eggs. Water-soluble protein conjugated with lipids; found dispersed widely in all cells and all living forms.
7.    Metallo proteins: ferritin; carbonic anhydrase; ceruloplasmin. Proteins combined with metallic atoms that are not parts of a nonprotein prosthetic group.
C.    Derived Proteins
1.    Proteans: edestan (from edestin) and myosan (from myosin). Result from the short action of acids or enzymes; insoluble in water.
2.    Proteoses: intermediate products of protein digestion. Soluble in water; not coagulated by heat; precipitated by saturated ammonium sulfate; result from partial digestion of protein by pepsin or trypsin.
3.    Peptones: intermediate products of protein digestion. Same properties as proteoses except that they cannot be salted out; of smaller molecular weight than proteoses.
4.    Peptides: intermediate products of protein digestion. Two or more amino acids joined by a peptide linkage; hydrolyzed to individual amino acids.
 Protein structure
There are four distinct levels of protein structure.
Primary structure
The primary structure refers to the sequence of the different amino acids of the peptide or protein. The primary structure is held together by covalent peptide bonds, which are made during the process of protein biosynthesis, or translation. The two ends of the polypeptide chain are referred to as the carboxyl terminus (C-terminus) and the amino terminus (N-terminus), based on the nature of the free group at each extremity. Counting of residues always starts at the N-terminal end (NH2 group), which is the end whose amino group is not involved in a peptide bond. The primary structure of a protein is determined by the gene corresponding to the protein. A specific sequence of nucleotides in DNA is transcribed into mRNA, which is read by the ribosome in a process called translation. The sequence of a protein is unique to that protein and defines the structure and function of the protein. The sequence of a protein can be determined by methods such as Edman degradation or tandem mass spectrometry. Often, however, it is read directly from the sequence of the gene using the genetic code. Post-translational modifications, such as disulfide formation, phosphorylation and glycosylation, are usually also considered a part of the primary structure and cannot be read from the gene.
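The gene-to-sequence reading described above can be sketched as a toy translation function: the coding sequence is read three bases (one codon) at a time from the start codon, and each codon maps to one residue. The codon table below is only a small subset of the real 64-entry genetic code, chosen for this example:

```python
# A handful of codons from the standard genetic code (a real table
# has 64 entries).
CODON_TABLE = {
    "ATG": "Met", "AAA": "Lys", "GAA": "Glu", "TGT": "Cys",
    "GGC": "Gly", "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(coding_dna):
    """Translate a coding DNA sequence into residues, N- to C-terminus."""
    peptide = []
    for i in range(0, len(coding_dna) - 2, 3):
        residue = CODON_TABLE[coding_dna[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

# Residue counting starts at the N-terminus (Met here), matching the
# numbering convention described above.
print(translate("ATGAAAGGCTGTTAA"))  # ['Met', 'Lys', 'Gly', 'Cys']
```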



Secondary structure
Secondary structure refers to highly regular local sub-structures. Two main types of secondary structure, the alpha helix and the beta strand, were suggested in 1951 by Linus Pauling and coworkers. These secondary structures are defined by patterns of hydrogen bonds between the main-chain peptide groups. They have a regular geometry, being constrained to specific values of the dihedral angles ψ and φ on the Ramachandran plot. Both the alpha helix and the beta-sheet represent a way of saturating all the hydrogen bond donors and acceptors in the peptide backbone. Some parts of the protein are ordered but do not form any regular structures. They should not be confused with random coil, an unfolded polypeptide chain lacking any fixed three-dimensional structure. Several sequential secondary structures may form a "supersecondary unit".


Tertiary structure
Tertiary structure refers to three-dimensional structure of a single protein molecule. The alpha-helices and beta-sheets are folded into a compact globule. The folding is driven by the non-specific hydrophobic interactions (the burial of hydrophobic residues from water), but the structure is stable only when the parts of a protein domain are locked into place by specific tertiary interactions, such as salt bridges, hydrogen bonds, and the tight packing of side chains and disulfide bonds. The disulfide bonds are extremely rare in cytosolic proteins, since the cytosol is generally a reducing environment.

Quaternary structure
Quaternary structure is a larger assembly of several protein molecules or polypeptide chains, usually called subunits in this context. The quaternary structure is stabilized by the same non-covalent interactions and disulfide bonds as the tertiary structure. Complexes of two or more polypeptides (i.e. multiple subunits) are called multimers. Specifically, a complex is called a dimer if it contains two subunits, a trimer if it contains three subunits and a tetramer if it contains four subunits. The subunits are frequently related to one another by symmetry operations, such as a 2-fold axis in a dimer. Multimers made up of identical subunits are referred to with a prefix of "homo-" (e.g. a homotetramer) and those made up of different subunits are referred to with a prefix of "hetero-" (e.g. a heterotetramer, such as the two alpha and two beta chains of hemoglobin). Many proteins do not have quaternary structure and function as monomers.
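The naming rules above (the -mer name from the subunit count, and the homo-/hetero- prefix from subunit identity) can be expressed as a small helper; a sketch, with the function name chosen for illustration:

```python
def multimer_name(subunits):
    """Name a complex from a list of subunit identifiers."""
    if len(subunits) < 2:
        return "monomer"
    # -mer name from the subunit count
    names = {2: "dimer", 3: "trimer", 4: "tetramer"}
    base = names.get(len(subunits), f"{len(subunits)}-mer")
    # homo- if all subunits are identical, hetero- otherwise
    prefix = "homo" if len(set(subunits)) == 1 else "hetero"
    return prefix + base

# Hemoglobin: two alpha and two beta chains.
print(multimer_name(["alpha", "alpha", "beta", "beta"]))  # heterotetramer
print(multimer_name(["A", "A"]))                          # homodimer
```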

Properties of protein
The following are the properties of proteins:
Solubility in water
The relationship of proteins with water is complex. The secondary structure of proteins depends largely on the interaction of peptide bonds with water through hydrogen bonds. Hydrogen bonds also form within the protein (stabilizing the alpha and beta structures) and between its hydrophilic side chains and water. Globular proteins are generally more soluble than fibrous, helical structures. At the level of tertiary structure, water drives the orientation of the hydrophilic chains and side groups toward the outside of the molecule, while the hydrophobic chains and side groups tend to associate with each other within the molecule (the hydrophobic effect). The solubility of proteins in an aqueous solution containing salts depends on two opposing effects: electrostatic interactions ("salting in") and hydrophobic interactions ("salting out").
Denaturation
A protein is denatured when its specific three-dimensional conformation is changed by breaking some bonds without breaking its primary structure. It may be, for example, the disruption of α helix area. The denaturation may be reversible or irreversible. It causes a total or partial loss of biological activity. This is an important property of protein.
Denaturing agents are numerous:
•    Physical agents: heat, radiation, pH;
•    Chemical agents: urea solutions, which form new hydrogen bonds in the protein; organic solvents; detergents.
****************
UNIT – II
2. Carbohydrates
Introduction
Before embarking on a study of carbohydrates—their role in the body, their sources, etc., we will begin by highlighting the importance of carbohydrates, defining what carbohydrates are and learning how they are formed, as well as glimpsing at a brief history of carbohydrates in the human diet.
The Importance of Carbohydrates
The process of digestion could not occur without the energy provided by carbohydrates. Without carbohydrates we would not be able to think or move and our heart couldn't beat.
Whether it is digestion or circulation, thinking or walking, all life activities are dependent upon carbohydrates. When insufficient carbohydrates are available from the diet, the body converts fat reserves to carbohydrates for its use, and amino acids are utilized as carbohydrates instead of being used to make body protein.
What Are Carbohydrates?
Carbohydrates provide fuel, or energy, for the human body. These organic (carbon-containing) compounds are an integral part of both plant and animal life, and, as stated above, life as we know it could not exist without them.
Carbohydrates are made up of three elements: carbon, hydrogen and oxygen (hence the name carbohydrates). As you will learn in a later lesson, fats are also composed of carbon, hydrogen and oxygen, but they have less oxygen and more carbon and hydrogen than carbohydrates.
Carbohydrates, along with proteins and fats, comprise the major components of living matter and are used for maintenance of cellular functional activities and as reserve and structural materials for cells. Because they are the primary source of energy for the animal kingdom, carbohydrates are particularly important in a study of nutritional science.
How Carbohydrates Are Formed
Carbohydrates are formed by green plants in the process of photosynthesis. In photosynthesis, plant chlorophyll, plant enzymes, sunlight, carbon dioxide from the air, and mineralized water from the soil combine and, in a complicated process, synthesize carbohydrates. Humans obtain their carbohydrate needs most efficiently from the plant world.
2.1 Classification
Carbohydrates, also known as saccharides, are classified according to the number of single carbohydrate molecules in each chemical structure. Carbohydrate compounds having just one carbohydrate molecule are called monosaccharides; compounds with two carbohydrate molecules are called disaccharides; and those compounds containing more than two carbohydrate molecules are named polysaccharides. All carbohydrates either are monosaccharides or can be hydrolyzed (broken down) into two or more monosaccharides.
For further understanding of these different classifications of carbohydrates, the monosaccharides and disaccharides can be grouped together and compared with the polysaccharides. This can be done because monosaccharides and disaccharides have certain things in common.
For one, they are both water soluble. In addition, they have a sweet taste and a crystalline structure. The monosaccharides and disaccharides are called sugars and all share the suffix, -ose, meaning sugar.
Polysaccharides, in contrast to mono- and disaccharides, are insoluble in water, do not taste sweet and do not form crystals. Also, they do not share a suffix and have no group name (such as sugars, in the case of mono- and disaccharides). They are sometimes called starches, but this is technically incorrect because there are many other classifications of polysaccharides besides starches (cellulose, glycogen and dextrin among them).

2.1.1 Monosaccharides
These are the only sugars that can be absorbed and utilized by the body. Disaccharides and polysaccharides must be ultimately broken down into monosaccharides in the digestive process known as hydrolysis. Only then can they be utilized by the body. Three monosaccharides are particularly important in the study of nutritional science: glucose, fructose and galactose.
Glucose (also known as dextrose or grape sugar)
This monosaccharide is the most important carbohydrate in human nutrition because it is the one that the body uses directly to supply its energy needs. Glucose is formed from the hydrolysis of di- and polysaccharides, including starch, dextrin, maltose, sucrose and lactose; from the monosaccharide fructose largely during absorption; and from both fructose and galactose in the liver during metabolism.
Glucose is the carbohydrate found in the bloodstream, and it provides an immediate source of energy for the body's cells and tissues. Glucose is also formed when stored body carbohydrate (glycogen) is broken down for use.
In the plant world, glucose is widely distributed. It is found in all plants and in the sap of trees. Fruits and vegetables are wholesome food sources of glucose. It is also present in such unwholesome (to humans) substances as molasses, honey and corn syrup.
Fructose (also known as levulose or fruit sugar)
Fructose, a monosaccharide, is very similar to another monosaccharide, galactose. These two simple sugars share the same chemical formula; however, the arrangements of their chemical groups along the chemical chain differ. Fructose is the sweetest of all the sugars and is found in fruits, vegetables and the nectar of flowers, as well as in the unwholesome (to humans) sweeteners, molasses and honey. In humans, fructose is produced during the hydrolysis of the disaccharide, sucrose.
Galactose
Galactose differs from the other simple sugars, glucose and fructose, in that it does not occur free in nature. It is produced in the body in the digestion of lactose, a disaccharide.
2.1.2  Disaccharides
Disaccharides, on hydrolysis, yield two monosaccharide molecules. Three particular disaccharides warrant discussion in a lesson on nutritional science: sucrose, maltose and lactose.
Sucrose
The disaccharide, sucrose, consists of one molecule of each of two monosaccharides—glucose and fructose. Sucrose is found in fruits and vegetables and is particularly plentiful in sugar beets (roots) and sugarcane (a grass). Refined white and brown sugars are close to 100% sucrose because almost everything else (including the other kinds of sugars present, the vitamins, the minerals and the proteins) has been removed in the refining process. Maple syrup and molasses are, like refined sugars, unwholesome sweeteners; both contain over 50% sucrose. It almost goes without saying that any foods, so-called, containing significant amounts of refined sugar are high in sucrose.
Maltose (also known as malt sugar)
This disaccharide, unlike sucrose, is not consumed in large amounts in the average American diet. It is found in malted cereals, malted milks and sprouted grains. Also, corn syrup is 26 percent maltose and corn sugar is 4 percent maltose. None of these "foods" is wholesome, with perhaps, the exception of sprouted grains.  Maltose occurs in the body as an intermediate product of starch digestion. (Starch is a polysaccharide.) When maltose is hydrolyzed, it yields two molecules of glucose.
Lactose (also known as milk sugar)
This disaccharide is found only in milk. Human milk contains about 4.8 g per 100 ml and cow's milk contains approximately 6.8 g per 100 ml. When lactose is hydrolyzed it yields one unit of the monosaccharide glucose and one unit of the monosaccharide galactose. The enzyme lactase is needed to digest lactose, and this enzyme is not present in most, if any, people over age three. This is one of the many reasons why milk is an unwholesome food for people over three years of age.
2.1.3 Polysaccharides
Like the disaccharides, the polysaccharides cannot be directly utilized by the body. They must first be broken down into monosaccharides, the only sugar form the body can use.  Polysaccharides contain up to 60,000 simple carbohydrate molecules. These carbohydrate molecules are arranged in long chains in either a straight or in a branched structure. There are four polysaccharides that are important in the study of nutritional science: starch, dextrin, glycogen and cellulose.
Starch
Starch is abundant in the plant world and is found in granular form in the cells of plants. Starch granules can be seen under a microscope and they differ in size, shape and markings in various plants. The starch granules of wheat, for example, are oval-shaped; whereas the starch granules of corn are small, rounded and angular.
These starch granules are laid down in the storage organs of plants—in the seeds, tubers, roots and stem pith. They provide a reserve food supply for the plant, sustain the root or tuber through the winter and nourish the growing embryo during germination.
Most starches are a mix of two different molecular structures, amylose and amylopectin. The former has a linear structure and the latter has a branched or bushy structure. The proportion of the two fractions varies according to the species of plant. For example, potato starch and most cereal starches have approximately 15-30% amylose. But the waxy cereal grains, including some varieties of corn as well as rice and grain sorghum, have their starch almost entirely as amylopectin. The starches in green peas and in some sweet corn varieties are mainly amylose.
The polysaccharides, as mentioned earlier, are not water soluble as are the mono- and disaccharides. Though not water soluble, starches can be dispersed in water heated to a certain temperature. The granules swell and gelatinize. When cooled, this gelatin sets to a paste. The jelling characteristics of starches are considered to result from the amylose present, while amylopectin is considered to be responsible for the gummy and cohesive properties of the paste.
Dextrin
There are several "varieties" of this polysaccharide. Dextrins are most commonly consumed in cooked starch foods, as they are obtained from starch by the action of heat. Dextrins are intermediary products of starch digestion, also, and are formed by the action of amylases on starches. They render the disaccharide maltose on hydrolysis.
Glycogen
Glycogen is the reserve carbohydrate in humans. It is to animals as starch is to plants. Glycogen is very similar to amylopectin, having a high molecular weight and branched-chain structures made up of thousands of glucose molecules. The main difference between glycogen and amylopectin is that glycogen has more and shorter branches, resulting in a more compact, bush like molecule with greater solubility and lower viscosity (less stickiness or gumminess).  Glycogen is stored primarily in the liver and muscles of animals. About two-thirds of total body glycogen is stored in the muscles and about one-third is stored in the liver.
Cellulose
Like starch and glycogen, cellulose is composed of thousands of glucose molecules. It comprises over 50% of the carbon in vegetation and is the structural constituent of the cell walls of plants. Cellulose is, therefore, the most abundant naturally-occurring organic substance. It is characterized by its insolubility, its chemical inertness and its physical rigidity. This polysaccharide can be digested only by herbivores such as cows, sheep, horses, etc., as these animals have bacteria in their rumens (stomachs) whose enzyme systems break down cellulose molecules. Humans do not have the enzyme needed to digest cellulose, so it is passed through the digestive tract unchanged.
2.1 Chemical structure and functional properties of carbohydrates
Carbohydrates consist of the elements carbon (C), hydrogen (H) and oxygen (O), with hydrogen atoms generally twice as numerous as the carbon and oxygen atoms (the general formula Cn(H2O)n). Carbohydrates include sugars, starches, cellulose and many other compounds found in living organisms. In their basic form, carbohydrates are simple sugars or monosaccharides. These simple sugars can combine with each other to form more complex carbohydrates. The combination of two simple sugars is a disaccharide. Carbohydrates consisting of two to ten simple sugars are called oligosaccharides, and those with a larger number are called polysaccharides.
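The elemental ratio just described can be checked programmatically. A minimal sketch; the function name is illustrative, and the pattern tested is the standard simple-sugar formula Cn(H2O)n (hydrogen twice the carbon and oxygen counts):

```python
def is_simple_sugar_formula(c, h, o):
    """Return True if the atom counts fit the Cn(H2O)n pattern."""
    return c == o and h == 2 * c

# Glucose, C6H12O6, fits the pattern; deoxyribose, C5H10O4, does not.
print(is_simple_sugar_formula(6, 12, 6))   # True
print(is_simple_sugar_formula(5, 10, 4))   # False
```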
Sugars
Sugars are white crystalline carbohydrates that are soluble in water and generally have a sweet taste.
Monosaccharides are simple sugars
Monosaccharide classifications based on the number of carbons
Number of Carbons    Category Name    Examples
4    Tetrose    Erythrose, Threose
5    Pentose    Arabinose, Ribose, Ribulose, Xylose, Xylulose, Lyxose
6    Hexose    Allose, Altrose, Fructose, Galactose, Glucose, Gulose, Idose, Mannose, Sorbose, Talose, Tagatose
7    Heptose    Sedoheptulose
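The classification table above can be expressed as a simple lookup (a sketch; the dictionary and function names are illustrative):

```python
# Category names follow the table: carbon count -> monosaccharide category.
MONOSACCHARIDE_CATEGORIES = {
    4: "Tetrose",
    5: "Pentose",
    6: "Hexose",
    7: "Heptose",
}

def classify_monosaccharide(n_carbons):
    """Return the category name for a given carbon count."""
    return MONOSACCHARIDE_CATEGORIES.get(n_carbons, "unknown")

print(classify_monosaccharide(5))  # Pentose (e.g. ribose)
print(classify_monosaccharide(6))  # Hexose (e.g. glucose)
```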
Many saccharide structures differ only in the orientation of the hydroxyl groups (-OH). This slight structural difference makes a big difference in the biochemical properties, the organoleptic properties (e.g., taste), and the physical properties such as melting point and specific rotation (the degree to which polarized light is rotated). A chain-form monosaccharide that has a carbonyl group (C=O) on an end carbon, forming an aldehyde group (-CHO), is classified as an aldose. When the carbonyl group is on an inner atom, forming a ketone, it is classified as a ketose.
Tetroses



D-Erythrose    D-Threose
Pentoses





D-Ribose    D-Arabinose    D-Xylose    D-Lyxose
The ring form of ribose is a component of ribonucleic acid (RNA).   Deoxyribose, which is missing oxygen at position 2, is a component of deoxyribonucleic acid (DNA). In nucleic acids, the hydroxyl group attached to carbon number 1 is replaced with nucleotide bases.



Ribose    Deoxyribose
Hexoses
Hexoses, such as the ones illustrated here, have the molecular formula C6H12O6. German chemist Emil Fischer (1852-1919) identified the stereoisomers for these aldohexoses in 1894. He received the 1902 Nobel Prize for chemistry for his work.
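As a quick arithmetic check of the formula C6H12O6, the molar mass of a hexose can be computed from standard atomic weights (hard-coded below; the helper names are illustrative):

```python
# Standard atomic weights (approximate values).
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(formula_counts):
    """Sum element weights times their counts in the formula."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in formula_counts.items())

hexose = {"C": 6, "H": 12, "O": 6}   # C6H12O6
print(round(molar_mass(hexose), 2))  # 180.16
```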





D-Allose    D-Altrose    D-Glucose    D-Mannose






D-Gulose    D-Idose    D-Galactose    D-Talose
Structures that have opposite configurations of a hydroxyl group at only one position, such as glucose and mannose, are called epimers. Glucose, also called dextrose, is the most widely distributed sugar in the plant and animal kingdoms and it is the sugar present in blood as "blood sugar". The chain form of glucose is a polyhydric aldehyde, meaning that it has multiple hydroxyl groups and an aldehyde group. Fructose, also called levulose or "fruit sugar", is shown here in the chain and ring forms. The relationship between the chain and the ring forms of the sugars is discussed below. Fructose and glucose are the main carbohydrate constituents of honey.
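The epimer relationship can be made concrete with a small sketch. Each D-aldohexose is encoded by the side (R or L) its hydroxyl group occupies at carbons 2-4 of the Fischer projection (these configurations are standard facts; the string encoding itself is an assumption of this example):

```python
# -OH orientation at C2, C3, C4 in the Fischer projection
# (C5 has the same configuration in all D-sugars).
FISCHER_OH = {
    "glucose":   "RLR",
    "mannose":   "LLR",
    "galactose": "RLL",
}

def are_epimers(a, b):
    """Epimers differ in configuration at exactly one position."""
    diffs = sum(x != y for x, y in zip(FISCHER_OH[a], FISCHER_OH[b]))
    return diffs == 1

print(are_epimers("glucose", "mannose"))    # True  (C2 epimers)
print(are_epimers("glucose", "galactose"))  # True  (C4 epimers)
print(are_epimers("mannose", "galactose"))  # False (differ at C2 and C4)
```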






D-Tagatose (a ketose)    D-Fructose    Fructose    Galactose    Mannose
Heptoses
Sedoheptulose has the same structure as fructose, but it has one extra carbon.


D-Sedoheptulose
Chain and Ring forms
Many simple sugars can exist in a chain form or a ring form, as illustrated by the hexoses above. The ring form is favored in aqueous solutions, and the mechanism of ring formation is similar for most sugars. The glucose ring form is created when the oxygen on carbon number 5 links with the carbon comprising the carbonyl group (carbon number 1) and transfers its hydrogen to the carbonyl oxygen to create a hydroxyl group. The rearrangement produces alpha glucose when the hydroxyl group is on the opposite side of the -CH2OH group, or beta glucose when the hydroxyl group is on the same side as the -CH2OH group. Isomers such as these, which differ only in their configuration about their carbonyl carbon atom, are called anomers. The D in the name derives from the fact that natural glucose is dextrorotatory, i.e., it rotates polarized light to the right, but it now denotes a specific configuration. Monosaccharides forming a five-sided ring, like ribose, are called furanoses. Those forming six-sided rings, like glucose, are called pyranoses.

D-Glucose (an aldose)    α-D-Glucose    β-D-Glucose    Cyclization of Glucose

Stereochemistry
Saccharides with identical functional groups but different spatial configurations have different chemical and biological properties. Stereochemistry is the study of the arrangement of atoms in three-dimensional space. Stereoisomers are compounds in which the atoms are linked in the same order but differ in their spatial arrangement. Compounds that are mirror images of each other but are not identical, comparable to left and right shoes, are called enantiomers. The following structures illustrate the difference between β-D-glucose and β-L-glucose. Identical molecules can be made to correspond to each other by flipping and rotating. However, enantiomers cannot be made to correspond to their mirror images by flipping and rotating. Glucose is sometimes drawn in the "chair form" because it is a more accurate representation of the bond angles of the molecule. The "boat" form of glucose is unstable.

β-D-Glucose    β-L-Glucose    β-D-Glucose (chair form)

β-D-Glucose    β-L-Glucose    β-D-Glucose (boat form)

Sugar Alcohols, Amino Sugars, and Uronic Acids
Sugars may be modified by natural or laboratory processes into compounds that retain the basic configuration of saccharides, but have different functional groups. Sugar alcohols, also known as polyols, polyhydric alcohols, or poly alcohols, are the hydrogenated forms of the aldoses or ketoses. For example, glucitol, also known as sorbitol, has the same linear structure as the chain form of glucose, but the aldehyde (-CHO) group is replaced with a -CH2OH group. Other common sugar alcohols include the monosaccharides erythritol and xylitol and the disaccharides lactitol and maltitol. Sugar alcohols have about half the calories of sugars and are frequently used in low-calorie or "sugar-free" products.
Xylitol, which has the hydroxyl groups oriented like xylose, is a very common ingredient in "sugar-free" candies and gums because it is approximately as sweet as sucrose, but contains 40% less food energy. Although this sugar alcohol appears to be safe for humans, xylitol in relatively small doses can cause seizures, liver failure, and death in dogs.
Amino sugars or amino saccharides replace a hydroxyl group with an amino (-NH2) group. Glucosamine is an amino sugar used to treat cartilage damage and reduce the pain and progression of arthritis.
Uronic acids have a carboxyl group (-COOH) on the carbon that is not part of the ring. Their names retain the root of the monosaccharides, but the -ose sugar suffix is changed to -uronic acid. For example, galacturonic acid has the same configuration as galactose, and the structure of glucuronic acid corresponds to glucose.

Glucitol or Sorbitol (a sugar alcohol)    Glucosamine (an amino sugar)    Glucuronic acid (a uronic acid)
Disaccharides consist of two simple sugars
Disaccharide descriptions and components
Disaccharide    Description    Component monosaccharides
sucrose    common table sugar    glucose 1α→2 fructose
maltose    product of starch hydrolysis    glucose 1α→4 glucose
trehalose    found in fungi    glucose 1α→1 glucose
lactose    main sugar in milk    galactose 1β→4 glucose
melibiose    found in legumes    galactose 1α→6 glucose

    

Sucrose    Lactose    Maltose
Sucrose, also called saccharose, is ordinary table sugar refined from sugar cane or sugar beets. It is the main ingredient in turbinado sugar, evaporated or dried cane juice, brown sugar, and confectioner's sugar. Lactose has a molecular structure consisting of galactose and glucose. It is of interest because it is associated with lactose intolerance which is the intestinal distress caused by a deficiency of lactase, an intestinal enzyme needed to absorb and digest lactose in milk. Undigested lactose ferments in the colon and causes abdominal pain, bloating, gas, and diarrhea. Yogurt does not cause these problems because lactose is consumed by the bacteria that transform milk into yogurt.
Maltose consists of two α-D-glucose molecules with the alpha bond at carbon 1 of one molecule attached to the oxygen at carbon 4 of the second molecule. This is called a 1α→4 glycosidic linkage. Trehalose has two α-D-glucose molecules connected through carbon number one in a 1α→1 linkage. Cellobiose is a disaccharide consisting of two β-D-glucose molecules that have a 1β→4 linkage as in cellulose. Cellobiose has no taste, whereas maltose and trehalose are about one-third as sweet as sucrose.
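The disaccharide table above can be carried as a small data structure, so component sugars and glycosidic linkages can be looked up programmatically (a sketch; names and linkages follow the table):

```python
# (left sugar, linkage, right sugar) for each disaccharide in the table.
DISACCHARIDES = {
    "sucrose":   ("glucose", "1α→2", "fructose"),
    "maltose":   ("glucose", "1α→4", "glucose"),
    "trehalose": ("glucose", "1α→1", "glucose"),
    "lactose":   ("galactose", "1β→4", "glucose"),
    "melibiose": ("galactose", "1α→6", "glucose"),
}

def components(name):
    """Render a disaccharide as 'sugar linkage sugar'."""
    left, linkage, right = DISACCHARIDES[name]
    return f"{left} {linkage} {right}"

print(components("lactose"))  # galactose 1β→4 glucose
```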
Trisaccharides
Raffinose, also called melitose, is a trisaccharide that is widely found in legumes and cruciferous vegetables, including beans, peas, cabbage, brussels sprouts, and broccoli. It consists of galactose connected to sucrose via a 1α→6 glycosidic linkage. Humans cannot digest saccharides with this linkage and the saccharides are fermented in the large intestine by gas-producing bacteria. Tablets containing the enzyme alpha-galactosidase, such as Beano, are frequently used as digestive aids to prevent gas and bloating. The enzyme is derived from selected strains of the food grade fungus Aspergillus niger.

Raffinose

Polysaccharides are polymers of simple sugars
Many polysaccharides, unlike sugars, are insoluble in water. Dietary fiber includes polysaccharides and oligosaccharides that are resistant to digestion and absorption in the human small intestine but which are completely or partially fermented by microorganisms in the large intestine. The polysaccharides described below play important roles in nutrition, biology, or food preparation.
Starch
Starch is the major form of stored carbohydrate in plants. Starch is composed of a mixture of two substances: amylose, an essentially linear polysaccharide, and amylopectin, a highly branched polysaccharide. Both forms of starch are polymers of α-D-Glucose. Natural starches contain 10-20% amylose and 80-90% amylopectin. Amylose forms a colloidal dispersion in hot water (which helps to thicken gravies) whereas amylopectin is completely insoluble.
•    Amylose molecules consist typically of 200 to 20,000 glucose units which form a helix as a result of the bond angles between the glucose units.


Amylose
•    Amylopectin differs from amylose in being highly branched. Short side chains of about 30 glucose units are attached with 1α→6 linkages approximately every twenty to thirty glucose units along the chain. Amylopectin molecules may contain up to two million glucose units.



Amylopectin


The side branching chains are clustered together within the amylopectin molecule
Starches are transformed into many commercial products by hydrolysis using acids or enzymes as catalysts. Hydrolysis is a chemical reaction in which water is used to break long polysaccharide chains into smaller chains or into simple carbohydrates. The resulting products are assigned a Dextrose Equivalent (DE) value which is related to the degree of hydrolysis. A DE value of 100 corresponds to completely hydrolyzed starch, which is pure glucose (dextrose).

Dextrins are a group of low-molecular-weight carbohydrates produced by the hydrolysis of starch. Dextrins are mixtures of polymers of D-glucose units linked by 1α→4 or 1α→6 glycosidic bonds. Maltodextrin is partially hydrolyzed starch that is not sweet and has a DE value less than 20. Syrups, such as corn syrup made from corn starch, have DE values from 20 to 91. Commercial dextrose has DE values from 92 to 99. Corn syrup solids, which may be labeled as soluble corn fiber or resistant maltodextrin, are mildly sweet semi-crystalline or powdery amorphous products with DEs from 20 to 36, made by drying corn syrup in a vacuum or in spray dryers. Resistant maltodextrin and soluble corn fiber are not broken down in the digestive system, but they are partially fermented by colonic bacteria, thus providing only 2 Calories per gram instead of the 4 Calories per gram in corn syrup.

High fructose corn syrup (HFCS), commonly used to sweeten soft drinks, is made by treating corn syrup with enzymes to convert a portion of the glucose into fructose. Commercial HFCS contains from 42% to 55% fructose, with the remaining percentage being mainly glucose. Modified starch is starch that has been changed by mechanical processes or chemical treatments to stabilize starch gels made with hot water. Without modification, gelled starch-water mixtures lose viscosity or become rubbery after a few hours.
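The DE ranges given above translate directly into a classification function (a sketch; product names and boundaries follow the text):

```python
def classify_by_de(de):
    """Classify a starch-hydrolysis product by its Dextrose Equivalent."""
    if de == 100:
        return "pure glucose (completely hydrolyzed starch)"
    if de >= 92:
        return "commercial dextrose"
    if de >= 20:
        return "syrup"
    return "maltodextrin"

print(classify_by_de(15))   # maltodextrin
print(classify_by_de(42))   # syrup
print(classify_by_de(95))   # commercial dextrose
```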
Hydrogenated glucose syrup (HGS) is produced by hydrolyzing starch and then hydrogenating the resulting syrup to produce sugar alcohols like maltitol and sorbitol, along with hydrogenated oligo- and polysaccharides. Polydextrose (poly-D-glucose) is a synthetic, highly branched polymer with many types of glycosidic linkages, created by heating dextrose with an acid catalyst and purifying the resulting water-soluble polymer. Polydextrose is used as a bulking agent because it is tasteless and is similar to fiber in terms of its resistance to digestion. The name resistant starch is applied to dietary starch that is not degraded in the stomach and small intestine but is fermented by microflora in the large intestine.
Glycogen
Glucose is stored as glycogen in animal tissues by the process of glycogenesis. When glucose cannot be stored as glycogen or used immediately for energy, it is converted to fat. Glycogen is a polymer of α-D-Glucose identical to amylopectin, but the branches in glycogen tend to be shorter (about 13 glucose units) and more frequent. The glucose chains are organized globularly like branches of a tree originating from a pair of molecules of glycogenin, a protein with a molecular weight of 38,000 that acts as a primer at the core of the structure. Glycogen is easily converted back to glucose to provide energy.


Glycogen

Dextran
Dextran is a polysaccharide similar to amylopectin, but the main chains are formed by 1α→6 glycosidic linkages and the side branches are attached by 1α→3 or 1α→4 linkages. Dextran produced by oral bacteria adheres to the teeth, creating a film called plaque. It is also used commercially in confections, in lacquers, as a food additive, and as a plasma volume expander.


Dextran

Inulin
Some plants store carbohydrates in the form of inulin as an alternative, or in addition, to starch. Inulins are present in many vegetables and fruits, including onions, leeks, garlic, bananas, asparagus, chicory and Jerusalem artichokes. Inulins, also called fructans, are polymers consisting of fructose units that typically have a terminal glucose. Oligofructose has the same structure as inulin, but the chains consist of 10 or fewer fructose units. Oligofructose has approximately 30 to 50 percent of the sweetness of table sugar. Inulin is less soluble than oligofructose and has a smooth creamy texture that provides a fat-like mouth feel. Inulin and oligofructose are non-digestible by human intestinal enzymes, but they are totally fermented by colonic microflora. The short-chain fatty acids and lactate produced by fermentation contribute 1.5 kcal per gram of inulin or oligofructose. Inulin and oligofructose are used to replace fat or sugar and reduce the calories of foods like ice cream, dairy products, confections and baked goods.


Inulin     n = approx. 35
Cellulose
Cellulose is a polymer of β-D-Glucose, which in contrast to starch, is oriented with -CH2OH groups alternating above and below the plane of the cellulose molecule thus producing long, unbranched chains. The absence of side chains allows cellulose molecules to lie close together and form rigid structures. Cellulose is the major structural material of plants. Wood is largely cellulose, and cotton is almost pure cellulose. Cellulose can be hydrolyzed to its constituent glucose units by microorganisms that inhabit the digestive tract of termites and ruminants. Cellulose may be modified in the laboratory by treating it with nitric acid (HNO3) to replace all the hydroxyl groups with nitrate groups (-ONO2) to produce cellulose nitrate (nitrocellulose or guncotton) which is an explosive component of smokeless powder. Partially nitrated cellulose, known as pyroxylin, is used in the manufacture of collodion, plastics, lacquers, and nail polish.


Cellulose
Hemicellulose
The term "hemicellulose" is applied to the polysaccharide components of plant cell walls other than cellulose, or to polysaccharides in plant cell walls which are extractable by dilute alkaline solutions. Hemicelluloses comprise almost one-third of the carbohydrates in woody plant tissue. The chemical structure of hemicelluloses consists of long chains of a variety of pentoses, hexoses, and their corresponding uronic acids. Hemicelluloses may be found in fruit, plant stems, and grain hulls. Although hemicelluloses are not digestible, they can be fermented by yeasts and bacteria. The polysaccharides yielding pentoses on hydrolysis are called pentosans. Xylan is an example of a pentosan consisting of D-xylose units with 1β→4 linkages.


Xylan
Arabinoxylan
Arabinoxylans are polysaccharides found in the bran of grasses and grains such as wheat, rye, and barley. Arabinoxylans consist of a xylan backbone with L-arabinofuranose (L-arabinose in its 5-atom ring form) attached randomly by 1α→2 and/or 1α→3 linkages to the xylose units throughout the chain. Since xylose and arabinose are both pentoses, arabinoxylans are usually classified as pentosans. Arabinoxylans are important in the baking industry. The arabinose units bind water and produce viscous compounds that affect the consistency of dough, the retention of gas bubbles from fermentation in gluten-starch films, and the final texture of baked products.


Arabinoxylan
Chitin
Chitin is an unbranched polymer of N-Acetyl-D-glucosamine. It is found in fungi and is the principal component of arthropod and lower animal exoskeletons, e.g., insect, crab, and shrimp shells. It may be regarded as a derivative of cellulose, in which the hydroxyl group on the second carbon of each glucose unit has been replaced with an acetamido (-NH(C=O)CH3) group.


Chitin

Beta-Glucan
Beta-glucans consist of linear unbranched polysaccharides of β-D-Glucose like cellulose, but with one 1β→3 linkage for every three or four 1β→4 linkages. Beta-glucans form long cylindrical molecules containing up to about 250,000 glucose units. Beta-glucans occur in the bran of grains such as barley and oats, and they are recognized as being beneficial for reducing heart disease by lowering cholesterol and reducing the glycemic response. They are used commercially to modify food texture and as fat substitutes.


Beta-Glucan

Glycosaminoglycans
Glycosaminoglycans are found in the lubricating fluid of the joints and as components of cartilage, synovial fluid, vitreous humor, bone, and heart valves. Glycosaminoglycans are long unbranched polysaccharides containing repeating disaccharide units that contain either of two amino sugar compounds -- N-acetylgalactosamine or N-acetylglucosamine, and a uronic acid such as glucuronate (glucose where carbon six forms a carboxyl group). Glycosaminoglycans are negatively charged, highly viscous molecules sometimes called mucopolysaccharides. The physiologically most important glycosaminoglycans are hyaluronic acid, dermatan sulfate, chondroitin sulfate, heparin, heparan sulfate, and keratan sulfate. Chondroitin sulfate is composed of β-D-glucuronate linked to the third carbon of N-acetylgalactosamine-4-sulfate as illustrated here. Heparin is a complex mixture of linear polysaccharides that have anticoagulant properties and vary in the degree of sulfation of the saccharide units.

Chondroitin Sulfate    Heparin

Agar and Carrageenan
Agar (agar agar) is extracted from seaweed and is used in many foods as a gelling agent. Agar is a polymer of agarobiose, a disaccharide composed of D-galactose and 3,6-anhydro-L-galactose. Highly refined agar is used as a medium for culturing bacteria, cellular tissues, and for DNA fingerprinting. Agar is used as an ingredient in desserts in Japan and other Asian countries. The gels produced with agar have a crispier texture than the desserts made with animal gelatin.
Carrageenan is a generic term for several polysaccharides also extracted from seaweed. Carrageenan compounds differ from agar in that they have sulfate groups (-OSO3-) in place of some hydroxyl groups. Carrageenan is also used for thickening, suspending, and gelling food products.

Agarobiose is the repeating disaccharide unit in agar.   

Alginic acid, Alginates
Alginate is extracted from seaweeds, such as giant kelp (Macrocystis pyrifera). The chemical constituents of alginate are random sequences of chains of β-D-mannuronic and α-L-guluronic acids attached with 1→4 linkages. Alginates are insoluble in water, but absorb water readily. They are useful as gelling and thickening agents. Alginates are used in the manufacture of textiles, paper, and cosmetics. The sodium salt of alginic acid, sodium alginate, is used in the food industry to increase viscosity and as an emulsifier. Alginates are found in food products such as ice cream and in slimming aids, where they serve as appetite suppressants. In dentistry, alginates are used to make dental impressions.
 
Alginic acid

Galactomannan
Galactomannans are polysaccharides consisting of a mannose backbone with galactose side groups. The mannopyranose units are linked with 1β→4 linkages to which galactopyranose units are attached with 1α→6 linkages. Galactomannans are present in several vegetable gums that are used to increase the viscosity of food products. These are the approximate ratios of mannose to galactose for the following gums:
•    Fenugreek gum, mannose:galactose 1:1
•    Guar gum, mannose:galactose 2:1
•    Tara gum, mannose:galactose 3:1
•    Locust bean gum or Carob gum, mannose:galactose 4:1
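The ratios above can be stored and matched programmatically (a sketch; gum names follow the list, and the ratio comparison avoids floating-point division):

```python
# mannose:galactose ratios for common galactomannan gums.
GUM_RATIOS = {
    "fenugreek gum": (1, 1),
    "guar gum": (2, 1),
    "tara gum": (3, 1),
    "locust bean gum": (4, 1),
}

def gum_for_ratio(mannose, galactose):
    """Return the gum whose ratio matches, or None."""
    for name, (m, g) in GUM_RATIOS.items():
        if m * galactose == g * mannose:  # cross-multiply to compare ratios
            return name
    return None

print(gum_for_ratio(2, 1))  # guar gum
print(gum_for_ratio(8, 2))  # locust bean gum (reduces to 4:1)
```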
Guar is a legume that has been traditionally cultivated as livestock feed. Guar gum is also known by the name Cyamopsis tetragonoloba, the Latin binomial of the guar bean or cluster bean from which it is obtained. Guar gum is the ground endosperm of the seeds. Approximately 85% of guar gum is guaran, a water-soluble polysaccharide consisting of linear chains of mannose with 1β→4 linkages to which galactose units are attached with 1α→6 linkages. The ratio of mannose to galactose is 2:1. Guar gum has five to eight times the thickening power of starch and has many uses in the pharmaceutical industry, as a food stabilizer, and as a source of dietary fiber.

Guaran is the principal polysaccharide in guar gum.    

Pectin
Pectin is a polysaccharide that acts as a cementing material in the cell walls of all plant tissues. The white portion of the rind of lemons and oranges contains approximately 30% pectin. Pectin is the methylated ester of polygalacturonic acid, which consists of chains of 300 to 1000 galacturonic acid units joined with 1α→4 linkages. The Degree of Esterification (DE) affects the gelling properties of pectin. The structure shown here has three methyl ester groups (-COOCH3) for every two carboxyl groups (-COOH), hence it has a 60% degree of esterification, normally called a DE-60 pectin. Pectin is an important ingredient of fruit preserves, jellies, and jams.

Pectin is a polymer of α-Galacturonic acid with a variable number of methyl ester groups.    
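The 60% figure quoted for a DE-60 pectin follows from simple arithmetic, sketched here (the function name is illustrative):

```python
def degree_of_esterification(methyl_esters, free_carboxyls):
    """Percent of galacturonic acid units that carry a methyl ester."""
    return 100 * methyl_esters / (methyl_esters + free_carboxyls)

# Three -COOCH3 groups for every two -COOH groups, as in the structure above:
print(degree_of_esterification(3, 2))  # 60.0 -> a DE-60 pectin
```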
Xanthan Gum
Xanthan gum is a polysaccharide with a β-D-glucose backbone like cellulose, but every second glucose unit is attached to a trisaccharide consisting of mannose, glucuronic acid, and mannose. The mannose closest to the backbone has an acetic acid ester on carbon 6, and the mannose at the end of the trisaccharide is linked through carbons 6 and 4 to the second carbon of pyruvic acid. Xanthan Gum is produced by the bacterium Xanthomonas campestris, which is found on cruciferous vegetables such as cabbage and cauliflower. The negatively charged carboxyl groups on the side chains cause the molecules to form very viscous fluids when mixed with water. Xanthan gum is used as a thickener for sauces, to prevent ice crystal formation in ice cream, and as a low-calorie substitute for fat. Xanthan gum is frequently mixed with guar gum because the viscosity of the combination is greater than when either one is used alone.

The repeating unit of Xanthan Gum   
Glucomannan
Glucomannan is a dietary fiber obtained from tubers of Amorphophallus konjac cultivated in Asia. Flour from the konjac tubers is used to make Japanese shirataki noodles, also called konnyaku noodles, which are very low in calories. Glucomannan is used as a hunger suppressant because it produces a feeling of fullness by creating very viscous solutions that retard absorption of the nutrients in food. One gram of this soluble polysaccharide can absorb up to 200 ml of water, so it is also used for absorbent articles such as disposable diapers and sanitary napkins. The polysaccharide consists of glucose (G) and mannose (M) in a proportion of 5:8 joined by 1β→4 linkages. The basic polymeric repeating unit has the pattern: GGMMGMMMMMGGM. Short side chains of 11-16 monosaccharides occur at intervals of 50-60 units of the main chain attached by 1β→3 linkages. Also, acetate groups on carbon 6 occur at every 9-19 units of the main chain. Hydrolysis of the acetate groups favors the formation of intermolecular hydrogen bonds that are responsible for the gelling action.

A portion (GGMM) of the glucomannan repeating unit.
The second glucose has an acetate group.
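The repeating pattern given above encodes the 5:8 glucose:mannose proportion, which a one-line count confirms:

```python
# The basic glucomannan repeating unit from the text (G = glucose, M = mannose).
pattern = "GGMMGMMMMMGGM"
print(pattern.count("G"), pattern.count("M"))  # 5 8
```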

2.2 Lipids

2.2.1 Classification of lipids

A.    Simple Lipids
1.    Triglycerides, neutral fats: Found in adipose tissue, butterfat, lard, suet, fish oils, olive oil, corn oil, etc. Esters of three molecules of fatty acids plus one molecule of glycerol; the fatty acids may all be different.
2.    Waxes: beeswax, head oil of sperm whale, cerumen, carnauba oil, and lanolin. Composed of esters of fatty acids with alcohol other than glycerol; of industrial and medicinal importance.
B.    Compound Lipids
1.    Phospholipids (phosphatides): Found chiefly in animal tissues. Substituted fats, consisting of phosphatidic acid; composed of glycerol, fatty acids, and phosphoric acid bound in ester linkage to a nitrogenous base.
2.    Lecithin: Found in brain, egg yolk, and organ meats. Phosphatidyl choline; phosphatide linked to choline; a lipotropic agent; important in fat metabolism and transport; used as emulsifying agent in the food industry.
3.    Cephalin: Occurs predominantly in nervous tissue. Phosphatidyl ethanolamine; phosphatide linked to serine or ethanolamine; plays a role in blood clotting.
4.    Plasmalogen: Found in brain, heart, and muscle. Phosphatidal ethanolamine or choline; phosphatide containing an aliphatic aldehyde.
5.    Lipositol: Found in brain, heart, kidneys, and plant tissues together with phytic acid. Phosphatidyl inositol; phosphatide linked to inositol; rapid synthesis and degradation in brain; evidence for role in cell transport processes.
6.    Sphingomyelin: Found in nervous tissue, brain, and red blood cells. Sphingosine-containing phosphatide; yields fatty acids, choline, sphingosine, phosphoric acid, and no glycerol; source of phosphoric acid in body tissue.
7.    Glycolipids:
a.    Cerebroside: myelin sheaths of nerves, brain, and other tissues. Yields on hydrolysis fatty acids, sphingosine, and galactose (or glucose), but no glycerol; includes kerasin and phrenosin.
b.    Ganglioside: brain, nerve tissue, and other selected tissues, notably spleen; contains a ceramide linked to hexose (glucose or galactose), neuraminic acid, sphingosine, and fatty acids.
c.    Sulfolipid: white matter of brain, liver, and testicle; also plant chloroplast. Sulfur-containing glycolipid; sulfate present in ester linkage to galactose.
d.    Proteolipids: brain and nerve tissue. Complexes of protein and lipids having solubility properties of lipids.
C.    Terpenoids and Steroids
1.    Terpenes: Found in essential oils, resin acids, rubber, plant pigments such as carotenes and lycopenes, Vitamin A, and camphor. Large group of compounds made up of repeating isoprene units; Vitamin A is of nutritional interest; the fat-soluble vitamins E and K are also related chemically to the terpenes.
2.    Sterols:
a.    Cholesterol: found in egg yolk, dairy products, and animal tissues. A constituent of bile acids and a precursor of Vitamin D.
b.    Ergosterol: found in plant tissues, yeast, and fungi. Converted to Vitamin D2 on irradiation.
c.    7-dehydrocholesterol: found in animal tissues and underneath the skin. Converted to Vitamin D3 on irradiation.
3.    Androgens and estrogens: (Sex hormones) Found in ovaries and testes.
4.    Adrenal corticosteroids: adrenal cortex, blood.
D.    Derived lipids
1.    Fatty acids: occur in plant and animal foods; also exist in complex forms combined with other substances. Obtained from hydrolysis of fats; usually contain an even number of carbon atoms and are straight-chain derivatives.
Classification of fatty acids is based on the length of the carbon chain (short, medium, or long); the number of double bonds (saturated, monounsaturated, or polyunsaturated); or essentiality in the diet (essential or non-essential). A current designation is based on the position of the endmost double bond, counting from the methyl (CH3) carbon, called the omega end. The most important omega fatty acids are: Omega 6 - linoleic and arachidonic acids; Omega 3 - linolenic, eicosapentaenoic, and docosahexaenoic acids.
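The omega designation described above can be computed from the chain length and the double-bond positions. A minimal sketch; the delta (carboxyl-end) double-bond positions used in the examples are standard textbook values, not given in this text:

```python
def omega_number(chain_length, delta_positions):
    """Omega number = carbons from the methyl end to the endmost double bond."""
    return chain_length - max(delta_positions)

print(omega_number(18, [9, 12]))      # linoleic acid (18:2)  -> 6 (omega-6)
print(omega_number(18, [9, 12, 15]))  # linolenic acid (18:3) -> 3 (omega-3)
print(omega_number(18, [9]))          # oleic acid (18:1)     -> 9 (omega-9)
```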
2.2.2    Properties of lipids
For an understanding of the place of fats and oils in the diet and in the arts, some elementary knowledge of their chemical and physical properties is essential.
Chemical Composition
As already stated, fats may be decomposed into glycerin and fatty acids. This manner of decomposition takes place only in the presence of moisture. For each molecule (a molecule is the smallest particle of a substance that can exist and still exhibit the properties of that substance) of glycerin set free there are set free three molecules of fatty acid. In the process three molecules of water are taken up, partly to help re-form the glycerin and partly to help re-form the fatty acids. Conversely (in the laboratory) the fat may be reconstituted from glycerin and fatty acid, in which event three molecules of water are set free for each molecule of fat synthesized.
The process of splitting a substance whereby water is taken up is known to chemists as hydrolysis, a word which is merely Greek for cleavage by water. The process is often termed saponification, since it was first observed to take place in the manufacture of soap. The term saponification (instead of the more exact term hydrolysis) is, however, applied indiscriminately and inappropriately to any chemical change of this nature, whether or not soap is formed. Nowadays in industry fats are very often converted into glycerin and fatty acids -- that is, hydrolyzed -- without the formation of any soap whatever. Soap is merely the combination of a fatty acid with a metal, i.e., it is a salt. The commonest soaps are the fatty-acid salts of sodium (sodium is a soft, white metal obtained from common salt, sodium chloride) and potassium. Hard soaps are sodium salts; soft soaps, potassium salts. The fatty-acid salts of ammonium are also sometimes used for cleansing. Only a few other soaps are of practical importance, for example lead soaps which are used in medicinal plasters, zinc soaps which are used in ointments, and aluminum soaps which are used in waterproofing. Very few of the salts of fatty acids have the properties of common soap. Most of them are but slightly soluble in water, and therefore do not yield suds and have little or no detergent (i.e., cleansing) action. All are nevertheless termed soaps by chemists.
Triglycerides and Fatty Acids
As stated above, fats may be split into glycerin and fatty acids, the resulting mixture containing three molecules of fatty acid for each molecule of glycerin. Because of this proportion of acid to glycerin, the chemical compounds found in the fat before it was split are known to chemists as triglycerides. Since there are a number of different fatty acids that occur in natural fats, a great many different triglycerides are encountered in nature. These are named according to the fatty acid or acids they contain. Thus triolein is the triglyceride of oleic acid, tripalmitin that of palmitic acid, tristearin that of stearic acid, while monopalmitin-distearin contains, as the name indicates, one molecule of palmitic and two of stearic acid. While a large variety of fatty acids is found in natural fats and oils, only a few of them are of outstanding commercial importance. These are myristic acid, lauric acid, palmitic acid, stearic acid, oleic acid, linoleic acid, and linolenic acid.
The formulas of these acids (for the common natural isomers) are as follows:
Acid    Elementary Formula    Constitutional Formula
Lauric    C12H24O2    CH3(CH2)10COOH
Myristic    C14H28O2    CH3(CH2)12COOH
Palmitic    C16H32O2    CH3(CH2)14COOH
Stearic    C18H36O2    CH3(CH2)16COOH
Oleic    C18H34O2    CH3(CH2)7CH=CH(CH2)7COOH
Linoleic    C18H32O2    CH3(CH2)4CH=CHCH2CH=CH(CH2)7COOH
Linolenic    C18H30O2    CH3CH2CH=CHCH2CH=CHCH2CH=CH(CH2)7COOH
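The elementary formulas in the table follow a simple pattern: a saturated chain of n carbons is CnH2nO2, and each carbon-carbon double bond removes two hydrogens. A minimal Python sketch (the function name is illustrative, not from the text):

```python
def fatty_acid_formula(carbons: int, double_bonds: int = 0) -> str:
    """Elementary formula of a straight-chain fatty acid.
    A saturated chain is CnH2nO2; each C=C double bond removes two hydrogens."""
    hydrogens = 2 * carbons - 2 * double_bonds
    return f"C{carbons}H{hydrogens}O2"

# Check against the table above (chain length, number of double bonds):
assert fatty_acid_formula(12, 0) == "C12H24O2"   # lauric
assert fatty_acid_formula(18, 0) == "C18H36O2"   # stearic
assert fatty_acid_formula(18, 1) == "C18H34O2"   # oleic
assert fatty_acid_formula(18, 2) == "C18H32O2"   # linoleic
assert fatty_acid_formula(18, 3) == "C18H30O2"   # linolenic
```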
Fats and oils being mere mechanical mixtures of triglycerides, it is possible in many cases to separate them more or less completely into their component triglycerides by simple mechanical means, chilling and pressure. Such processes have considerable commercial importance, as, for example, the separation of lard into lard oil and lard stearin, or of beef tallow into oleo oil and oleostearin.
The viscosity of a fat is a property of commercial significance, especially to manufacturers of lubricants. It is usually estimated by comparing the length of time it takes a given volume of oil (or melted fat) to flow through a tube of small bore, or through a small orifice, with the time it takes an identical volume of water. Castor oil has the highest viscosity of any fat that is fluid at ordinary temperatures. Olive oil has the highest viscosity of any of the common vegetable oils. The viscosities vary greatly with the temperature. When fats are cooled to the solidifying point they can no longer be said to be viscous. They have become plastic.
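The efflux-time comparison described above reduces to a one-line calculation: in a gravity-driven viscometer the flow time is roughly proportional to the kinematic viscosity, so the ratio of times approximates the ratio of viscosities. The numbers below are illustrative, not measured values:

```python
# Relative viscosity by efflux time: the time for a fixed volume of oil to
# drain through a narrow tube is compared with the time for an identical
# volume of water. The ratio approximates relative kinematic viscosity.
def relative_viscosity(t_sample_s: float, t_water_s: float) -> float:
    return t_sample_s / t_water_s

# e.g. an oil draining in 840 s against 10 s for water:
print(relative_viscosity(840.0, 10.0))  # 84.0
```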
2.2.3    Saturated and unsaturated fatty acids
Fatty acids are classified as saturated or unsaturated, depending on whether they contain carbon-carbon double bonds; they also differ in chain length.
Unsaturated fatty acids


Comparison of the trans isomer (top) and the cis isomer, oleic acid (bottom).
Unsaturated fatty acids resemble saturated fatty acids, except that the chain contains one or more double bonds. The two carbon atoms in the chain that are bound on either side of the double bond can occur in a cis or trans configuration.
cis
A cis configuration means that adjacent hydrogen atoms are on the same side of the double bond. The rigidity of the double bond freezes its conformation and, in the case of the cis isomer, causes the chain to bend and restricts the conformational freedom of the fatty acid. The more double bonds the chain has in the cis configuration, the less flexibility it has. When a chain has many cis bonds, it becomes quite curved in its most accessible conformations. For example, oleic acid, with one double bond, has a "kink" in it, whereas linoleic acid, with two double bonds, has a more pronounced bend. Alpha-linolenic acid, with three double bonds, favors a hooked shape. The effect of this is that, in restricted environments, such as when fatty acids are part of a phospholipid in a lipid bilayer, or triglycerides in lipid droplets, cis bonds limit the ability of fatty acids to be closely packed, and therefore could affect the melting temperature of the membrane or of the fat.
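The packing effect described above shows up directly in melting points. The values below are approximate literature figures for the 18-carbon acids, included here only to illustrate the trend that each additional cis double bond lowers the melting temperature:

```python
# Approximate melting points (degrees C, rounded) for 18-carbon fatty acids.
# More cis double bonds -> poorer packing -> lower melting temperature.
melting_point_c = {
    "stearic (18:0)":           69,   # saturated, packs tightly
    "oleic (18:1 cis)":         13,
    "linoleic (18:2 cis)":      -5,
    "alpha-linolenic (18:3)":  -11,
}
points = list(melting_point_c.values())
assert points == sorted(points, reverse=True)  # strictly decreasing trend
```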
trans
A trans configuration, by contrast, means that the two adjacent hydrogen atoms are bound to opposite sides of the double bond. As a result, trans double bonds do not cause the chain to bend much, and the shape of such fatty acids is similar to that of straight saturated fatty acids.
In most naturally occurring unsaturated fatty acids, each double bond has 3n carbon atoms after it, for some positive integer n, and all are cis bonds. Most fatty acids in the trans configuration (trans fats) are not found in nature and are the result of human processing (e.g., hydrogenation).
The differences in geometry between the various types of unsaturated fatty acids, as well as between saturated and unsaturated fatty acids, play an important role in biological processes, and in the construction of biological structures (such as cell membranes).
Saturated fatty acids
Saturated fatty acids are long-chain carboxylic acids that usually have between 12 and 24 carbon atoms and have no double bonds. Thus, saturated fatty acids are saturated with hydrogen (since double bonds reduce the number of hydrogens on each carbon). Because saturated fatty acids have only single bonds, each carbon atom within the chain has 2 hydrogen atoms (except for the omega carbon at the end that has 3 hydrogens).
Examples: 1) lauric acid (12 C), 2) myristic acid (14 C), 3) palmitic acid (16 C), 4) stearic acid (18 C), 5) arachidic acid (20 C).
Essential fatty acids
The human body can produce all but two of the fatty acids it needs. These two, linoleic acid (LA) and alpha-linolenic acid (ALA), are widely distributed in plant oils. In addition, fish, flax, and hemp oils contain the longer-chain omega-3 fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). Other marine oils, such as seal oil, also contain significant amounts of docosapentaenoic acid (DPA), which is also an omega-3 fatty acid. Although the body can, to some extent, convert ALA into these longer-chain omega-3 fatty acids, the omega-3 fatty acids found in marine oils help fulfill the requirement for essential fatty acids (and have been shown to have wholesome properties of their own).
Since they cannot be made in the body from other substrates and must be supplied in food, they are called essential fatty acids. Mammals lack the ability to introduce double bonds in fatty acids beyond carbons 9 and 10. Hence linoleic acid and alpha-linolenic acid are essential fatty acids for humans.
In the body, essential fatty acids are primarily used to produce hormone-like substances that regulate a wide range of functions, including blood pressure, blood clotting, blood lipid levels, the immune response, and the inflammatory response to injury and infection.
Essential fatty acids are polyunsaturated fatty acids: linoleic acid and alpha-linolenic acid are the parent compounds of the omega-6 and omega-3 fatty acid series, respectively. They are essential in the human diet because the body has no synthetic mechanism for them. Humans can easily make saturated fatty acids or monounsaturated fatty acids with a double bond at the omega-9 position, but do not have the enzymes necessary to introduce a double bond at the omega-3 or omega-6 position.
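The omega-3/omega-6 naming used here counts carbons from the methyl (omega) end of the chain, while the more common delta numbering counts from the carboxyl carbon. A small sketch of the conversion (the double-bond positions are the standard ones for these acids; the function name is illustrative):

```python
# Omega (n-) class from delta numbering: for a chain of N carbons whose
# last double bond starts at delta position d, the omega class is N - d.
def omega_class(chain_length: int, delta_positions: list[int]) -> int:
    return chain_length - max(delta_positions)

assert omega_class(18, [9, 12]) == 6      # linoleic acid is omega-6
assert omega_class(18, [9, 12, 15]) == 3  # alpha-linolenic acid is omega-3
```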
The essential fatty acids are important in several human body systems, including the immune system and blood pressure regulation, since they are used to make compounds such as prostaglandins. The brain has increased amounts of linoleic and alpha-linolenic acid derivatives. Changes in the levels and balance of these fatty acids due to a typical Western diet, rich in omega-6 and poor in omega-3 fatty acids, are alleged to be associated with depression and behavioral change, including violence. The actual connection, if any, is still under investigation. Further, changing to a diet richer in omega-3 fatty acids, or consumption of supplements to compensate for a dietary imbalance, has been associated with reduced violent behavior and increased attention span, but the mechanisms for the effect are still unclear. So far, at least three human studies have shown results that support this: two school studies as well as a double-blind study in a prison. Fatty acids also play an important role in the life and death of cardiac cells because they are essential fuels for the mechanical and electrical activities of the heart.
Trans fatty acids
A trans fatty acid (commonly shortened to trans fat) is an unsaturated fatty acid molecule that contains a trans double bond between carbon atoms, which makes the molecule less 'kinked' in comparison to fatty acids with cis double bonds. These bonds are characteristically produced during industrial hydrogenation of vegetable oils. Since they are also produced in bacterial metabolism, ruminant fats (e.g. in milk) also contain about 4% trans fatty acids. Research suggests that amounts of trans fats correlate with circulatory diseases such as atherosclerosis and coronary heart disease more than the same amount of cis fats, for reasons that are not fully understood. It is known, however, that trans fats, just like saturated fats, raise LDL ("bad") cholesterol and lower HDL ("good") cholesterol. They have also been shown to have other harmful effects, such as increasing triglycerides and lipoproteins. They are also thought to cause more inflammation, which is thought to occur through damage to the cells lining the blood vessels.
2.2.4    Cholesterol
Cholesterol is a waxy steroid metabolite found in the cell membranes and transported in the blood plasma of all animals. It is an essential structural component of mammalian cell membranes, where it is required to establish proper membrane permeability and fluidity. In addition, cholesterol is an important component for the manufacture of bile acids, steroid hormones, and fat-soluble vitamins including Vitamin A, Vitamin D, Vitamin E and Vitamin K. Cholesterol is the principal sterol synthesized by animals, but small quantities are synthesized in other eukaryotes, such as plants and fungi. It is almost completely absent among prokaryotes, which include bacteria. Although cholesterol is an important and necessary molecule for animals, a high level of serum cholesterol is an indicator for diseases such as heart disease.  François Poulletier de la Salle first identified cholesterol in solid form in gallstones, in 1769. However, it was only in 1815 that chemist Eugène Chevreul named the compound "cholesterine".
Dietary sources
Animal fats are complex mixtures of triglycerides, with lesser amounts of phospholipids and cholesterol. As a consequence, all foods containing animal fat contain cholesterol to varying extents. Major dietary sources of cholesterol include cheese, egg yolks, beef, pork, poultry, and shrimp. Human breast milk also contains significant quantities of cholesterol.
The amount of cholesterol present in plant-based foods is generally much lower than in animal-based sources. In addition, plant products such as flax seeds and peanuts contain cholesterol-like compounds called phytosterols, which are suggested to help lower serum cholesterol levels.
The view that a change in diet (to be specific, a reduction in dietary fat and cholesterol) can lower blood cholesterol levels, and thus reduce the likelihood of developing, among other conditions, coronary heart disease (CHD), has been challenged. An alternative view is that any reductions in dietary cholesterol intake are counteracted by organs such as the liver, which increase or decrease cholesterol production to keep blood cholesterol levels constant. Another view is that although saturated fat and dietary cholesterol also raise blood cholesterol, these nutrients are not as effective at doing so as animal protein.
Synthesis
About 20–25% of total daily cholesterol production occurs in the liver; other sites of high synthesis rates include the intestines, adrenal glands, and reproductive organs. Synthesis within the body starts with one molecule of acetyl-CoA and one molecule of acetoacetyl-CoA, which condense, with uptake of water, to form 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA). This molecule is then reduced to mevalonate by the enzyme HMG-CoA reductase. This step is the regulated, rate-limiting and irreversible step in cholesterol synthesis and is the site of action for the statin drugs (HMG-CoA reductase competitive inhibitors).
Mevalonate is then converted to isopentenyl pyrophosphate in three reactions that require ATP, the last of which is a decarboxylation; isopentenyl pyrophosphate is a key metabolite for various biological reactions. Three molecules of isopentenyl pyrophosphate condense to form farnesyl pyrophosphate through the action of geranyl transferase. Two molecules of farnesyl pyrophosphate then condense to form squalene by the action of squalene synthase in the endoplasmic reticulum. Oxidosqualene cyclase then cyclizes squalene to form lanosterol. Finally, lanosterol is converted to cholesterol.
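The carbon bookkeeping in this pathway can be sketched as follows (a simplification that tracks only carbon counts, not the actual chemistry or cofactors; the loss of three carbons between lanosterol and cholesterol is a detail not spelled out in the text above):

```python
# Carbon counts along the cholesterol synthesis pathway.
C5_IPP = 5                     # isopentenyl pyrophosphate (C5)
farnesyl_pp = 3 * C5_IPP       # three C5 units condense -> C15
squalene = 2 * farnesyl_pp     # two C15 units condense -> C30
lanosterol = squalene          # cyclization changes no carbon count -> C30
cholesterol = lanosterol - 3   # three methyl carbons are removed -> C27

assert farnesyl_pp == 15
assert squalene == 30
assert cholesterol == 27
```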
Metabolism, recycling and excretion
Cholesterol is oxidized by the liver into a variety of bile acids. These in turn are conjugated with glycine, taurine, glucuronic acid, or sulfate. A mixture of conjugated and non-conjugated bile acids, along with cholesterol itself, is excreted from the liver into the bile. Approximately 95% of the bile acids are reabsorbed from the intestines, and the remainder are lost in the feces. The excretion and reabsorption of bile acids form the basis of the enterohepatic circulation, which is essential for the digestion and absorption of dietary fats. Under certain circumstances, when more concentrated, as in the gallbladder, cholesterol crystallises and is the major constituent of most gallstones, although lecithin and bilirubin gallstones also occur less frequently.
Function
Cholesterol is required to build and maintain membranes; it regulates membrane fluidity over the range of physiological temperatures. The hydroxyl group on cholesterol interacts with the polar head groups of the membrane phospholipids and sphingolipids, while the bulky steroid and the hydrocarbon chain are embedded in the membrane, alongside the nonpolar fatty acid chain of the other lipids. In this structural role, cholesterol reduces the permeability of the plasma membrane to protons (positive hydrogen ions) and sodium ions.
Within the cell membrane, cholesterol also functions in intracellular transport, cell signaling and nerve conduction. Cholesterol is essential for the structure and function of invaginated caveolae and clathrin-coated pits, including caveola-dependent and clathrin-dependent endocytosis. The role of cholesterol in such endocytosis can be investigated by using methyl beta-cyclodextrin (MβCD) to remove cholesterol from the plasma membrane. Recently, cholesterol has also been implicated in cell signaling processes, assisting in the formation of lipid rafts in the plasma membrane. In many neurons, a myelin sheath, which is rich in cholesterol because it is derived from compacted layers of Schwann cell membrane, provides insulation for more efficient conduction of impulses.
Within cells, cholesterol is the precursor molecule in several biochemical pathways. In the liver, cholesterol is converted to bile, which is then stored in the gallbladder. Bile contains bile salts, which solubilize fats in the digestive tract and aid in the intestinal absorption of fat molecules as well as the fat-soluble vitamins, Vitamin A, Vitamin D, Vitamin E, and Vitamin K. Cholesterol is an important precursor molecule for the synthesis of Vitamin D and the steroid hormones, including the adrenal gland hormones cortisol and aldosterone as well as the sex hormones progesterone, estrogens, and testosterone, and their derivatives. Some research indicates that cholesterol may act as an antioxidant.
*****************
UNIT – III
3.1 Enzymes
3.1.1 Classification
The first Enzyme Commission gave much thought to the question of a systematic and logical nomenclature for enzymes, and finally recommended that there should be two nomenclatures for enzymes, one systematic and one working or trivial. The systematic name of an enzyme, formed in accordance with definite rules, shows its action as exactly as possible, thus identifying the enzyme precisely. The trivial name is sufficiently short for general use, but not necessarily very systematic; in a great many cases it was a name already in current use. The introduction of (often cumbersome) systematic names was strongly criticised. In many cases the reaction catalysed is not much longer than the systematic name and can serve just as well for identification, especially in conjunction with the code number.
The Commission for Revision of Enzyme Nomenclature discussed this problem at length, and a change in emphasis was made. It was decided to give the trivial names more prominence in the Enzyme List; they now follow immediately after the code number, and are described as Common Name. Also, in the index the common names are indicated by an asterisk. Nevertheless, it was decided to retain the systematic names as the basis for classification for the following reasons:
(i) the code number alone is only useful for identification of an enzyme when a copy of the Enzyme List is at hand, whereas the systematic name is self-explanatory;
(ii) the systematic name stresses the type of reaction, the reaction equation does not;
(iii) systematic names can be formed for new enzymes by the discoverer, by application of the rules, but code numbers should not be assigned by individuals;
(iv) common names for new enzymes are frequently formed as a condensed version of the systematic name; therefore, the systematic names are helpful in finding common names that are in accordance with the general pattern.
The Enzyme List contains one or more references for each enzyme. It should be stressed that no attempt has been made to provide a complete bibliography, or to refer to the first description of an enzyme. The references are intended to provide sufficient evidence for the existence of an enzyme catalysing the reaction as set out. Where there is a major paper describing the purification and specificity of an enzyme, or a major review article, this has been quoted to the exclusion of earlier and later papers. In some cases separate references are given for animal, plant and bacterial enzymes.
Scheme for the classification of enzymes and the generation of EC numbers
The first Enzyme Commission, in its report in 1961, devised a system for classification of enzymes that also serves as a basis for assigning code numbers to them. These code numbers, prefixed by EC, which are now widely in use, contain four elements separated by points, with the following meaning:
(i) the first number shows to which of the six main divisions (classes) the enzyme belongs,
(ii) the second figure indicates the subclass,
(iii) the third figure gives the sub-subclass,
(iv) the fourth figure is the serial number of the enzyme in its sub-subclass.
The subclasses and sub-subclasses are formed according to principles indicated below.
The main divisions and subclasses are:
Class 1. Oxidoreductases
To this class belong all enzymes catalysing oxidoreduction reactions. The substrate that is oxidized is regarded as hydrogen donor. The systematic name is based on donor: acceptor oxidoreductase. The common name will be dehydrogenase, wherever this is possible; as an alternative, reductase can be used. Oxidase is only used in cases where O2 is the acceptor.
The second figure in the code number of the oxidoreductases, unless it is 11, 13, 14 or 15, indicates the group in the hydrogen (or electron) donor that undergoes oxidation: 1 denotes a -CHOH- group, 2 a -CHO or -CO-COOH group or carbon monoxide, and so on, as listed in the key.
The third figure, except in subclasses EC 1.11, EC 1.13, EC 1.14 and EC 1.15, indicates the type of acceptor involved: 1 denotes NAD(P)+, 2 a cytochrome, 3 molecular oxygen, 4 a disulfide, 5 a quinone or similar compound, 6 a nitrogenous group, 7 an iron-sulfur protein and 8 a flavin. In subclasses EC 1.13 and EC 1.14 a different classification scheme is used and sub-subclasses are numbered from 11 onwards.
It should be noted that in reactions with a nicotinamide coenzyme this is always regarded as acceptor, even if this direction of the reaction is not readily demonstrated. The only exception is the subclass EC 1.6, in which NAD(P)H is the donor; some other redox catalyst is the acceptor.
Although not used as a criterion for classification, the two hydrogen atoms at carbon-4 of the dihydropyridine ring of nicotinamide nucleotides are not equivalent, and hydrogen is transferred stereospecifically.
Class 2. Transferases
Transferases are enzymes transferring a group, e.g. a methyl group or a glycosyl group, from one compound (generally regarded as donor) to another compound (generally regarded as acceptor). The systematic names are formed according to the scheme donor: acceptor group transferase. The common names are normally formed according to acceptor group transferase or donor group transferase. In many cases, the donor is a cofactor (coenzyme) charged with the group to be transferred. Some transferase reactions can be viewed in different ways. For example, the enzyme-catalysed reaction X-Y + Z = X + Z-Y   may be regarded either as a transfer of the group Y from X to Z, or as a breaking of the X-Y bond by the introduction of Z. Where Z represents phosphate or arsenate, the process is often spoken of as 'phosphorolysis' or 'arsenolysis', respectively, and a number of enzyme names based on the pattern of phosphorylase have come into use. These names are not suitable for a systematic nomenclature, because there is no reason to single out these particular enzymes from the other transferases, and it is better to regard them simply as Y-transferases.
In the above reaction, the group transferred is usually exchanged, at least formally, for hydrogen, so that the equation could more strictly be written as:
X-Y + Z-H = X-H + Z-Y.
Another problem is posed in enzyme-catalysed transaminations, where the -NH2 group and -H is transferred to a compound containing a carbonyl group in exchange for the =O of that group, according to the general equation:
R1-CH(-NH2)-R2 + R3-CO-R4 = R1-CO-R2 + R3-CH(-NH2)-R4.
The reaction can be considered formally as oxidative deamination of the donor (e.g. amino acid) linked with reductive amination of the acceptor (e.g. oxo acid), and the transaminating enzymes (pyridoxal-phosphate proteins) might be classified as oxidoreductases. However, the unique distinctive feature of the reaction is the transfer of the amino group (by a well-established mechanism involving covalent substrate-coenzyme intermediates), which justified allocation of these enzymes among the transferases as a special subclass (EC 2.6.1, transaminases).
The second figure in the code number of transferases indicates the group transferred; a one-carbon group in EC 2.1, an aldehydic or ketonic group in EC 2.2, an acyl group in EC 2.3 and so on.
The third figure gives further information on the group transferred; e.g. subclass EC 2.1 is subdivided into methyltransferases (EC 2.1.1), hydroxymethyl- and formyltransferases (EC 2.1.2) and so on; only in subclass EC 2.7, does the third figure indicate the nature of the acceptor group.
Class 3. Hydrolases
These enzymes catalyse the hydrolytic cleavage of C-O, C-N, C-C and some other bonds, including phosphoric anhydride bonds. Although the systematic name always includes hydrolase, the common name is, in many cases, formed by the name of the substrate with the suffix -ase. It is understood that the name of the substrate with this suffix means a hydrolytic enzyme.
A number of hydrolases acting on ester, glycosyl, peptide, amide or other bonds are known to catalyse not only hydrolytic removal of a particular group from their substrates, but likewise the transfer of this group to suitable acceptor molecules. In principle, all hydrolytic enzymes might be classified as transferases, since hydrolysis itself can be regarded as transfer of a specific group to water as the acceptor. Yet, in most cases, the reaction with water as the acceptor was discovered earlier and is considered as the main physiological function of the enzyme. This is why such enzymes are classified as hydrolases rather than as transferases.
Some hydrolases (especially some of the esterases and glycosidases) pose problems because they have a very wide specificity and it is not easy to decide if two preparations described by different authors (perhaps from different sources) have the same catalytic properties, or if they should be listed under separate entries. An example is vitamin A esterase (formerly EC 3.1.1.12, now believed to be identical with EC 3.1.1.1). To some extent the choice must be arbitrary; however, separate entries should be given only when the specificities are sufficiently different.
Another problem is that proteinases have 'esterolytic' action; they usually hydrolyse ester bonds in appropriate substrates even more rapidly than natural peptide bonds. In this case, classification among the peptide hydrolases is based on historical priority and presumed physiological function.
The second figure in the code number of the hydrolases indicates the nature of the bond hydrolysed; EC 3.1 are the esterases; EC 3.2 the glycosylases, and so on.
The third figure normally specifies the nature of the substrate, e.g. in the esterases the carboxylic ester hydrolases (EC 3.1.1), thiolester hydrolases (EC 3.1.2), phosphoric monoester hydrolases (EC 3.1.3); in the glycosylases the O-glycosidases (EC 3.2.1), N-glycosylases (EC 3.2.2), etc. Exceptionally, in the case of the peptidyl-peptide hydrolases the third figure is based on the catalytic mechanism as shown by active centre studies or the effect of pH.
Class 4. Lyases
Lyases are enzymes cleaving C-C, C-O, C-N, and other bonds by elimination, leaving double bonds or rings, or conversely adding groups to double bonds. The systematic name is formed according to the pattern substrate group-lyase. The hyphen is an important part of the name, and to avoid confusion should not be omitted, e.g. hydro-lyase not 'hydrolyase'. In the common names, expressions like decarboxylase, aldolase, dehydratase (in case of elimination of CO2, aldehyde, or water) are used. In cases where the reverse reaction is much more important, or the only one demonstrated, synthase (not synthetase) may be used in the name. Various subclasses of the lyases include pyridoxal-phosphate enzymes that catalyse the elimination of a β- or γ-substituent from an α-amino acid followed by a replacement of this substituent by some other group. In the overall replacement reaction, no unsaturated end-product is formed; therefore, these enzymes might formally be classified as alkyl-transferases (EC 2.5.1...). However, there is ample evidence that the replacement is a two-step reaction involving the transient formation of enzyme-bound α,β- (or β,γ-)unsaturated amino acids. According to the rule that the first reaction is indicative for classification, these enzymes are correctly classified as lyases. Examples are tryptophan synthase (EC 4.2.1.20) and cystathionine β-synthase (EC 4.2.1.22).
The second figure in the code number indicates the bond broken: EC 4.1 is carbon-carbon lyases, EC 4.2 carbon-oxygen lyases and so on.
The third figure gives further information on the group eliminated (e.g. CO2 in EC 4.1.1, H2O in EC 4.2.1).
Class 5. Isomerases
These enzymes catalyse geometric or structural changes within one molecule. According to the type of isomerism, they may be called racemases, epimerases, cis-trans-isomerases, isomerases, tautomerases, mutases or cycloisomerases.
In some cases, the interconversion in the substrate is brought about by an intramolecular oxidoreduction (EC 5.3); since hydrogen donor and acceptor are the same molecule, and no oxidized product appears, they are not classified as oxidoreductases, even though they may contain firmly bound NAD(P)+.
The subclasses are formed according to the type of isomerism, and the sub-subclasses according to the type of substrate.
Class 6. Ligases
Ligases are enzymes catalysing the joining together of two molecules coupled with the hydrolysis of a diphosphate bond in ATP or a similar triphosphate. The systematic names are formed on the system X:Y ligase (ADP-forming). In earlier editions of the list the term synthetase was used for the common names. Many authors have been confused by the use of the terms synthetase (used only for Group 6) and synthase (used throughout the list when it is desired to emphasize the synthetic nature of the reaction). Consequently NC-IUB decided in 1983 to abandon the use of synthetase for common names, and to replace them with names of the type X-Y ligase. In a few cases in Group 6, where the reaction is more complex or there is a common name for the product, a synthase name is used (e.g. EC 6.3.2.11 and EC 6.3.5.1).
It is recommended that if the term synthetase is used by authors, it should continue to be restricted to the ligase group.
The second figure in the code number indicates the bond formed: EC 6.1 for C-O bonds (enzymes acylating tRNA), EC 6.2 for C-S bonds (acyl-CoA derivatives), etc. Sub-subclasses are only in use in the C-N ligases.
In a few cases it is necessary to use the word other in the description of subclasses and sub-subclasses. They have been provisionally given the figure 99, in order to leave space for new subdivisions.
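The four-element numbering scheme above lends itself to a small parser. This Python sketch (names are illustrative, not from the text) maps the first element to its class and splits out the remaining figures:

```python
# A minimal parser for EC numbers following the four-element scheme above.
EC_CLASSES = {
    1: "Oxidoreductases", 2: "Transferases", 3: "Hydrolases",
    4: "Lyases", 5: "Isomerases", 6: "Ligases",
}

def parse_ec(ec: str) -> dict:
    """Split an EC number such as 'EC 4.2.1.20' into its four elements."""
    parts = ec.removeprefix("EC").strip().split(".")
    cls, subclass, subsubclass, serial = (int(p) for p in parts)
    return {
        "class": EC_CLASSES[cls],
        "subclass": subclass,        # e.g. bond broken / group transferred
        "sub-subclass": subsubclass, # e.g. group eliminated / acceptor type
        "serial": serial,            # position within the sub-subclass
    }

info = parse_ec("EC 4.2.1.20")  # tryptophan synthase
assert info["class"] == "Lyases"
assert info["subclass"] == 2    # carbon-oxygen lyases
```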
*********** 
3.1.2. Properties of enzymes
3.1.3. Mode of enzyme action
Enzymes are generally globular proteins and range from just 62 amino acid residues in size, for the monomer of 4-oxalocrotonate tautomerase, to over 2,500 residues in the animal fatty acid synthase. A small number of RNA-based biological catalysts exist, with the most common being the ribosome; these are referred to as either RNA-enzymes or ribozymes. The activities of enzymes are determined by their three-dimensional structure. However, although structure does determine function, predicting a novel enzyme's activity just from its structure is a very difficult problem that has not yet been solved.
Most enzymes are much larger than the substrates they act on, and only a small portion of the enzyme (around 3–4 amino acids) is directly involved in catalysis. The region that contains these catalytic residues, binds the substrate, and then carries out the reaction is known as the active site. Enzymes can also contain sites that bind cofactors, which are needed for catalysis. Some enzymes also have binding sites for small molecules, which are often direct or indirect products or substrates of the reaction catalyzed. This binding can serve to increase or decrease the enzyme's activity, providing a means for feedback regulation.
Like all proteins, enzymes are long, linear chains of amino acids that fold to produce a three-dimensional product. Each unique amino acid sequence produces a specific structure, which has unique properties. Individual protein chains may sometimes group together to form a protein complex. Most enzymes can be denatured—that is, unfolded and inactivated—by heating or chemical denaturants, which disrupt the three-dimensional structure of the protein. Depending on the enzyme, denaturation may be reversible or irreversible. Structures of enzymes in complex with substrates or substrate analogs during a reaction may be obtained using Time resolved crystallography methods.
Specificity
Enzymes are usually very specific as to which reactions they catalyze and the substrates that are involved in these reactions. Complementary shape, charge and hydrophilic/hydrophobic characteristics of enzymes and substrates are responsible for this specificity. Enzymes can also show impressive levels of stereospecificity, regioselectivity and chemoselectivity.
Some of the enzymes showing the highest specificity and accuracy are involved in the copying and expression of the genome. These enzymes have "proof-reading" mechanisms. Here, an enzyme such as DNA polymerase catalyzes a reaction in a first step and then checks that the product is correct in a second step. This two-step process results in average error rates of less than 1 error in 100 million reactions in high-fidelity mammalian polymerases. Similar proofreading mechanisms are also found in RNA polymerase, aminoacyl tRNA synthetases and ribosomes.
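To put the quoted error rate in perspective, a back-of-the-envelope calculation follows (the genome size used here is an assumed illustrative figure, not from the text):

```python
# Expected misincorporations per genome copy at the quoted fidelity.
error_rate = 1e-8        # less than 1 error in 100 million reactions (quoted)
genome_bp = 3e9          # assumed human-sized genome, for illustration only
errors_per_copy = error_rate * genome_bp
print(errors_per_copy)   # ~30 expected misincorporations per replication
```

Even at this fidelity, tens of errors per replication would remain, which is why additional repair systems act after the polymerase's own proofreading.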
Some enzymes that produce secondary metabolites are described as promiscuous, as they can act on a relatively broad range of different substrates. It has been suggested that this broad substrate specificity is important for the evolution of new biosynthetic pathways.
"Lock and key" model
Enzymes are very specific, and it was suggested by the Nobel laureate organic chemist Emil Fischer in 1894 that this was because both the enzyme and the substrate possess specific complementary geometric shapes that fit exactly into one another.[28] This is often referred to as "the lock and key" model. However, while this model explains enzyme specificity, it fails to explain the stabilization of the transition state that enzymes achieve.

Diagrams to show the induced fit hypothesis of enzyme action.

In 1958, Daniel Koshland suggested a modification to the lock and key model: since enzymes are rather flexible structures, the active site is continually reshaped by interactions with the substrate as the substrate interacts with the enzyme. As a result, the substrate does not simply bind to a rigid active site; the amino acid side chains which make up the active site are molded into the precise positions that enable the enzyme to perform its catalytic function. In some cases, such as glycosidases, the substrate molecule also changes shape slightly as it enters the active site. The active site continues to change until the substrate is completely bound, at which point the final shape and charge is determined.
Mechanisms
Enzymes can act in several ways:
    Lowering the activation energy by creating an environment in which the transition state is stabilized (e.g. straining the shape of a substrate—by binding the transition-state conformation of the substrate/product molecules, the enzyme distorts the bound substrate(s) into their transition state form, thereby reducing the amount of energy required to complete the transition).
    Lowering the energy of the transition state, but without distorting the substrate, by creating an environment with the opposite charge distribution to that of the transition state.
    Providing an alternative pathway. For example, temporarily reacting with the substrate to form an intermediate ES complex, which would be impossible in the absence of the enzyme.
    Reducing the reaction entropy change by bringing substrates together in the correct orientation to react. Considering ΔH‡ alone overlooks this effect.
    Increases in temperature speed up reactions, so warming helps an enzyme turn over substrate and form product faster. If heated too much, however, the enzyme's shape deteriorates, and depending on the enzyme the original shape may or may not be regained when the temperature returns to normal. Some enzymes, such as thermolabile enzymes, work best at low temperatures.
Interestingly, this entropic effect involves destabilization of the ground state and its contribution to catalysis is relatively small.
Transition State Stabilization
Understanding the origin of the reduction in ΔG‡ requires finding out how an enzyme can stabilize its transition state more than the transition state of the uncatalyzed reaction is stabilized. The most effective way of achieving large stabilization appears to be the use of electrostatic effects, in particular a relatively fixed polar environment that is oriented toward the charge distribution of the transition state. Such an environment does not exist in the uncatalyzed reaction in water.
Dynamics and function
The internal dynamics of enzymes are linked to their mechanism of catalysis.[36][37][38] Internal dynamics are the movements of parts of the enzyme's structure, such as individual amino acid residues, a group of amino acids, or even an entire protein domain. These movements occur at various time-scales ranging from femtoseconds to seconds. Networks of protein residues throughout an enzyme's structure can contribute to catalysis through dynamic motions. Protein motions are vital to many enzymes, but whether small and fast vibrations or larger and slower conformational movements are more important depends on the type of reaction involved. However, although these movements are important in binding and releasing substrates and products, it is not clear if protein movements help to accelerate the chemical steps in enzymatic reactions. These new insights also have implications for understanding allosteric effects and developing new drugs.
Allosteric modulation



Allosteric transition of an enzyme between R and T states, stabilized by an agonist, an inhibitor, and a substrate.

Allosteric sites are sites on the enzyme that bind to molecules in the cellular environment. The sites form weak, noncovalent bonds with these molecules, causing a change in the conformation of the enzyme. This change in conformation translates to the active site; this then affects the reaction rate of the enzyme. Allosteric interactions can both inhibit and activate enzymes and are a common way that enzymes are controlled in the body.
*************
3.1.4 Nucleic acids
3.1.4.1 DNA Structure and properties
     DNA is a long polymer made from repeating units called nucleotides. The DNA chain is 22 to 26 Angstroms wide (2.2 to 2.6 nanometres), and one nucleotide unit is 3.3 Å (0.33 nm) long. Although each individual repeating unit is very small, DNA polymers can be very large molecules containing millions of nucleotides. For instance, the largest human chromosome, chromosome number 1, is approximately 220 million base pairs long.
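The figures above allow a quick back-of-envelope check of how long such a molecule actually is. A small Python sketch, using only the numbers given in the text (0.33 nm per nucleotide unit, 220 million base pairs for chromosome 1):

```python
# Back-of-envelope length of human chromosome 1, using the figures
# above: 220 million base pairs at 0.33 nm per nucleotide unit.

nm_per_bp = 0.33
base_pairs = 220_000_000   # approximate size of human chromosome 1

length_nm = base_pairs * nm_per_bp
length_cm = length_nm / 1e7    # 1 cm = 10,000,000 nm
print(round(length_cm, 1))     # about 7.3 cm of DNA in a single chromosome
```

Several centimetres of DNA packed into a microscopic nucleus is a useful way to appreciate why DNA must be so tightly coiled and organized.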
DNA, short for deoxyribonucleic acid, is the molecule that contains the genetic code of all organisms. This includes animals, plants, and bacteria. It is also used by some viruses, which are not living organisms but use DNA to infect organisms. DNA is found in each cell of the organism and tells the cell what proteins to make. A cell's proteins determine its function. DNA is inherited by children from their parents. This is why children share traits with their parents, such as skin, hair and eye color. The DNA in a person is a combination of some of the DNA from each of his or her parents.
Structure of DNA


Chemical structure of DNA. The phosphate groups are yellow, the deoxyribose sugars are orange, and the nitrogen bases are green, purple, pink and blue. The atoms shown are: P=phosphorus, O=oxygen, N=nitrogen, H=hydrogen.
      DNA is shaped like a double helix, which is like a ladder twisted into a spiral. Each "leg" of the "ladder" is a line of nucleotides. A nucleotide is a molecule made up of deoxyribose (a kind of sugar with 5 carbon atoms), a phosphate group (made of phosphorus and oxygen), and a nitrogenous base. DNA is made of four types of nitrogenous base:
•    adenine (A)
•    thymine (T)
•    cytosine (C)
•    guanine (G)
  The "rungs" of the DNA ladder are each made of two bases, one base coming from each "leg". The bases connect in the middle: 'A' pairs with 'T', and 'C' pairs with 'G'. The bases are held together by hydrogen bonds.
     Adenine (A) and thymine (T) can pair up because they make two hydrogen bonds, and cytosine (C) and guanine (G) pair up to make three hydrogen bonds. Although the bases are always in fixed pairs, the pairs can come in any order. This way, DNA can write "codes" out of the "letters" that are the bases. These "codes" contain the message that tells the cell what to do.
Grooves
Twin helical strands form the DNA backbone. Another double helix may be found by tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not directly opposite each other, the grooves are unequally sized. One groove, the major groove, is 22 Å wide and the other, the minor groove, is 12 Å wide. The narrowness of the minor groove means that the edges of the bases are more accessible in the major groove. As a result, proteins like transcription factors that can bind to specific sequences in double-stranded DNA usually make contacts to the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell, but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA is twisted back into the ordinary B form.
Base pairing
Each type of base on one strand forms a bond with just one type of base on the other strand. This is called complementary base pairing. Here, purines form hydrogen bonds to pyrimidines, with A bonding only to T, and C bonding only to G. This arrangement of two nucleotides binding together across the double helix is called a base pair. As hydrogen bonds are not covalent, they can be broken and rejoined relatively easily. The two strands of DNA in a double helix can therefore be pulled apart like a zipper, either by a mechanical force or high temperature. As a result of this complementarity, all the information in the double-stranded sequence of a DNA helix is duplicated on each strand, which is vital in DNA replication. Indeed, this reversible and specific interaction between complementary base pairs is critical for all the functions of DNA in living organisms.
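The complementarity rule described above (A pairs with T, C pairs with G, and the two strands run antiparallel) is simple enough to state as a lookup table. A minimal Python sketch:

```python
# The base-pairing rule (A<->T, C<->G) as a lookup table.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the complementary strand, base by base."""
    return "".join(PAIR[base] for base in strand)

def reverse_complement(strand):
    """The two strands of the helix run antiparallel, so the partner
    strand is read in the opposite direction."""
    return complement(strand)[::-1]

print(complement("ATGC"))          # TACG
print(reverse_complement("ATGC"))  # GCAT
```

Note that taking the complement twice returns the original sequence, which is exactly the duplication of information on each strand that makes replication possible.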
Sense and antisense
A DNA sequence is called "sense" if its sequence is the same as that of a messenger RNA copy that is translated into protein. The sequence on the opposite strand is called the "antisense" sequence. Both sense and antisense sequences can exist on different parts of the same strand of DNA (i.e. both strands contain both sense and antisense sequences). In both prokaryotes and eukaryotes, antisense RNA sequences are produced, but the functions of these RNAs are not entirely clear. One proposal is that antisense RNAs are involved in regulating gene expression through RNA-RNA base pairing.
Supercoiling
DNA can be twisted like a rope in a process called DNA supercoiling. With DNA in its "relaxed" state, a strand usually circles the axis of the double helix once every 10.4 base pairs, but if the DNA is twisted the strands become more tightly or more loosely wound. If the DNA is twisted in the direction of the helix, this is positive supercoiling, and the bases are held more tightly together. If they are twisted in the opposite direction, this is negative supercoiling, and the bases come apart more easily. In nature, most DNA has slight negative supercoiling that is introduced by enzymes called topoisomerases. These enzymes are also needed to relieve the twisting stresses introduced into DNA strands during processes such as transcription and DNA replication.
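As a quantitative aside (this is the standard bookkeeping of DNA topology, not spelled out in the text above): for a closed DNA circle, the linking number Lk equals twist plus writhe, Lk = Tw + Wr, and supercoiling corresponds to a deviation ΔLk from the relaxed value. A Python sketch under those assumptions, with illustrative numbers:

```python
# Sketch of supercoiling bookkeeping, assuming the standard relation
# Lk = Tw + Wr. Relaxed B-DNA makes one turn per ~10.4 bp, as noted
# above. The plasmid size and turn count below are illustrative.

def relaxed_linking_number(base_pairs, bp_per_turn=10.4):
    """Turns of one strand around the axis in the relaxed state."""
    return base_pairs / bp_per_turn

lk0 = relaxed_linking_number(5200)   # ~500 turns for a 5,200 bp circle
delta_lk = -25                       # topoisomerase removes 25 turns
sigma = delta_lk / lk0               # superhelical density
print(round(lk0))        # 500
print(round(sigma, 3))   # -0.05 -> slight negative supercoiling
```

A negative superhelical density of a few percent is the "slight negative supercoiling" the text describes for most DNA in nature.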
Alternate DNA structures
DNA exists in many possible conformations that include the A-DNA, B-DNA, and Z-DNA forms, although only B-DNA and Z-DNA have been directly observed in functional organisms.
The first published reports of A-DNA X-ray diffraction patterns (and also of B-DNA) used analyses based on Patterson transforms, which provided only a limited amount of structural information for oriented fibers of DNA. An alternative analysis was proposed by Wilkins et al. in 1953, which interpreted the in vivo B-DNA X-ray diffraction/scattering patterns of highly hydrated DNA fibers in terms of squares of Bessel functions. In the same journal, Watson and Crick presented their molecular-modeling analysis of the DNA X-ray diffraction patterns to suggest that the structure was a double helix. In vivo B-DNA is not a single well-defined conformation but a family of related DNA conformations that occur at the high hydration levels present in living cells; the corresponding X-ray diffraction and scattering patterns are characteristic of molecular paracrystals with a significant degree of disorder.
Compared to B-DNA, the A-DNA form is a wider right-handed spiral, with a shallow, wide minor groove and a narrower, deeper major groove. The A form occurs under non-physiological conditions in partially dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, as well as in enzyme-DNA complexes. Segments of DNA where the bases have been chemically modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form. These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in the regulation of transcription.
Quadruplex structures
At the ends of the linear chromosomes are specialized regions of DNA called telomeres. The main function of these regions is to allow the cell to replicate chromosome ends using the enzyme telomerase, as the enzymes that normally replicate DNA cannot copy the extreme 3′ ends of chromosomes. These specialized chromosome caps also help protect the DNA ends, and stop the DNA repair systems in the cell from treating them as damage to be corrected. In human cells, telomeres are usually lengths of single-stranded DNA containing several thousand repeats of a simple TTAGGG sequence.
These guanine-rich sequences may stabilize chromosome ends by forming structures of stacked sets of four-base units, rather than the usual base pairs found in other DNA molecules. Here, four guanine bases form a flat plate, and these flat four-base units then stack on top of each other to form a stable ''G-quadruplex'' structure. These structures are stabilized by hydrogen bonding between the edges of the bases and chelation of a metal ion in the centre of each four-base unit. Other structures can also be formed, with the central set of four bases coming from either a single strand folded around the bases, or several different parallel strands, each contributing one base to the central structure.
In addition to these stacked structures, telomeres also form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle stabilized by telomere-binding proteins. At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop. Branched DNA can be used in nanotechnology to construct geometric shapes.
************* 
3.1.4.2 DNA Synthesis
The purpose of DNA is to hold information for every trait and body part.  It also holds information on how to build new materials, such as proteins.
•    Proteins are built in the cytoplasm of the cell on an organelle called the ribosome.
•    The information is passed from DNA through a series of steps until it reaches the ribosome.
The protein synthesis overview is:
•    DNA → mRNA → tRNA → amino acids → proteins
•    The first step is called transcription.
o    Transcription copies genetic information from DNA into RNA.
•    The second step is called translation.
o    Translation translates the genetic code into a series of amino acids.
In transcription, a portion of DNA is selected and its information is transcribed into RNA.
•    As the DNA unzips and unwinds, RNA polymerase begins at a region called the promoter.
•    RNA polymerase lines up RNA bases with their complementary DNA bases (transcribes the code).
•    Instead of using Thymine, RNA has Uracil (U).
•    This happens in the nucleus.
In translation, mRNA moves from the nucleus to the cytoplasm and attaches to a ribosome.
•    Codons (groups of three bases) are read to determine an amino acid.
•    Each codon codes for a different amino acid.
•    Translation starts with a start codon and ends with a stop codon.
•    Ribosomes use tRNA to translate the message of the mRNA.
•    tRNA has specific anti-codons on one end and the corresponding amino acid on the other end.
o    Anti-codons bind the codons, and the attached amino acid joins a chain.
Step 1: DNA Transcription
Protein synthesis begins in the cell's nucleus when the gene encoding a protein is copied into RNA. Genes, in the form of DNA, are embedded in the cell's chromosomes. The process of transferring the gene's DNA into RNA is called transcription. Transcription amplifies the gene's information by creating many copies of RNA that can each act as a template for protein synthesis. The RNA copy of the gene is called the mRNA.
DNA and RNA are both constructed as chains of nucleotides. However, RNA differs from DNA by the substitution of uracil (U) for thymine (T). Also, because only one strand is needed when synthesizing proteins, mRNA naturally exists in single-stranded form.


 Step 2: RNA Translation
After the mRNA has been transported to the rough endoplasmic reticulum, it is fed into the ribosomal translation machinery. The ribosome begins to read the mRNA sequence from the 5' end to the 3' end. To convert the mRNA into protein, tRNA is used to read the mRNA sequence, 3 nucleotides at a time.
Amino acids are represented by codons, which are 3-nucleotide RNA sequences. The mRNA sequence is matched three nucleotides at a time to a complementary set of three nucleotides in the anticodon region of the corresponding tRNA molecule. Opposite the anticodon region of each tRNA, an amino acid is attached, and as the mRNA is read off, the amino acids on each tRNA are joined together through peptide bonds.
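The two steps described above — transcription (substituting U for T) and translation (reading codons three bases at a time until a stop codon) — can be sketched in a few lines of Python. The codon table below is only a tiny excerpt of the real genetic code, just enough for the example:

```python
# Sketch of transcription (T -> U) and translation (reading codons).
# CODON_TABLE is a small excerpt of the real genetic code.

CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(dna_coding_strand):
    """mRNA has the same sequence as the coding strand, with U for T."""
    return dna_coding_strand.replace("T", "U")

def translate(mrna):
    """Read codons three bases at a time, stopping at a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE[mrna[i:i + 3]]
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

mrna = transcribe("ATGTTTGGCAAATAA")
print(mrna)             # AUGUUUGGCAAAUAA
print(translate(mrna))  # ['Met', 'Phe', 'Gly', 'Lys']
```

The start codon AUG and the stop codons UAA/UAG/UGA used here are the standard assignments mentioned later in the section on coding regions.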



DNA Replication.
The process of making an identical copy of a section of duplex (double-stranded) DNA, using existing DNA as a template for the synthesis of new DNA strands. In humans and other eukaryotes, replication occurs in the cell nucleus.  Before a cell can divide, it must duplicate its entire DNA. In eukaryotes, this occurs during S phase of the cell cycle.

The Biochemical Reactions
DNA replication begins with the "unzipping" of the parent molecule as the hydrogen bonds between the base pairs are broken. Once exposed, the sequence of bases on each of the separated strands serves as a template to guide the insertion of a complementary set of bases on the strand being synthesized. The new strands are assembled from deoxynucleoside triphosphates. Each incoming nucleotide is covalently linked to the "free" 3' carbon atom on the pentose as the second and third phosphates are removed together as a molecule of pyrophosphate (PPi). The nucleotides are assembled in the order that complements the order of bases on the strand serving as the template. Thus each C on the template guides the insertion of a G on the new strand, each G a C, and so on. When the process is complete, two DNA molecules have been formed, identical to each other and to the parent molecule.

The Enzymes
A portion of the double helix is unwound by a helicase. A molecule of a DNA polymerase binds to one strand of the DNA and begins moving along it in the 3' to 5' direction, using it as a template for assembling a leading strand of nucleotides and reforming a double helix. In eukaryotes, this molecule is called DNA polymerase delta (δ).

Because DNA synthesis can only occur 5' to 3', a molecule of a second type of DNA polymerase (epsilon, ε, in eukaryotes) binds to the other template strand as the double helix opens. This molecule must synthesize discontinuous segments of polynucleotide (called Okazaki fragments). Another enzyme, DNA ligase I then stitches these together into the lagging strand.

DNA Replication is Semi conservative

When the replication process is complete, two DNA molecules — identical to each other and identical to the original — have been produced. Each strand of the original molecule has remained intact as it served as the template for the synthesis of a complementary strand.

This mode of replication is described as semi-conservative: one-half of each new molecule of DNA is old; one-half new. Watson and Crick had suggested that this was the way the DNA would turn out to be replicated. Proof of the model came from the experiments of Meselson and Stahl.

Steps of DNA Replication
1) The first major step of DNA replication is the breaking of the hydrogen bonds between the bases of the two antiparallel strands. The unwinding of the two strands is the starting point. The splitting happens at places in the chains that are rich in A–T, because there are only two hydrogen bonds between adenine and thymine (there are three hydrogen bonds between cytosine and guanine). Helicase is the enzyme that splits the two strands. The initiation point where the splitting starts is called the "origin of replication". The structure that is created is known as the "replication fork".
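The reason A–T-rich regions split first can be made concrete by simply counting hydrogen bonds per base pair. A small Python sketch:

```python
# Counting hydrogen bonds per base pair: 2 for an A-T pair, 3 for a
# G-C pair. A-T-rich regions have fewer bonds and so separate first.

BONDS = {"A": 2, "T": 2, "G": 3, "C": 3}

def bonds_per_base_pair(strand):
    """Average hydrogen bonds per base pair for one strand's sequence."""
    return sum(BONDS[b] for b in strand) / len(strand)

print(bonds_per_base_pair("ATATAT"))  # 2.0 -> easiest to split apart
print(bonds_per_base_pair("GCGCGC"))  # 3.0 -> hardest to split apart
```

This is the same reason DNA melting temperature rises with G–C content.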
 

2) One of the most important steps of DNA Replication is the binding of RNA Primase in the initiation point of the 3'-5' parent chain. RNA Primase can attract RNA nucleotides which bind to the DNA nucleotides of the 3'-5' strand due to the hydrogen bonds between the bases. RNA nucleotides are the primers (starters) for the binding of DNA nucleotides.

3) The elongation process is different for the 5'-3' and 3'-5' template.
a) 5'-3' Template: The daughter strand that uses a 5'-3' template is called the leading strand, because DNA polymerase δ can "read" the template and continuously add nucleotides (complementary to the nucleotides of the template, for example adenine opposite thymine).

b) 3'-5' Template: The 3'-5' template cannot be "read" continuously by DNA polymerase. The replication of this template is more complicated, and the new strand is called the lagging strand. On the lagging strand, RNA primase adds further RNA primers; DNA polymerase reads the template and extends the strand in short bursts. The DNA segments synthesized between two RNA primers are called "Okazaki fragments". The RNA primers are necessary because DNA polymerase can only add nucleotides to the 3' end of an existing strand. The daughter strand is elongated with the binding of more DNA nucleotides.



4) On the lagging strand, DNA polymerase I (an exonuclease) reads the fragments and removes the RNA primers. The gaps are then closed by the action of DNA polymerase (which adds complementary nucleotides into the gaps) and DNA ligase (which forms the missing phosphodiester bonds in the sugar–phosphate backbone). Each new double helix consists of one old and one new chain. This is what we call semiconservative replication.




5) The last step of DNA replication is termination, which happens when DNA polymerase reaches the end of the strands. In the last section of the lagging strand, once the final RNA primer is removed, DNA polymerase cannot fill the remaining gap (because there is no primer ahead of it). So the end of the parental strand where the last primer bound is not replicated. These ends of linear (chromosomal) DNA consist of noncoding DNA containing repeat sequences and are called telomeres. As a result, a part of the telomere is lost in every cycle of DNA replication.
6) DNA replication is not complete until repair mechanisms fix possible errors introduced during replication. Enzymes such as nucleases remove the wrong nucleotides, and DNA polymerase fills the gaps.
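The telomere shortening described in step 5 can be sketched as a simple running total. The loss per cycle below is an illustrative assumed number; actual loss per cell division varies:

```python
# Sketch of telomere shortening: each replication cycle fails to copy
# the final primer's worth of the lagging strand, so the telomere
# shrinks. The loss per cycle here is an assumed illustrative value.

def replicate(telomere_bp, loss_per_cycle=100, cycles=1):
    """Telomere length remaining after a number of replication cycles."""
    return max(0, telomere_bp - loss_per_cycle * cycles)

print(replicate(10_000, cycles=50))  # 5000 bp left after 50 divisions
```

When the running total reaches zero the chromosome end is no longer protected, which is why telomerase activity matters in cells that divide many times.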
**************
3.1.4.3 Nucleotides 
Nucleotides are nitrogen containing organic compounds, which form the monomers of nucleic acids that are involved in the information transfer system of the cells. They are also involved in the mechanism of energy transfer in cells.
A nucleotide is a compound containing carbon, hydrogen, oxygen, nitrogen, and phosphorus. A molecule of a nucleotide is in turn composed of three smaller molecules: a phosphate (P), a sugar (S), and a nitrogen base (N).
•    The phosphate group is represented by phosphoric acid (H3PO4).
•    The sugar molecule in the nucleotide is a 5-carbon pentose sugar. It is represented by either ribose sugar (C5H10O5) or deoxyribose sugar (C5H10O4). Both the sugars have a furanose ring structure.
•    The nitrogen base is represented by compounds having nitrogen and carbon in the ring structure. Two types of nitrogen bases occur, namely
a) Purines, which have a double ring structure and
b) Pyrimidines, which have a single ring structure.

 
Components of Nucleic Acids

Purines are of two types: adenine (A) and guanine (G). Pyrimidines are of three types: cytosine (C), thymine (T), and uracil (U).


Structures of Nitrogenous Bases in Nucleic Acids
The nitrogen base molecule is attached to the sugar molecule by a glycosidic bond. A combination of a nitrogen base with a sugar is called a nucleoside. Nucleosides involving ribose sugars are called ribonucleosides. Similarly, nucleosides involving deoxyribose sugars are called deoxyribonucleosides.
A nucleoside combines with a phosphate group to form a compound called a nucleotide. Nucleotides formed from ribonucleosides are called ribonucleotides; they form the monomers of ribonucleic acid (RNA). Nucleotides formed from deoxyribonucleosides are called deoxyribonucleotides; they form the monomers of deoxyribonucleic acid (DNA).

Formation of a Nucleotide
The nucleotides which form nucleic acids have only one phosphate group (monophosphates). Each of them can form a diphosphate and a triphosphate. Linking each additional phosphate group requires a large amount of energy, and the bonds that join the additional phosphate groups are called high-energy (energy-rich) bonds. Separating these additional phosphate groups from the nucleotides by enzymatic hydrolysis releases a correspondingly large amount of energy. Hence, these higher nucleotides (with one or two additional phosphates) are energy-rich compounds.
Adenine + Pentose Sugar → Adenosine (Adenine Nucleoside)
Adenosine + Phosphate → Adenylic Acid or Adenosine Monophosphate (AMP) (Adenine Nucleotide)
Adenosine Monophosphate + Phosphate → Adenosine Diphosphate (ADP)
Adenosine Diphosphate + Phosphate → Adenosine Triphosphate (ATP)
Similarly for the other nitrogen bases. By combining with other organic compounds or molecules, nucleotides form coenzymes, for example NAD (nicotinamide adenine dinucleotide) and FAD (flavin adenine dinucleotide). Some nucleotides, such as cyclic AMP, function as regulatory chemicals controlling diverse cellular functions.
***************
3.1.4.4 Different types of RNA
Ribonucleic acid (RNA) is a biologically important type of molecule that consists of a long chain of nucleotide units. Each nucleotide consists of a nitrogenous base, a ribose sugar, and a phosphate. RNA is very similar to DNA, but differs in a few important structural details: in the cell, RNA is usually single-stranded, while DNA is usually double-stranded; RNA nucleotides contain ribose while DNA contains deoxyribose (a type of ribose that lacks one oxygen atom); and RNA has the base uracil rather than thymine that is present in DNA. RNA is transcribed from DNA by enzymes called RNA polymerases and is generally further processed by other enzymes. RNA is central to protein synthesis. Here, a type of RNA called messenger RNA carries information from DNA to structures called ribosomes. These ribosomes are made from proteins and ribosomal RNAs, which come together to form a molecular machine that can read messenger RNAs and translate the information they carry into proteins.
Messenger RNA (mRNA)
Messenger RNA (mRNA) is a molecule of RNA encoding a chemical "blueprint" for a protein product. mRNA is transcribed from a DNA template, and carries coding information to the sites of protein synthesis: the ribosomes. Here, the nucleic acid polymer is translated into a polymer of amino acids: a protein. In mRNA as in DNA, genetic information is encoded in the sequence of nucleotides arranged into codons consisting of three bases each. Each codon encodes a specific amino acid, except the stop codons, which terminate protein synthesis. This process requires two other types of RNA: transfer RNA (tRNA) mediates recognition of the codon and provides the corresponding amino acid, while ribosomal RNA (rRNA) is the central component of the ribosome's protein manufacturing machinery.
Structure


1.  5' cap
The 5' cap is a modified guanine nucleotide added to the "front" (5' end) of the pre-mRNA using a 5'-5'-triphosphate linkage. This modification is critical for recognition and proper attachment of mRNA to the ribosome, as well as protection from 5' exonucleases. It may also be important for other essential processes, such as splicing and transport.
2. Coding regions
Coding regions are composed of codons, which are decoded and translated by the ribosome into proteins (in eukaryotes usually one protein, in prokaryotes usually several). Coding regions begin with the start codon and end with a stop codon. Generally, the start codon is an AUG triplet and the stop codon is UAA, UAG, or UGA. The coding regions tend to be stabilised by internal base pairs, which impedes degradation. In addition to being protein-coding, portions of coding regions may serve as regulatory sequences in the pre-mRNA, as exonic splicing enhancers or exonic splicing silencers.
3. Untranslated regions
Untranslated regions (UTRs) are sections of the mRNA before the start codon and after the stop codon that are not translated, termed the five prime untranslated region (5' UTR) and three prime untranslated region (3' UTR), respectively. These regions are transcribed with the coding region and thus are exonic as they are present in the mature mRNA. Several roles in gene expression have been attributed to the untranslated regions, including mRNA stability, mRNA localization, and translational efficiency. The ability of a UTR to perform these functions depends on the sequence of the UTR and can differ between mRNAs.
The stability of mRNAs may be controlled by the 5' UTR and/or 3' UTR due to varying affinity for RNA degrading enzymes called ribonucleases and for ancillary proteins that can promote or inhibit RNA degradation.
Translational efficiency, including sometimes the complete inhibition of translation, can be controlled by UTRs. Proteins that bind to either the 3' or 5' UTR may affect translation by influencing the ribosome's ability to bind to the mRNA. MicroRNAs bound to the 3' UTR also may affect translational efficiency or mRNA stability.
Cytoplasmic localization of mRNA is thought to be a function of the 3' UTR. Proteins that are needed in a particular region of the cell can actually be translated there; in such a case, the 3' UTR may contain sequences that allow the transcript to be localized to this region for translation.
Some of the elements contained in untranslated regions form a characteristic secondary structure when transcribed into RNA. These structural mRNA elements are involved in regulating the mRNA. Some, such as the SECIS element, are targets for proteins to bind. One class of mRNA element, the riboswitches, directly binds small molecules, changing their fold to modify levels of transcription or translation. In these cases, the mRNA regulates itself.
4. Poly (A) tail
The 3' poly (A) tail is a long sequence of adenine nucleotides (often several hundred) added to the 3' end of the pre-mRNA. This tail promotes export from the nucleus and translation, and protects the mRNA from degradation.
5. Monocistronic versus polycistronic mRNA
An mRNA molecule is said to be monocistronic when it contains the genetic information to translate only a single protein. This is the case for most of the eukaryotic mRNAs.[5][6] On the other hand, polycistronic mRNA carries the information of several genes, which are translated into several proteins. These proteins usually have a related function and are grouped and regulated together in an operon. Most of the mRNA found in bacteria and archaea is polycistronic. Dicistronic or bicistronic is the term used to describe an mRNA that encodes only two proteins.
6. mRNA circularization
In eukaryotes it is thought that mRNA molecules form circular structures due to an interaction between the cap binding complex and poly (A)-binding protein. Circularization is thought to promote recycling of ribosomes on the same message leading to efficient translation.
**************** 
Ribosomal RNA (rRNA) 
Ribosomal ribonucleic acid (rRNA) is the RNA component of the ribosome, the protein manufacturing machinery of all living cells. Ribosomal RNA provides a mechanism for decoding mRNA into amino acids and interacts with tRNAs during translation by providing peptidyl transferase activity. The tRNAs bring the necessary amino acids corresponding to the appropriate mRNA codon.
Inside Ribosome

The ribosomal RNAs form two subunits, the large subunit (LSU) and the small subunit (SSU). mRNA is sandwiched between the small and large subunits, and the ribosome catalyzes the formation of a peptide bond between the two amino acids carried by the tRNAs in its A and P sites.


A ribosome also has 3 binding sites called A, P, and E.
    The A site in the ribosome binds to an aminoacyl-tRNA (a tRNA bound to an amino acid).
    The amino (NH2) group of the aminoacyl-tRNA, which contains the new amino acid, attacks the ester linkage of peptidyl-tRNA (contained within the P site), which contains the last amino acid of the growing chain, forming a new peptide bond. This reaction is catalyzed by peptidyl transferase.
    The tRNA that was holding the last amino acid is moved to the E site, and what used to be the aminoacyl-tRNA becomes the peptidyl-tRNA.
A single mRNA can be translated simultaneously by multiple ribosomes.
Prokaryotic and eukaryotic ribosomes
Both prokaryotic and eukaryotic ribosomes can be broken down into two subunits (the S in 16S represents Svedberg units):
Type           Size    Large subunit          Small subunit
prokaryotic    70S     50S (5S, 23S)          30S (16S)
eukaryotic     80S     60S (5S, 5.8S, 28S)    40S (18S)

Translation
Translation is the process by which ribosomes synthesize proteins from an mRNA copy of the DNA template. In prokaryotes, one component of the small subunit (the 16S rRNA) base-pairs with a complementary sequence upstream of the start codon in the mRNA.
Importance of rRNA
Ribosomal RNA characteristics are important in medicine and in evolution.
    rRNA is the target of several clinically relevant antibiotics (chloramphenicol, erythromycin, kasugamycin, micrococcin, paromomycin, spectinomycin, streptomycin, and thiostrepton) as well as of ribosome-inactivating toxins such as ricin and sarcin.
    rRNA is the most conserved (least variable) gene in all cells. For this reason, genes that encode the rRNA (rDNA) are sequenced to identify an organism's taxonomic group, to infer relationships between groups, and to estimate rates of species divergence. Many thousands of rRNA sequences are therefore known and stored in specialized databases such as RDP-II and SILVA.
************ 
Transfer RNA (tRNA)
Transfer RNA (tRNA) is a small RNA molecule (usually about 74-95 nucleotides) that transfers a specific active amino acid to a growing polypeptide chain at the ribosomal site of protein synthesis during translation. It has a 3' terminal site for amino acid attachment. This covalent linkage is catalyzed by an aminoacyl tRNA synthetase. It also contains a three base region called the anticodon that can base pair to the corresponding three base codon region on mRNA. Each type of tRNA molecule can be attached to only one type of amino acid, but because the genetic code contains multiple codons that specify the same amino acid, tRNA molecules bearing different anticodons may also carry the same amino acid.
Structure


The structure of tRNA can be decomposed into its primary structure, its secondary structure (usually visualized as the cloverleaf structure), and its tertiary structure (all tRNAs have a similar L-shaped 3D structure that allows them to fit into the P and A sites of the ribosome). The cloverleaf structure becomes the 3D L-shaped structure through coaxial stacking of the helices, which is a common RNA tertiary structure motif.
1.    The 5'-terminal phosphate group.
2.    The acceptor stem is a 7-base pair (bp) stem made by the base pairing of the 5'-terminal nucleotide with the 3'-terminal nucleotide (which contains the CCA 3'-terminal group used to attach the amino acid). The acceptor stem may contain non-Watson-Crick base pairs.
3.    The CCA tail is a cytosine-cytosine-adenine sequence at the 3' end of the tRNA molecule. This sequence is important for the recognition of tRNA by enzymes critical in translation. In some prokaryotic tRNAs, the CCA sequence is encoded in the gene and transcribed directly; in eukaryotic tRNAs (and many prokaryotic tRNAs) it is added during processing and therefore does not appear in the tRNA gene.
4.    The D arm is a 4 bp stem ending in a loop that often contains dihydrouridine.
5.    The anticodon arm is a 5-bp stem whose loop contains the anticodon.
6.    The T arm is a 5 bp stem containing the sequence TΨC where Ψ is a pseudouridine.
7.    Bases that have been modified, especially by methylation, occur in several positions outside the anticodon. The first anticodon base is sometimes modified to inosine (derived from adenine) or pseudouridine (derived from uracil).
Anticodon
An anticodon is a unit made up of three nucleotides that correspond to the three bases of the codon on the mRNA. Each tRNA contains specific anticodon triplet sequences that can base-pair to one or more codons for an amino acid. For example, the codon for lysine is AAA; the anticodon of a lysine tRNA might be UUU. Some anticodons can pair with more than one codon due to a phenomenon known as wobble base pairing. Frequently, the first nucleotide of the anticodon is one of two not found on mRNA, inosine or pseudouridine, which can hydrogen bond to more than one base in the corresponding codon position. In the genetic code, it is common for a single amino acid to be specified by all four third-position possibilities, or at least by both pyrimidines and both purines; for example, the amino acid glycine is coded for by the codon sequences GGU, GGC, GGA, and GGG.
To provide a one-to-one correspondence between tRNA molecules and codons that specify amino acids, 61 types of tRNA molecules would be required per cell. However, many cells contain fewer than 61 types of tRNAs because the wobble base is capable of binding to several, though not necessarily all, of the codons that specify a particular amino acid. A minimum of 31 tRNAs are required to translate, unambiguously, all 61 sense codons of the standard genetic code.[2]
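The codon-anticodon pairing described above can be sketched in a few lines of code. This is an illustrative example of our own (the function name is invented, and the simple reverse-complement rule deliberately ignores wobble pairing and modified bases such as inosine):

```python
# Minimal sketch: deriving the anticodon for an mRNA codon. Both are
# conventionally written 5'->3', so the anticodon is the reverse
# complement of the codon under Watson-Crick pairing.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon(codon):
    """Return the 5'->3' anticodon that base-pairs with a 5'->3' codon."""
    return "".join(PAIR[base] for base in reversed(codon))

print(anticodon("AAA"))  # UUU (lysine, as in the text's example)

# Glycine's four codons differ only at the third (wobble) position of the
# codon, which pairs with the first position of the anticodon:
for codon in ["GGU", "GGC", "GGA", "GGG"]:
    print(codon, "->", anticodon(codon))
```

Note how the four glycine anticodons differ only in their first base, which is exactly the position where wobble (or inosine substitution) allows one tRNA to read several codons.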
Aminoacylation
Aminoacylation is the process of adding an aminoacyl group to a compound. It produces tRNA molecules with their CCA 3' ends covalently linked to an amino acid.  Each tRNA is aminoacylated (or charged) with a specific amino acid by an aminoacyl tRNA synthetase. There is normally a single aminoacyl tRNA synthetase for each amino acid, despite the fact that there can be more than one tRNA, and more than one anticodon, for an amino acid. Recognition of the appropriate tRNA by the synthetases is not mediated solely by the anticodon, and the acceptor stem often plays a prominent role.
Reaction:
1.    amino acid + ATP → aminoacyl-AMP + PPi
2.    aminoacyl-AMP + tRNA → aminoacyl-tRNA + AMP
Net:  amino acid + ATP + tRNA → aminoacyl-tRNA + AMP + PPi
Some organisms lack one or more aminoacyl tRNA synthetases. This leads to charging of the tRNA with a chemically related amino acid; the mischarged amino acid is then converted to the correct one by modifying enzymes.
For example, Helicobacter pylori lacks glutaminyl tRNA synthetase. Instead, glutamyl tRNA synthetase mischarges tRNA-glutamine (tRNA-Gln) with glutamate. An amidotransferase then converts the acid side chain of the glutamate to the amide, forming the correctly charged Gln-tRNA-Gln.
Binding to ribosome
The ribosome has three binding sites for tRNA molecules: the A (aminoacyl), P (peptidyl), and E (exit) sites. During translation the A site binds an incoming aminoacyl-tRNA as directed by the codon currently occupying this site; this codon specifies the next amino acid to be added to the growing peptide chain. The A site only works after the first aminoacyl-tRNA has attached to the P site, which is actually the first site to bind an aminoacyl-tRNA. The P site is occupied by the peptidyl-tRNA, a tRNA carrying the chain of amino acids that has already been synthesized. The E site is occupied by the empty tRNA as it is about to exit the ribosome.
tRNA genes
Organisms vary in the number of tRNA genes in their genome. The nematode worm C. elegans, a commonly used model organism in genetics studies, has 29,647 genes in its nuclear genome, of which 620 code for tRNA. The budding yeast Saccharomyces cerevisiae has 275 tRNA genes in its genome. In the human genome, which according to current estimates has about 27,161 genes in total, there are about 4,421 non-coding RNA genes, which include tRNA genes. There are 22 mitochondrial tRNA genes, 497 nuclear genes encoding cytoplasmic tRNA molecules, and 324 tRNA-derived putative pseudogenes.
Cytoplasmic tRNA genes can be grouped into 49 families according to their anticodon features. These genes are found on all chromosomes except chromosome 22 and the Y chromosome. Strong clustering is observed on 6p (140 tRNA genes), as well as on chromosome 1.
***************
UNIT – IV
4.1 Paper Chromatography
Abstract
Chromatography is a method used to separate mixtures of compounds and to identify each compound in the mixture. You may have separated the different inks in a black marker by using a strip of paper and water. Chromatography is used by analytical chemists, organic chemists, and many other types of scientists since it is so easy and affordable. If you want to get a head start in chemistry, this is a great way to do so.
Introduction
Matter and Mixtures
Matter makes up everything in the universe. Our bodies, the stars, computers, and coffee mugs are all made of matter. There are three common states of matter: solid, liquid, and gas. A solid is something that is normally hard (your bones, the floor under your feet, etc.), but it can also be powdery, like sugar or flour. Solids are substances that are rigid and have definite shapes. Liquids flow and assume the shape of their container; they are also difficult to compress (a powder can take the same shape as its container, but it is a collection of solids that are very small). Examples of liquids are milk, orange juice, water, and vegetable oil. Gases are around you all the time, but you may not be able to see them. The air we breathe is made up of a mixture of gases. The steam from boiling water is water's gaseous form. Gases can occupy all the parts of a container (they expand to fill their containers), and they are easily compressed.
Matter is often a mixture of different substances. A heterogeneous mixture is when the mixture is made up of parts that are dissimilar (sand is a heterogeneous mixture). Homogeneous mixtures (also called solutions) are uniform in structure (milk is a homogeneous mixture). A sugar cube floating in water is a heterogeneous mixture, whereas sugar dissolved in water is a homogeneous mixture. You will determine whether the ink contained in a marker is a heterogeneous or homogeneous mixture, or just one compound.
In a mixture, the substance dissolved in another substance is called the solute. The substance doing the dissolving is called the solvent. If you dissolve sugar in water, the sugar is the solute and the water is the solvent.
Chromatography
For this topic, you will be making a small spot with an ink marker onto a strip of paper. The bottom of this strip will then be placed in a dish of water, and the water will soak up into the paper. The water (solvent) is the mobile phase of the chromatography system, whereas the paper is the stationary phase. These two phases are the basic principles of chromatography. Chromatography works by something called capillary action. The attraction of the water to the paper (adhesion force) is larger than the attraction of the water to itself (cohesion force); hence the water moves up the paper. The ink will also be attracted to the paper, to itself, and to the water differently, and thus a different component will move a different distance depending upon the strength of attraction to each of these objects.
To measure how far each component travels, we calculate the retention factor (Rf value) of the sample. The Rf value is the ratio between how far the component travels and the distance the solvent travels from a common starting point (the origin). If one of the sample components moves 2.5 cm up the paper and the solvent moves 5.0 cm, then the Rf value is 0.5. You can use Rf values to identify different components as long as the solvent, temperature, pH, and type of paper remain the same. In the image below, the light blue shading represents the solvent and the dark blue spot is the chemical sample.


When measuring the distance the sample traveled, you should measure from the origin (where the middle of the spot originally was) and then to the center of the spot in its new location.
To calculate the Rf value, we use the equation:
        Distance traveled by the sample component
Rf =       ------------------------------------------------------------------- 
        Distance traveled by the solvent

In our example, this would be:
        2.5 cm
Rf =       -----------   =  0.5
        5.0 cm
Note that an Rf value has no units because the units of distance cancel.
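The Rf calculation above is simple enough to express as a short program. The following is a sketch of our own (the function name and the bounds checks are illustrative, not part of the text):

```python
def retention_factor(sample_distance_cm, solvent_distance_cm):
    """Compute the Rf value: distance traveled by the sample component
    divided by distance traveled by the solvent, both measured from the
    origin. The units cancel, so Rf is dimensionless."""
    if solvent_distance_cm <= 0:
        raise ValueError("solvent front must have moved a positive distance")
    if sample_distance_cm > solvent_distance_cm:
        raise ValueError("a component cannot travel farther than the solvent front")
    return sample_distance_cm / solvent_distance_cm

# The worked example from the text: component moves 2.5 cm, solvent 5.0 cm.
print(retention_factor(2.5, 5.0))  # 0.5
```

Because Rf is a ratio of two distances, the same function works whether you measure in centimeters or millimeters, as long as both distances use the same unit.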
Polarity has a huge effect on how strongly a chemical is attracted to other substances. Some molecules have a positively charged side and a negatively charged side, similar to a magnet. The positive side is attracted to the negative side of another molecule (opposites attract), and vice versa. The larger the charge difference, the more polar a molecule is. The reason for the unequal charge is that electrons (which are negatively charged) are not shared equally by each atom (in water, the negative electrons are more attracted to the oxygen because of its atomic structure). Some molecules, like vegetable oil, are neutral and do not have a charge associated with them; they are called nonpolar molecules. Polarity affects many of a molecule's properties, such as its affinity to water. Water is a very polar molecule, so other polar molecules are attracted to it easily. A molecule is called hydrophilic if it dissolves well in water (hydrophilic essentially means "loves water"). A nonpolar molecule, such as oil, does not dissolve well in water, and thus it is hydrophobic ("fears water"). Oil would rather stick to itself than to water, which is why it forms a layer across water instead of mixing with it.
Soap can clean oils off of your body because soap has both polar and nonpolar properties. A soap molecule has a nonpolar, and thus hydrophobic, "tail" made up mostly of carbon and hydrogen atoms, but it also has a polar (hydrophilic) "head." The nonpolar body of the soap mixes easily with the nonpolar oils, but not with the water. The polar head is attracted to the water, so the soap/oil mixture is rinsed off. The negatively charged oxygen of the water is not attracted to any of the "tail's" hydrogen because the carbon and hydrogen share electrons almost equally, so there is not a major charge difference (the carbon-hydrogen group is neutral).
This technique provides an easy way to separate the components of a mixture.
•    A drop of mixture is placed in one corner of a square of absorbent paper.
•    One edge of the paper is immersed in a solvent. (a)
•    The solvent migrates up the sheet by capillary attraction.
•    As it does so, the substances in the drop are carried along at different rates. (b)
•    Each compound migrates at a rate that reflects
o    the size of its molecule and
o    its solubility in the solvent.
•    After a second run at right angles to the first (often using a different solvent), the various substances will be spread out at distinct spots across the sheet, forming a chromatogram. (c)
•    The identity of each spot can be determined by comparing its position with the position occupied by known substances under the same conditions.
•    In many cases, a fragment of the paper can be cut away from the sheet and chemical analysis run on the tiny amount of substance in it.


Techniques for paper chromatography
A small concentrated spot of solution that contains the sample of the solute is applied to a strip of chromatography paper about two centimeters above the bottom of the paper, usually using a capillary tube for maximum precision. This sample is absorbed onto the paper and may form interactions with it. Any substance that reacts or bonds with the paper cannot be measured using this technique. The paper is then dipped into a suitable solvent, such as ethanol or water, taking care that the spot is above the surface of the solvent, and placed in a sealed container.
The solvent moves up the paper by capillary action, which occurs as a result of the attraction of the solvent molecules to the paper; this can also be explained as differential adsorption of the solute components into the solvent. As the solvent rises through the paper it meets and dissolves the sample mixture, which will then travel up the paper with the solvent solute sample. Different compounds in the sample mixture travel at different rates due to competition between the paper fibers and solvent for the solutes. Since paper is composed of cellulose, a polar substance, polar substances have a high affinity for the paper. Paper chromatography takes anywhere from several minutes to several hours.
In some cases, paper chromatography does not separate pigments completely; this occurs when two substances appear to have the same values in a particular solvent. In these cases, two-way chromatography is used to separate the multiple-pigment spots.
Ascending chromatography
In this method, the solvent is in a pool at the bottom of the vessel, and the paper is supported so that its lower end dips into the solvent; the solvent then ascends the paper by capillary action. In a related hybrid technique (ascending-descending chromatography), the paper is folded over a glass rod so that its upper half becomes a descending chromatogram; this gives separations as quick as either individual technique.
Descending chromatography
In this method, the solvent is kept in a trough at the top of the chamber and is allowed to flow down the paper. The liquid moves down by capillary action as well as by gravity, so this method is also known as the gravitational method. The flow is more rapid than in the ascending method, and the separation is completed more quickly. The apparatus required is more sophisticated: the developing solvent is placed in a trough at the top, usually made of an inert material, and the paper is suspended so that its upper end dips into the solvent. Substances that cannot be separated by the ascending method can sometimes be separated by the descending method.
Uses: 
Chromatography is used in many different industries and labs. The police and other investigators use chromatography to identify clues at a crime scene like blood, ink, or drugs. More accurate chromatography in combination with expensive equipment is used to make sure a food company's processes are working correctly and they are creating the right product. This type of chromatography works the same way as regular chromatography, but a scanner system in conjunction with a computer can be used to identify the different chemicals and their amounts. Chemists use chromatography in labs to track the progress of a reaction. By looking at the sample spots on the chromatography plate, they can easily find out when the products start to form and when the reactants have been used up (i.e., when the reaction is complete). Chemists and biologists also use chromatography to identify the compounds present in a sample, such as plants.
**************
Experimental Separation of Mixture:
Materials and Equipment
•    Chromatography paper or laboratory filter paper is preferable, but you can use a paper towel. The problem with paper towels is that they may be too absorptive and smear the sample.
•    acetone (nail-polish remover)
•    water
•    ruler
•    pencils
•    a small wide-mouth jar for the solvent chamber
•    spinach leaves
•    iceberg lettuce leaves
•    marigold leaves
•    small pipette, capillary tube, or eyedropper
Experimental Procedure
Note: To make sure you can compare your results, as many of your materials as possible should remain constant. This means that the temperature, brand of nail-polish remover, size of paper strips, where the ink is placed onto the solid phase, etc., should remain the same throughout the experiment.
1.    Grind up roughly equal samples of each type of leaf and distribute them into test tubes. There should be at least three labeled test tubes for each type of plant.
2.    Add enough acetone (nail-polish remover) to suspend the ground-up leaves.
3.    Let the acetone/leaf mixture sit for 24 hours.
4.    Take a paper strip and use the ruler to draw a horizontal straight line 2 cm above the bottom (this is the origin).
5.    Label (in pencil) which sample each paper strip will contain.
6.    Fill the jar to a depth of 1 cm with the acetone (nail-polish remover).
7.    Take one of the capillary tubes (or pipette or eyedropper) and fill with one of the samples.
8.    Spot the sample in the middle of the origin (see illustration, below). You might want to practice a few times in order to get a nice round spot.


9.    Place the strip of paper into the solvent chamber. Place a pencil across the top of the glass and tape the chromatography paper to it if the paper is not firm enough to stand on its own (see illustration, below).


10.    Take out the paper strip when the solvent has almost reached the top.
11.    Mark how far the solvent soaked up the strip/plate with a pencil.
12.    Trace around the newly-moved spots so that if they fade, you can still use them to collect data.
13.    Calculate the Rf value for each spot.
14.    Repeat this experiment for each of the three samples.
15.    Repeat this experiment for each type of leaf sample.
***************
4.2    Principle and application of thin layer chromatography
Thin layer chromatography (TLC) is a chromatography technique used to separate mixtures. Thin layer chromatography is performed on a sheet of glass, plastic, or aluminum foil, which is coated with a thin layer of adsorbent material, usually silica gel, aluminium oxide, or cellulose (blotter paper). This layer of adsorbent is known as the stationary phase.
After the sample has been applied on the plate, a solvent or solvent mixture (known as the mobile phase) is drawn up the plate via capillary action. Because different analytes ascend the TLC plate at different rates, separation is achieved.
Thin layer chromatography can be used to:
    Monitor the progress of a reaction
    Identify compounds present in a given substance
    Determine the purity of a substance
Specific examples of these applications include:
    determination of the components a plant contains
    analyzing ceramides and fatty acids
    detection of pesticides or insecticides in food and water
    analyzing the dye composition of fibers in forensics, or
    assaying the radiochemical purity of radiopharmaceuticals
A number of enhancements can be made to the original method to automate the different steps, to increase the resolution achieved with TLC and to allow more accurate quantitation. This method is referred to as HPTLC, or "high performance TLC".
Plate preparation
TLC plates are usually commercially available, with standard particle size ranges to improve reproducibility. They are prepared by mixing the adsorbent, such as silica gel, with a small amount of inert binder like calcium sulfate (gypsum) and water. This mixture is spread as a thick slurry on an unreactive carrier sheet, usually glass, thick aluminum foil, or plastic. The resultant plate is dried and activated by heating in an oven for thirty minutes at 110°C. The thickness of the adsorbent layer is typically around 0.1 – 0.25 mm for analytical purposes and around 0.5 – 2.0 mm for preparative TLC.
Technique
The process is similar to paper chromatography with the advantage of faster runs, better separations, and the choice between different stationary phases. Because of its simplicity and speed TLC is often used for monitoring chemical reactions and for the qualitative analysis of reaction products.
To run a TLC, the following procedure is carried out:
•    A small spot of solution containing the sample is applied to a plate, about 1.5 centimeters from the bottom edge. The solvent is allowed to completely evaporate off, otherwise a very poor or no separation will be achieved. If a non-volatile solvent was used to apply the sample, the plate needs to be dried in a vacuum chamber.   
•    A small amount of an appropriate solvent (elutant) is poured into a glass beaker or any other suitable transparent container (separation chamber) to a depth of less than 1 centimeter. A strip of filter paper is put into the chamber so that its bottom touches the solvent, and the paper lies on the chamber wall and reaches almost to the top of the container. The container is closed with a cover glass or any other lid and is left for a few minutes to let the solvent vapors ascend the filter paper and saturate the air in the chamber. (Failure to saturate the chamber will result in poor separation and non-reproducible results.)
•    The TLC plate is then placed in the chamber so that the spot(s) of the sample DO NOT TOUCH the surface of the elutant in the chamber, and the lid is closed. The solvent moves up the plate by capillary action, meets the sample mixture, and carries it up the plate (elutes the sample). Before the solvent front reaches the top of the filter paper in the chamber, the plate should be removed (continuing the elution would give misleading results) and dried.



Different compounds in the sample mixture travel at different rates due to the differences in their attraction to the stationary phase, and because of differences in solubility in the solvent. By changing the solvent, or perhaps using a mixture, the separation of components (measured by the Rf value) can be adjusted. Also, the separation achieved with a TLC plate can be used to estimate the separation of a flash chromatography column. 


Separation of compounds is based on the competition of the solute and the mobile phase for binding places on the stationary phase. For instance, if normal-phase silica gel is used as the stationary phase, it can be considered polar. Given two compounds that differ in polarity, the more polar compound has a stronger interaction with the silica and is therefore better able to displace the mobile phase from the binding places; consequently, the less polar compound moves higher up the plate (resulting in a higher Rf value). If the mobile phase is changed to a more polar solvent or mixture of solvents, it becomes better at displacing solutes from the silica binding places, and all compounds on the TLC plate will move higher up the plate. It is commonly said that "strong" solvents (elutants) push the analyzed compounds up the plate, while "weak" elutants barely move them. The order of strength/weakness depends on the coating (stationary phase) of the TLC plate. For silica gel coated TLC plates, the elutant strength increases in the following order: perfluoroalkane (weakest), hexane, pentane, carbon tetrachloride, benzene/toluene, dichloromethane, diethyl ether, ethyl acetate, acetonitrile, acetone, 2-propanol/n-butanol, water, methanol, triethylamine, acetic acid, formic acid (strongest).

For C18-coated plates the order is reversed. In practice this means that if you use a mixture of ethyl acetate and heptane as the mobile phase, adding more ethyl acetate results in higher Rf values for all compounds on the TLC plate. Changing the polarity of the mobile phase will normally not reverse the running order of the compounds on the TLC plate. An eluotropic series can be used as a guide in selecting a mobile phase. If a reversed running order is desired, an apolar stationary phase should be used, such as C18-functionalized silica.
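As a rough illustration (the list and function names are our own), the eluotropic series above can be encoded as an ordered list and used to compare solvent strengths, with the list simply reversed for a C18 plate:

```python
# The eluotropic series for silica-coated plates from the text,
# ordered weakest -> strongest eluent.
ELUOTROPIC_SILICA = [
    "perfluoroalkane", "hexane", "pentane", "carbon tetrachloride",
    "benzene/toluene", "dichloromethane", "diethyl ether", "ethyl acetate",
    "acetonitrile", "acetone", "2-propanol/n-butanol", "water",
    "methanol", "triethylamine", "acetic acid", "formic acid",
]

def stronger_eluent(a, b, series=ELUOTROPIC_SILICA):
    """Return whichever of two solvents is the stronger eluent.
    For a reversed-phase (C18) plate, pass the series reversed."""
    return a if series.index(a) > series.index(b) else b

print(stronger_eluent("ethyl acetate", "hexane"))  # ethyl acetate
# On C18 the order reverses, so hexane becomes the stronger eluent:
print(stronger_eluent("ethyl acetate", "hexane",
                      series=list(reversed(ELUOTROPIC_SILICA))))  # hexane
```

This only captures the ordering stated in the text; real eluent choice also depends on mixtures, the analytes, and the plate, as the surrounding paragraphs describe.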

Preparative TLC 
TLC can also be used on a small semi-preparative scale to separate mixtures of up to a few hundred milligrams. The mixture is not "spotted" on the TLC plate as dots, but rather is applied to the plate as a thin, even horizontal band just above the solvent level. When developed with solvent, the compounds separate in horizontal bands rather than as separated spots. Each band (or a desired band) is scraped off the backing material, which is then extracted with a suitable solvent (e.g. DCM) and filtered to give the isolated material upon removal of the solvent. For small-scale reactions with easily separated products, preparative TLC can be far more efficient in terms of time and cost than column chromatography. Obviously, the whole plate cannot be chemically developed or the product will be chemically destroyed; thus this technique is best used with compounds that are coloured or visible under UV light. Alternatively, a small section of the plate can be chemically developed, e.g. by cutting a section out and developing it, or by masking most of the plate and exposing a small section to a chemical developer like iodine.

Analysis 
As the chemicals being separated may be colorless, several methods exist to visualize the spots:
•    Often a small amount of a fluorescent compound, usually manganese-activated zinc silicate, is added to the adsorbent, which allows visualization of spots under a blacklight (UV254). The adsorbent layer will fluoresce light green by itself, but spots of analyte quench this fluorescence.
•    Iodine vapors are a general, unspecific color reagent.
•    Specific color reagents exist into which the TLC plate is dipped or which are sprayed onto the plate.
•    In the case of lipids, the chromatogram may be transferred to a PVDF membrane and then subjected to further analysis, for example mass spectrometry, a technique known as Far-Eastern blotting.
Once visible, the Rf value, or retention factor, of each spot can be determined by dividing the distance traveled by the product by the total distance traveled by the solvent (the solvent front). These values depend on the solvent used and the type of TLC plate, and are not physical constants.
Applications 
In organic chemistry, reactions are qualitatively monitored with TLC. Spots sampled with a capillary tube are placed on the plate: a spot of starting material, a spot from the reaction mixture, and a "co-spot" with both. A small (3 by 7 cm) TLC plate takes a couple of minutes to run. The analysis is qualitative, and it will show whether the starting material has disappeared (i.e., the reaction is complete), whether any product has appeared, and how many products are generated (although this might be under-estimated due to co-elution). Unfortunately, TLCs from low-temperature reactions may give misleading results, because the sample is warmed to room temperature in the capillary, which can alter the reaction; the warmed sample analyzed by TLC is not the same as what is in the low-temperature flask. One such reaction is the DIBAL-H reduction of an ester to an aldehyde.

In one study, TLC has been applied in the screening of organic reactions, for example in the fine-tuning of BINAP synthesis from 2-naphthol. In this method the alcohol and catalyst solution (for instance iron(III) chloride) are placed separately on the base line, reacted, and then instantly analyzed.
*************** 
4.3    Principle and application column chromatography
Column chromatography in chemistry is a method used to purify individual chemical compounds from mixtures of compounds. It is often used for preparative applications on scales from micrograms up to kilograms. 
The classical preparative chromatography column is a glass tube with a diameter from 5 mm to 50 mm and a height of 50 cm to 1 m with a tap at the bottom. Two methods are generally used to prepare a column; the dry method, and the wet method. For the dry method, the column is first filled with dry stationary phase powder, followed by the addition of mobile phase, which is flushed through the column until it is completely wet, and from this point is never allowed to run dry. For the wet method, a  slurry is prepared of the eluent with the stationary phase powder and then carefully poured into the column. Care must be taken to avoid air bubbles. A solution of the organic material is pipetted on top of the stationary phase. This layer is usually topped with a small layer of sand or with cotton or glass wool to protect the shape of the organic layer from the velocity of newly added eluent. Eluent is slowly passed through the column to advance the organic material. Often a spherical eluent reservoir or an eluent-filled and stoppered separating funnel is put on top of the column.
The individual components are retained by the stationary phase differently and separate from each other while they are running at different speeds through the column with the eluent. At the end of the column they elute one at a time. During the entire chromatography process the eluent is collected in a series of fractions. The composition of the eluent flow can be monitored and each fraction is analyzed for dissolved compounds, e.g. by analytical chromatography, UV absorption, or fluorescence. Colored compounds (or fluorescent compounds with the aid of an UV lamp) can be seen through the glass wall as moving bands.


Stationary phase
The stationary phase or adsorbent in column chromatography is a solid. The most common stationary phase for column chromatography is silica gel, followed by alumina. Cellulose powder has often been used in the past. Also possible are ion exchange chromatography, reversed-phase chromatography (RP), affinity chromatography or expanded bed adsorption (EBA). The stationary phases are usually finely ground powders or gels and/or are microporous for an increased surface; though in EBA a fluidized bed is used. 
Mobile phase
The mobile phase or eluent is either a pure solvent or a mixture of different solvents. It is chosen so that the retention factor value of the compound of interest is roughly 0.2 – 0.3 in order to minimize the time and the amount of eluent needed to run the chromatography. The eluent is also chosen so that the different compounds can be separated effectively. The eluent is optimized in small-scale pretests, often using thin layer chromatography (TLC) with the same stationary phase.
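The retention-factor guideline above can be expressed as a short calculation. This is an illustrative sketch (the function names are not from any standard library): the Rf value is measured on a TLC plate and then checked against the recommended window.

```python
def retention_factor(spot_distance_cm, solvent_front_cm):
    """Rf = distance travelled by the compound / distance travelled by the solvent front."""
    return spot_distance_cm / solvent_front_cm

def eluent_suitable(rf, low=0.2, high=0.3):
    """An eluent is a good candidate for column work when the TLC Rf
    of the target compound falls in the [low, high] window."""
    return low <= rf <= high

rf = retention_factor(1.25, 5.0)   # spot at 1.25 cm, solvent front at 5.0 cm -> Rf = 0.25
print(rf, eluent_suitable(rf))
```

A compound with Rf = 0.25 in the test eluent would thus be expected to elute in a reasonable volume; an Rf of 0.5 or higher suggests a weaker eluent should be tried.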
A faster flow rate of the eluent minimizes the time required to run a column and thereby minimizes diffusion, resulting in a better separation. A simple laboratory column runs by gravity flow. The flow rate of such a column can be increased by extending the fresh eluent filled column above the top of the stationary phase or decreased by the tap controls. Better flow rates can be achieved by using a pump or by using compressed gas (e.g. air, nitrogen, or argon) to push the solvent through the column (flash column chromatography).  
The particle size of the stationary phase is generally finer in flash column chromatography than in gravity column chromatography. For example, one of the most widely used silica gel grades in the former technique is mesh 230 – 400 (40 – 63 µm), while the latter technique typically requires mesh 70 – 230 (63 – 200 µm) silica gel.  
A spreadsheet that assists in the successful development of flash columns has been developed. The spreadsheet estimates the retention volume and band volume of analytes, the fraction numbers expected to contain each analyte, and the resolution between adjacent peaks. This information allows users to select optimal parameters for preparative-scale separations before the flash column itself is attempted.           
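The text does not give the spreadsheet's formulas, but a rough estimate of the kind it describes can be sketched using the standard TLC-to-column relation (an assumption here): capacity factor k = (1 − Rf)/Rf and retention volume Vr = Vm(1 + k), where Vm is the column void volume.

```python
import math

def retention_volume_ml(rf, void_volume_ml):
    """Estimate retention volume from a TLC Rf measured in the same
    eluent/stationary phase, via k = (1 - Rf)/Rf and Vr = Vm * (1 + k)."""
    k = (1.0 - rf) / rf
    return void_volume_ml * (1.0 + k)

def first_fraction(retention_volume, fraction_size_ml):
    """1-based index of the fraction in which the analyte is expected to appear."""
    return math.ceil(retention_volume / fraction_size_ml)

vr = retention_volume_ml(0.25, 50.0)      # Rf 0.25, 50 mL void volume -> Vr = 200 mL
print(vr, first_fraction(vr, 15.0))       # analyte expected around fraction 14
```

Estimates of this kind let the operator choose fraction sizes and total eluent volume before running the column, which is the point of the spreadsheet mentioned above.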

Application
Column chromatography is advantageous over most other chromatographic techniques because it can be used in both analytical and preparative applications. Not only can column chromatography be used to determine the number of components of a mixture, but it can also be used to separate and purify substantial quantities of those components for subsequent analysis. This is in contrast to paper chromatography, which is solely an analytical method.
For example, while paper chromatography is easily applied to see whether a purple coloured beverage contains a mixture of dyes, it is not practical to further analyze the separated dyes given the necessarily very small size of the initial sample. A preparative method like column chromatography allows you to do just that. Separating the purple food dye on an appropriately set up column with good technique will leave you with cleanly separated blue and red dyes in large enough amounts for further investigation. Thus, column chromatography should be used any time you want to separate a mixture of liquids or solutes into its components, and work with these components individually. In fact, it is the most frequently used method of purifying mixtures of products in research laboratories.
******************** 
4.4 Principle and application of Gas-liquid chromatography
Gas chromatography (GC) is a common type of chromatography used in analytical chemistry for separating and analyzing compounds that can be vaporized without decomposition. Typical uses of GC include testing the purity of a particular substance, or separating the different components of a mixture (the relative amounts of such components can also be determined). In some situations, GC may help in identifying a compound. In preparative chromatography, GC can be used to prepare pure compounds from a mixture.

In gas chromatography, the moving phase (or "mobile phase") is a carrier gas, usually an inert  gas such as helium or an unreactive gas such as nitrogen. The stationary phase is a microscopic layer of liquid or polymer on an inert solid support, inside a piece of glass or metal tubing called a column (a homage to the fractionating column used in distillation). The instrument used to perform gas chromatography is called a gas chromatograph (or "aerograph", "gas separator"). 
The gaseous compounds being analyzed interact with the walls of the column, which is coated with different stationary phases. This causes each compound to elute at a different time, known as the retention time of the compound. The comparison of retention times is what gives GC its analytical usefulness. 
Gas chromatography is in principle similar to column chromatography (as well as other forms of chromatography, such as HPLC, TLC), but has several notable differences. Firstly, the process of separating the compounds in a mixture is carried out between a liquid stationary phase and a gas moving phase, whereas in column chromatography the stationary phase is a solid and the moving phase is a liquid. (Hence the full name of the procedure is "Gas-liquid chromatography", referring to the mobile and stationary phases, respectively.) Secondly, the column through which the gas phase passes is located in an oven where the temperature of the gas can be controlled, whereas column chromatography (typically) has no such temperature control. Thirdly, the concentration of a compound in the gas phase is solely a function of the vapor pressure of the gas.

Gas chromatography is also similar to fractional distillation, since both processes separate the components of a mixture primarily based on boiling point (or vapor pressure) differences. However, fractional distillation is typically used to separate components of a mixture on a large scale, whereas GC can be used on a much smaller scale (i.e. microscale).

History 
Chromatography dates to 1903 in the work of the Russian scientist Mikhail Semenovich Tswett. German graduate student Fritz Prior developed solid state gas chromatography in 1947. Archer John Porter Martin, who was awarded the Nobel Prize for his work in developing liquid-liquid (1941) and paper (1944) chromatography, laid the foundation for the development of gas chromatography; he later produced gas-liquid chromatography (1950). Erika Cremer laid the groundwork and oversaw much of Prior's work.
GC analysis
 A gas chromatograph is a chemical analysis instrument for separating chemicals in a complex sample. A gas chromatograph uses a flow-through narrow tube known as the column, through which different chemical constituents of a sample pass in a gas stream (carrier gas, mobile phase) at different rates depending on their various chemical and physical properties and their interaction with a specific column filling, called the stationary phase. As the chemicals exit the end of the column, they are detected and identified electronically. The function of the stationary phase in the column is to separate different components, causing each one to exit the column at a different time (retention time). Other parameters that can be used to alter the order or time of retention are the carrier gas flow rate, and the temperature.  
In a GC analysis, a known volume of gaseous or liquid analyte is injected into the "entrance" (head) of the column, usually using a micro syringe (or, solid phase micro extraction fibers, or a gas source switching system). As the carrier gas sweeps the analyte molecules through the column, this motion is inhibited by the adsorption of the analyte molecules either onto the column walls or onto packing materials in the column. The rate at which the molecules progress along the column depends on the strength of adsorption, which in turn depends on the type of molecule and on the stationary phase materials. Since each type of molecule has a different rate of progression, the various components of the analyte mixture are separated as they progress along the column and reach the end of the column at different times (retention time). A detector is used to monitor the outlet stream from the column; thus, the time at which each component reaches the outlet and the amount of that component can be determined. Generally, substances are identified (qualitatively) by the order in which they emerge (elute) from the column and by the retention time of the analyte in the column.
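Qualitative identification by retention time, as described above, amounts to comparing each observed peak against reference times recorded for the same method. A minimal sketch (the library values and tolerance are illustrative, not real method data):

```python
# Hypothetical retention-time library (minutes) for one fixed GC method
LIBRARY = {"hexane": 2.10, "benzene": 3.45, "toluene": 4.80}

def identify(rt_min, tolerance=0.05):
    """Return names of library compounds whose reference retention time
    matches the observed peak within the given tolerance (minutes)."""
    return [name for name, ref in LIBRARY.items() if abs(ref - rt_min) <= tolerance]

print(identify(3.43))   # matches benzene within 0.05 min
print(identify(9.99))   # no match -> unknown under this method
```

This only works if the method conditions (column, temperature program, flow) are held constant between the reference runs and the analysis, which is why retention-time identification is method-specific.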
Physical components


Auto samplers
The auto sampler provides the means to introduce a sample automatically into the inlets. Manual insertion of the sample is possible but is no longer common. Automatic insertion provides better reproducibility and time-optimization.
Different kinds of auto samplers exist. Auto samplers can be classified in relation to sample capacity (auto-injectors vs. auto samplers, where auto-injectors can handle only a small number of samples), to robotic technologies (XYZ robot vs. rotating robot – the most common), or to analysis:
    Liquid
    Static head-space by syringe technology
    Dynamic head-space by transfer-line technology
    Solid phase micro extraction (SPME)
Several manufacturers offer a complete range of auto samplers. Historically, the countries most active in auto sampler technology development are the United States, Italy and Switzerland.
Inlets
The column inlet (or injector) provides the means to introduce a sample into a continuous flow of carrier gas. The inlet is a piece of hardware attached to the column head.
Common inlet types are:
    S/SL (Split/Splitless) injector; a sample is introduced into a heated small chamber via a syringe through a septum - the heat facilitates volatilization of the sample and sample matrix. The carrier gas then either sweeps the entirety (splitless mode) or a portion (split mode) of the sample into the column. In split mode, a part of the sample/carrier gas mixture in the injection chamber is exhausted through the split vent. Split injection is preferred when working with samples with high analyte concentrations (>0.1%), whereas splitless injection is best suited for trace analysis with low amounts of analytes (<0.01%).
    On-column inlet; the sample is here introduced in its entirety without heat.
    PTV injector; Temperature-programmed sample introduction was first described by Vogt in 1979. Originally Vogt developed the technique as a method for the introduction of large sample volumes (up to 250 µL) in capillary GC. Vogt introduced the sample into the liner at a controlled injection rate. The temperature of the liner was chosen slightly below the boiling point of the solvent. The low-boiling solvent was continuously evaporated and vented through the split line. Based on this technique, Poy developed the Programmed Temperature Vaporizing injector; PTV. By introducing the sample at a low initial liner temperature many of the disadvantages of the classic hot injection techniques could be circumvented.
    Gas source inlet or gas switching valve; gaseous samples in collection bottles are connected to what is most commonly a six-port switching valve. The carrier gas flow is not interrupted while a sample can be expanded into a previously evacuated sample loop. Upon switching, the contents of the sample loop are inserted into the carrier gas stream.
    P/T (Purge-and-Trap) system; An inert gas is bubbled through an aqueous sample causing insoluble volatile chemicals to be purged from the matrix. The volatiles are 'trapped' on an absorbent column (known as a trap or concentrator) at ambient temperature. The trap is then heated and the volatiles are directed into the carrier gas stream. Samples requiring preconcentration or purification can be introduced via such a system, usually hooked up to the S/SL port.
    SPME (solid phase micro extraction) offers a convenient, low-cost alternative to P/T systems with the versatility of a syringe and simple use of the S/SL port.
Columns
Two types of columns are used in GC:
    Packed columns are 1.5 – 10 m in length and have an internal diameter of 2 – 4 mm. The tubing is usually made of stainless steel or glass and contains a packing of finely divided, inert, solid support material (e.g. diatomaceous earth) that is coated with a liquid or solid stationary phase. The nature of the coating material determines what type of materials will be most strongly adsorbed. Thus numerous columns are available that are designed to separate specific types of compounds.
    Capillary columns have a very small internal diameter, on the order of a few tenths of millimeters, and lengths between 25–60 meters are common. The inner column walls are coated with the active materials (WCOT columns), some columns are quasi solid filled with many parallel micropores (PLOT columns). Most capillary columns are made of fused-silica (FSOT columns) with a polyimide outer coating. These columns are flexible, so a very long column can be wound into a small coil.
    New developments are sought where stationary phase incompatibilities lead to geometric solutions of parallel columns within one column. Among these new developments are:
    Internally heated microFAST columns, where two columns, an internal heating wire and a temperature sensor are combined within a common column sheath (microFAST);
    Micropacked columns (1/16" OD) are column-in-column packed columns where the outer column space has a packing different from the inner column space, thus providing the separation behavior of two columns in one. They can easily fit to inlets and detectors of a capillary column instrument.
The temperature-dependence of molecular adsorption and of the rate of progression along the column necessitates a careful control of the column temperature to within a few tenths of a degree for precise work. Reducing the temperature produces the greatest level of separation, but can result in very long elution times. For some cases temperature is ramped either continuously or in steps to provide the desired separation. This is referred to as a temperature program. Electronic pressure control can also be used to modify flow rate during the analysis, aiding in faster run times while keeping acceptable levels of separation.
The choice of carrier gas (mobile phase) is important, with hydrogen being the most efficient and providing the best separation. However, helium has a larger range of flow rates that are comparable to hydrogen in efficiency, with the added advantage that helium is non-flammable, and works with a greater number of detectors. Therefore, helium is the most common carrier gas used.
Detectors
A number of detectors are used in gas chromatography. The most common are the flame ionization detector (FID) and the thermal conductivity detector (TCD). Both are sensitive to a wide range of components, and both work over a wide range of concentrations. While TCDs are essentially universal and can be used to detect any component other than the carrier gas (as long as their thermal conductivities are different from that of the carrier gas, at detector temperature), FIDs are sensitive primarily to hydrocarbons, and are more sensitive to them than TCD. However, an FID cannot detect water. Both detectors are also quite robust. Since TCD is non-destructive, it can be operated in-series before an FID (destructive), thus providing complementary detection of the same analytes.
Other detectors are sensitive only to specific types of substances, or work well only in narrower ranges of concentrations. They include:
    discharge ionization detector (DID), which uses a high-voltage electric discharge  to produce ions.
    electron capture detector (ECD), which uses a radioactive Beta particle (electron) source to measure the degree of electron capture.
    flame photometric detector (FPD)
    flame ionization detector (FID)
    Hall electrolytic conductivity detector (ElCD)
    helium ionization detector (HID)
    Nitrogen Phosphorus Detector (NPD)
    Infrared Detector (IRD)
    mass selective detector (MSD)
    photo-ionization detector (PID)
    pulsed discharge ionization detector (PDD)
    thermal energy (conductivity) analyzer/detector (TEA/TCD)

Some gas chromatographs are connected to a mass spectrometer which acts as the detector. The combination is known as GC-MS. Some GC-MS are connected to an NMR spectrometer which acts as a backup detector. This combination is known as GC-MS-NMR. Some GC-MS-NMR are connected to an infrared spectrophotometer which acts as a backup detector. This combination is known as GC-MS-NMR-IR. It must, however, be stressed this is very rare as most analyses needed can be concluded via purely GC-MS.

Methods
The method is the collection of conditions in which the GC operates for a given analysis. Method development is the process of determining what conditions are adequate and/or ideal for the analysis required.
Conditions which can be varied to accommodate a required analysis include inlet temperature, detector temperature, column temperature and temperature program, carrier gas and carrier gas flow rates, the column's stationary phase, diameter and length, inlet type and flow rates, sample size and injection technique. Depending on the detector(s) (see below) installed on the GC, there may be a number of detector conditions that can also be varied. Some GCs also include valves which can change the route of sample and carrier flow. The timing of the opening and closing of these valves can be important to method development.

As an example, the Geo Strata Technologies Eclipse gas chromatograph runs continuously in three-minute cycles. Two valves are used to switch the test gas into the sample loop. After filling the sample loop with test gas, the valves are switched again, applying carrier gas pressure to the sample loop and forcing the sample through the column for separation.
Carrier gas selection and flow rates
Typical carrier gases include helium, nitrogen, argon, hydrogen and air. Which gas to use is usually determined by the detector being used, for example, a DID requires helium as the carrier gas. When analyzing gas samples, however, the carrier is sometimes selected based on the sample's matrix, for example, when analyzing a mixture in argon, an argon carrier is preferred, because the argon in the sample does not show up on the chromatogram. Safety and availability can also influence carrier selection, for example, hydrogen is flammable, and high-purity helium can be difficult to obtain in some areas of the world. (See: Helium--occurrence and production.)
The purity of the carrier gas is also frequently determined by the detector, though the level of sensitivity needed can also play a significant role. Typically, purities of 99.995% or higher are used. Trade names for typical purities include "Zero Grade," "Ultra-High Purity (UHP) Grade," "4.5 Grade" and "5.0 Grade."
The carrier gas flow rate affects the analysis in the same way that temperature does (see above). The higher the flow rate, the faster the analysis, but the lower the separation between analytes. Selecting the flow rate is therefore the same compromise between the level of separation and length of analysis as selecting the column temperature.
With GCs made before the 1990s, carrier flow rate was controlled indirectly by controlling the carrier inlet pressure, or "column head pressure." The actual flow rate was measured at the outlet of the column or the detector with an electronic flow meter, or a bubble flow meter, and could be an involved, time consuming, and frustrating process. The pressure setting was not able to be varied during the run, and thus the flow was essentially constant during the analysis. The relation between flow rate and inlet pressure is calculated with Poiseuille's equation for compressible fluids.
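The flow/pressure relation mentioned above can be sketched numerically. The standard Hagen-Poiseuille form for laminar, compressible flow through an open tube is assumed here (outlet-referenced volumetric flow); the column dimensions and gas viscosity in the example are illustrative.

```python
import math

def column_flow(r_m, length_m, viscosity_pa_s, p_inlet_pa, p_outlet_pa):
    """Volumetric flow (m^3/s, referenced to outlet pressure) for laminar,
    compressible gas flow through an open tube:
    F = pi * r^4 * (p_in^2 - p_out^2) / (16 * eta * L * p_out)."""
    return (math.pi * r_m**4 * (p_inlet_pa**2 - p_outlet_pa**2)) / (
        16.0 * viscosity_pa_s * length_m * p_outlet_pa)

# 0.25 mm i.d. capillary (r = 1.25e-4 m), 30 m long, He viscosity ~2e-5 Pa.s,
# inlet at 2 atm absolute, outlet at 1 atm
f = column_flow(1.25e-4, 30.0, 2.0e-5, 202650.0, 101325.0)
print(f * 6e7, "mL/min (approx.)")   # on the order of 1-2 mL/min
```

This is why raising the head pressure was the old GCs' only handle on flow: with fixed column geometry and temperature, the outlet flow follows directly from the inlet pressure.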
Many modern GCs, however, electronically measure the flow rate, and electronically control the carrier gas pressure to set the flow rate. Consequently, carrier pressures and flow rates can be adjusted during the run, creating pressure/flow programs similar to temperature programs.
Stationary compound selection
The polarity of the solute is crucial for the choice of stationary compound, which in an optimal case would have a polarity similar to that of the solute. Common stationary phases in open tubular columns are cyanopropylphenyl dimethyl polysiloxane, carbowax polyethyleneglycol, biscyanopropyl cyanopropylphenyl polysiloxane and diphenyl dimethyl polysiloxane. For packed columns more options are available.
Inlet types and flow rates
The choice of inlet type and injection technique depends on if the sample is in liquid, gas, adsorbed, or solid form, and on whether a solvent matrix is present that has to be vaporized. Dissolved samples can be introduced directly onto the column via a COC injector, if the conditions are well known; if a solvent matrix has to be vaporized and partially removed, a S/SL injector is used (most common injection technique); gaseous samples (e.g., air cylinders) are usually injected using a gas switching valve system; adsorbed samples (e.g., on adsorbent tubes) are introduced using either an external (on-line or off-line) desorption apparatus such as a purge-and-trap system, or are desorbed in the S/SL injector (SPME applications).
Sample size and injection technique
Sample injection





The real chromatographic analysis starts with the introduction of the sample onto the column. The development of capillary gas chromatography resulted in many practical problems with the injection technique. The technique of on-column injection, often used with packed columns, is usually not possible with capillary columns. The injection system, in the capillary gas chromatograph, should fulfil the following two requirements:
1.    The amount injected should not overload the column.
2.    The width of the injected plug should be small compared to the spreading due to the chromatographic process. Failure to comply with this requirement will reduce the separation capability of the column. As a general rule, the volume injected, Vinj, and the volume of the detector cell, Vdet, should be about 1/10 of the volume occupied by the portion of sample containing the molecules of interest (analytes) when they exit the column.
Some general requirements, which a good injection technique should fulfill, are:
    It should be possible to obtain the column’s optimum separation efficiency.
    It should allow accurate and reproducible injections of small amounts of representative samples.
    It should induce no change in sample composition. It should not exhibit discrimination based on differences in boiling point, polarity, concentration or thermal/catalytic stability.
    It should be applicable for trace analysis as well as for undiluted samples.


The column(s) in a GC are contained in an oven, the temperature of which is precisely controlled electronically. (When discussing the "temperature of the column," an analyst is technically referring to the temperature of the column oven. The distinction, however, is not important and will not subsequently be made in this article.)
The rate at which a sample passes through the column is directly proportional to the temperature of the column. The higher the column temperature, the faster the sample moves through the column. However, the faster a sample moves through the column, the less it interacts with the stationary phase, and the less the analytes are separated.
In general, the column temperature is selected to compromise between the length of the analysis and the level of separation.
A method which holds the column at the same temperature for the entire analysis is called "isothermal." Most methods, however, increase the column temperature during the analysis; the initial temperature, the rate of temperature increase (the temperature "ramp") and the final temperature together are called the "temperature program."
A temperature program allows analytes that elute early in the analysis to separate adequately, while shortening the time it takes for late-eluting analytes to pass through the column.
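A temperature program can be represented as a list of (ramp rate, target, hold) stages. The following sketch computes the programmed oven temperature at any time; the stage values used are illustrative, not from any particular method.

```python
def oven_temperature(t_min, start_c, stages):
    """stages: list of (ramp_c_per_min, target_c, hold_min) tuples.
    Returns the programmed oven temperature (deg C) at time t_min."""
    temp, t = start_c, 0.0
    for ramp, target, hold in stages:
        ramp_time = (target - temp) / ramp
        if t_min < t + ramp_time:                 # still ramping toward target
            return temp + ramp * (t_min - t)
        t, temp = t + ramp_time, target
        if t_min < t + hold:                      # holding at target
            return temp
        t += hold
    return temp                                   # program finished; hold final temp

# Start at 40 C, ramp 10 C/min to 200 C, then hold 5 min
prog = [(10.0, 200.0, 5.0)]
print(oven_temperature(0.0, 40.0, prog))   # 40.0 (initial temperature)
print(oven_temperature(8.0, 40.0, prog))   # 120.0 (mid-ramp)
print(oven_temperature(18.0, 40.0, prog))  # 200.0 (final hold)
```

Early-eluting analytes see the cool initial stage and separate well, while the ramp sweeps the late eluters out quickly, which is exactly the trade-off described above.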
Data reduction and analysis
Qualitative analysis:
Generally chromatographic data is presented as a graph of detector response (y-axis) against retention time (x-axis), which is called a chromatogram. This provides a spectrum of peaks for a sample representing the analytes present in a sample eluting from the column at different times. Retention time can be used to identify analytes if the method conditions are constant. Also, the pattern of peaks will be constant for a sample under constant conditions and can identify complex mixtures of analytes. In most modern applications however the GC is connected to a mass spectrometer or similar detector that is capable of identifying the analytes represented by the peaks.
Quantitative analysis:
The area under a peak is proportional to the amount of analyte present in the chromatogram. By calculating the area of the peak using the mathematical function of integration, the concentration of an analyte in the original sample can be determined. Concentration can be calculated using a calibration curve created by finding the response for a series of concentrations of analyte, or by determining the relative response factor of an analyte. The relative response factor is the expected ratio of an analyte to an internal standard (or external standard) and is calculated by finding the response of a known amount of analyte and a constant amount of internal standard (a chemical added to the sample at a constant concentration, with a distinct retention time to the analyte).
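The integration and calibration steps described above can be sketched in a few lines. This is a minimal illustration (trapezoidal integration of one peak and inversion of a linear calibration curve), not the algorithm of any particular data system.

```python
def peak_area(times, signal):
    """Trapezoidal integration of detector response over time (one peak)."""
    return sum((signal[i] + signal[i + 1]) * (times[i + 1] - times[i]) / 2.0
               for i in range(len(times) - 1))

def concentration_from_curve(area, slope, intercept=0.0):
    """Invert a linear calibration curve: area = slope * concentration + intercept."""
    return (area - intercept) / slope

# Idealized triangular peak: baseline 0 at t=0 and t=2 min, apex 10 at t=1 min
a = peak_area([0.0, 1.0, 2.0], [0.0, 10.0, 0.0])
print(a, concentration_from_curve(a, slope=2.0))   # area 10.0 -> concentration 5.0
```

Real software additionally handles baseline correction and overlapping peaks, but the principle is the same: peak area, scaled through a calibration, gives amount of analyte.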
In most modern GC-MS systems, computer software is used to draw and integrate peaks, and match MS spectra to library spectra.
Application
In general, substances that vaporize below ca. 300 °C (and therefore are stable up to that temperature) can be measured quantitatively. The samples are also required to be salt-free; they should not contain ions. Very minute amounts of a substance can be measured, but it is often required that the sample must be measured in comparison to a sample containing the pure, suspected substance.
Various temperature programs can be used to make the readings more meaningful; for example to differentiate between substances that behave similarly during the GC process.
Professionals working with GC analyze the content of a chemical product, for example in assuring the quality of products in the chemical industry; or measuring toxic substances in soil, air or water. GC is very accurate if used properly and can measure picomoles of a substance in a 1 ml liquid sample, or parts-per-billion concentrations in gaseous samples.
In practical courses at colleges, students sometimes get acquainted with the GC by studying the contents of lavender oil or measuring the ethylene that is secreted by Nicotiana benthamiana plants after artificially injuring their leaves. GC is also used to analyze hydrocarbons (C2 – C40+). In a typical experiment, a packed column is used to separate the light gases, which are then detected with a TCD. The hydrocarbons are separated using a capillary column and detected with an FID. A complication with light gas analyses that include H2 is that He, the most common and most sensitive inert carrier (sensitivity is proportional to molecular mass), has an almost identical thermal conductivity to hydrogen (it is the difference in thermal conductivity between two separate filaments in a Wheatstone bridge arrangement that shows when a component has been eluted). For this reason, dual TCD instruments with a separate channel for hydrogen that uses nitrogen as a carrier are common. Argon is often used when analyzing gas-phase chemistry reactions such as Fischer-Tropsch synthesis, so that a single carrier gas can be used rather than two separate ones. The sensitivity is lower, but this is a tradeoff for simplicity in the gas supply.
*************
4.5 Centrifuge

A centrifuge is a piece of equipment, generally driven by an electric motor (some older models were spun by hand), that puts an object in rotation around a fixed axis, applying a force perpendicular to the axis. The centrifuge works using the sedimentation principle, where the centripetal acceleration causes more dense substances to separate out along the radial direction (towards the bottom of the tube). By the same token, lighter objects will tend to move to the top of the tube (in the rotating frame, towards the centre).
The rotating unit, called the rotor, has fixed holes drilled at an angle (to the vertical). Test tubes are placed in these slots and the rotor is spun. As the centrifugal force is in the horizontal plane and the tubes are fixed at an angle, the particles have to travel only a little distance before they hit the wall and drop down to the bottom. These angle rotors are very popular in the lab for routine use.
Schematic diagram


Theory
Protocols for centrifugation typically specify the amount of acceleration to be applied to the sample, rather than specifying a rotational speed such as revolutions per minute. The acceleration is often quoted in multiples of g, the standard acceleration due to gravity at the Earth's surface. This distinction is important because two rotors with different diameters running at the same rotational speed will subject samples to different accelerations.
Since the motion is circular, the acceleration can be calculated as the product of the radius and the square of the angular velocity. Traditionally named "relative centrifugal force" (RCF), it is the measurement of the acceleration applied to a sample within a centrifuge and it is measured in units of gravity (times gravity or × "g"). It is given by

    RCF = a / g = (4 π² N² r) / g

where
    g is earth's gravitational acceleration,
    r is the rotational radius,
    N is the rotational speed, measured in revolutions per unit of time.
When the rotational speed is given in revolutions per minute (RPM) and the rotational radius is expressed in centimeters (cm), the above relationship becomes

    RCF = 1.118 × 10⁻⁵ × r × N²

where
    r is the rotational radius measured in centimeters (cm),
    N is the rotational speed measured in revolutions per minute (RPM).
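The RPM/centimetre form of the RCF relation described above is easy to evaluate, and to invert when a protocol specifies a target g-force. A short sketch (1.118 × 10⁻⁵ is the standard constant for r in cm and N in RPM):

```python
import math

def rcf(radius_cm, rpm):
    """Relative centrifugal force (multiples of g) from rotor radius and speed:
    RCF = 1.118e-5 * r * N^2, with r in cm and N in RPM."""
    return 1.118e-5 * radius_cm * rpm**2

def rpm_for_rcf(radius_cm, target_rcf):
    """Speed (RPM) needed to reach a target RCF at the given radius."""
    return math.sqrt(target_rcf / (1.118e-5 * radius_cm))

print(round(rcf(10.0, 13000)))   # roughly 1.9e4 g at 13,000 RPM, 10 cm radius
```

This also makes the point from the theory section concrete: the same 13,000 RPM on a rotor of half the radius delivers only half the RCF, which is why protocols quote g rather than RPM.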

History and predecessors
English military engineer Benjamin Robins (1707–1751) invented a whirling arm apparatus to determine drag; early laboratory centrifuges of the 19th century were hand-cranked. In 1864, Antonin Prandtl invented the first dairy centrifuge in order to separate cream from milk. In 1879, Gustaf de Laval demonstrated the first continuous centrifugal separator, making its commercial application feasible.

Types
There are at least five types of centrifuge:
    preparative centrifuge
    analytical centrifuge
    angle fixed centrifuge
    swing head centrifuge
    haematocrit centrifuge
Industrial centrifuges may otherwise be classified according to the type of separation of the high density fraction from the low density one:
    Screen centrifuges, where the centrifugal acceleration allows the liquid to pass through a screen of some sort, through which the solids cannot go (due to granulometry larger than the screen gap or due to agglomeration). Common types are:
    Pusher centrifuges
    Peeler centrifuges
    Decanter centrifuges, in which there is no physical separation between the solid and liquid phase, rather an accelerated settling due to centrifugal acceleration. Common types are:
    Solid bowl centrifuges
    Conical plate centrifuges

Centrifugation
Centrifugation is a process that involves the use of the centrifugal force for the separation of mixtures, used in industry and in laboratory settings. More-dense components of the mixture migrate away from the axis of the centrifuge, while less-dense components of the mixture migrate towards the axis. Chemists and biologists may increase the effective gravitational force on a test tube so as to more rapidly and completely cause the precipitate ("pellet") to gather on the bottom of the tube. The remaining solution is properly called the "supernate" or "supernatant liquid". The supernatant liquid is then either quickly decanted from the tube without disturbing the precipitate, or withdrawn with a Pasteur pipette.
The rate of centrifugation is specified by the acceleration applied to the sample, typically measured in revolutions per minute (RPM) or g. The particles' settling velocity in centrifugation is a function of their size and shape, centrifugal acceleration, the volume fraction of solids present, the density difference between the particle and the liquid, and the viscosity.
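For dilute suspensions of small spherical particles, these dependencies can be sketched with Stokes' law, substituting the centrifugal acceleration for g. The function and example values below are illustrative and assume the laminar (Stokes) regime:

```python
import math

def settling_velocity(d_m: float, rho_p: float, rho_f: float,
                      eta: float, rpm: float, radius_m: float) -> float:
    """Stokes-regime settling velocity (m/s) of a sphere in a spinning rotor.

    d_m: particle diameter (m); rho_p, rho_f: particle and fluid densities (kg/m^3);
    eta: fluid viscosity (Pa·s); rpm, radius_m: rotor speed and radius.
    Assumes a dilute suspension and laminar flow around the particle.
    """
    omega = 2 * math.pi * rpm / 60        # angular velocity, rad/s
    accel = radius_m * omega ** 2         # centrifugal acceleration, m/s^2
    return d_m ** 2 * (rho_p - rho_f) * accel / (18 * eta)

# A 1 µm particle (density 1100 kg/m^3) in water at 10,000 rpm, 8 cm radius:
v = settling_velocity(1e-6, 1100, 1000, 1.0e-3, 10000, 0.08)
```

Note the square law: doubling the particle diameter quadruples the settling velocity, which is why very small particles such as ribosomes and viruses demand the far higher speeds of ultracentrifugation.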
In the chemical and food industries, special centrifuges can process a continuous stream of particle-laden liquid.
Gas centrifugation is the most common method used for uranium enrichment; it relies on the slight mass difference between uranium-238 and uranium-235 atoms in uranium hexafluoride gas.

Centrifugation in biotechnology
Microcentrifuges and super speed centrifuges
In microcentrifugation, centrifuges are run in batch to isolate small volumes of biological molecules or cells (prokaryotic and eukaryotic). Nuclei are also often purified via microcentrifugation. Microcentrifuge tubes generally hold 1.5-2 mL of liquid and are spun at maximum angular speeds of 12,000-13,000 rpm. Microcentrifuges are small and have rotors that can quickly change speeds. Super speed centrifuges work on the same principle but at a larger scale: they are also used for purifying cells and nuclei, but in larger quantities, typically 25-30 mL of solution per tube. Additionally, these larger centrifuges reach higher angular velocities (around 30,000 rpm) and use a larger rotor.

Ultracentrifugation
Ultracentrifugation makes use of high centrifugal force for studying properties of biological particles. While microcentrifugation and super speed centrifugation are used strictly to purify cells and nuclei, ultracentrifugation can isolate much smaller particles, including ribosomes, proteins, and viruses. Ultracentrifuges can also be used in the study of membrane fractionation. This is possible because ultracentrifuges can reach maximum angular velocities in excess of 70,000 rpm. Additionally, while microcentrifuges and super speed centrifuges separate particles in batch, ultracentrifuges can separate molecules in both batch and continuous-flow systems.
In addition to purification, analytical ultracentrifugation (AUC) can be used for determination of macromolecular properties, including the amino acid composition of a protein, the protein's current conformation, or properties of that conformation. In analytical ultracentrifuges, concentration of solute is measured using optical calibrations. For low concentrations, the Beer-Lambert law can be used to measure the concentration. Analytical ultracentrifuges can be used to simulate physiological conditions (correct pH and temperature).
In analytical ultracentrifuges, molecular properties can be modeled through sedimentation velocity analysis or sedimentation equilibrium analysis. In sedimentation velocity analysis, concentrations and solute properties are modeled continuously over time. Sedimentation velocity analysis can be used to determine the macromolecule's shape, mass, composition, and conformational properties. During sedimentation equilibrium analysis, centrifugation has stopped and particle movement is based on diffusion. This allows for modeling of the mass of the particle as well as the chemical equilibrium properties of interacting solutes.
Centrifugation analysis
Lamm equation
Particle dispersion and sedimentation can be analyzed using the Lamm equation. The calculation of the sedimentation coefficient and diffusion coefficient is useful for determining the physical properties of the molecule, including shape and conformational changes. However, the Lamm equation is best suited to modeling concentrations of ideal, non-interacting solutes; chemical reactions are unaccounted for by this equation. Additionally, for large-molecular-weight particles, sedimentation is not always smooth. This may lead to overestimation of the diffusion coefficient, or to oscillation effects at the bottom of a solution cell.
Sigma analysis
Sigma analysis is a useful tool for determining centrifuge properties. It is similar to the continuity equation that relates volumetric flow rate Q, fluid velocity u, and flow-path cross-sectional area A:
Q = uA
In the case of sigma analysis, u is replaced by vg, the settling velocity at a centripetal acceleration of g (9.81 m/s²); Σ replaces the area and is a property of the type of centrifuge; and Q is the input fluid flow rate. Σ has the same units as area.
Q = 2vgΣ
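A minimal sketch of this throughput relation (the example values for vg and Σ are hypothetical):

```python
def max_throughput(v_g: float, sigma_m2: float) -> float:
    """Maximum clarified flow rate Q (m^3/s) from sigma theory: Q = 2 * v_g * Sigma.

    v_g: settling velocity of the cut-size particle at 1 g (m/s)
    sigma_m2: the machine's sigma factor (m^2), an equivalent settling area
    """
    return 2 * v_g * sigma_m2

# e.g. v_g = 1e-6 m/s and Sigma = 5000 m^2:
Q = max_throughput(1e-6, 5000)  # ≈ 0.01 m^3/s
```

Because Σ depends only on the machine's geometry and speed, it allows performance to be compared, or scaled up, between centrifuges of the same type.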

Application
Isolating suspensions
Simple centrifuges are used in chemistry, biology, and biochemistry for isolating and separating suspensions. They vary widely in speed and capacity. They usually comprise a rotor containing two, four, six, or many more numbered wells within which the centrifuge tubes containing the samples may be placed.
Isotope separation
Other centrifuges, the first being the Zippe-type centrifuge, separate isotopes, and these kinds of centrifuges are in use in nuclear power and nuclear weapon programs.
Gas centrifuges are used in uranium enrichment. The heavier isotope of uranium (uranium-238) in the uranium hexafluoride gas tends to concentrate at the walls of the centrifuge as it spins, while the desired uranium-235 isotope is extracted and concentrated with a scoop selectively placed inside the centrifuge. It takes many thousands of centrifuges to enrich uranium enough for use in a nuclear reactor (around 3.5% enrichment), and many thousands more to enrich it to weapons-grade (around 90% enrichment) for use in nuclear weapons.

Aeronautics and astronautics
Human centrifuges are exceptionally large centrifuges that test the reactions and tolerance of pilots and astronauts to accelerations above those experienced in the Earth's gravity.
    The US Air Force at Holloman Air Force Base, NM operates a human centrifuge. The centrifuge at Holloman AFB is operated by the aerospace physiology department for the purpose of training and evaluating prospective fighter pilots for high-g flight in Air Force fighter aircraft.
The use of large centrifuges to simulate a feeling of gravity has been proposed for future long-duration space missions. Exposure to this simulated gravity would prevent or reduce the bone decalcification and muscle atrophy that affect individuals exposed to long periods of freefall.

Earthquake and blast simulation
The geotechnical centrifuge is used for simulating blasts and earthquake phenomena.

Commercial applications
    Centrifuges with a batch weight of up to 2,200 kg per charge are used in the sugar industry to separate the sugar crystals from the mother liquor.
    Standalone centrifuges for drying (hand-washed) clothes – usually with a water outlet.
    Centrifuges are used in the attraction Mission: SPACE, located at Epcot in Walt Disney World, which propels riders using a combination of a centrifuge and a motion simulator to simulate the feeling of going into space.
    In soil mechanics, centrifuges utilize centrifugal acceleration to match soil stresses in a scale model to those found in reality.
    Large industrial centrifuges are commonly used in water and wastewater treatment to dry sludges. The resulting dry product is often termed cake, and the water leaving a centrifuge after most of the solids have been removed is called centrate.
    Large industrial centrifuges are also used in the oil industry to remove solids from the drilling fluid.
    Disc-stack centrifuges are used by some companies in the oil sands industry to separate small amounts of water and solids from bitumen.
    Centrifuges are used to separate cream (remove fat) from milk.

*******************  
4.6 Spectroscopic techniques
Spectroscopy was originally the study of the interaction between radiation and matter as a function of wavelength (λ). Historically, spectroscopy referred to the use of visible light dispersed according to its wavelength, e.g. by a prism. Later the concept was expanded greatly to comprise any measurement of a quantity as a function of either wavelength or frequency. Thus, it also can refer to a response to an alternating field or varying frequency (ν). A further extension of the scope of the definition added energy (E) as a variable, once the very close relationship E = hν for photons was realized (h is the Planck constant). A plot of the response as a function of wavelength—or more commonly frequency—is referred to as a spectrum; see also spectral line width. 
Spectrometry is the spectroscopic technique used to assess the concentration or amount of a given chemical (atomic, molecular, or ionic) species. In this case, the instrument that performs such measurements is a spectrometer, spectrophotometer, or spectrograph.
Spectroscopy/spectrometry is often used in physical and analytical chemistry for the identification of substances through the spectrum emitted from or absorbed by them. Spectroscopy/spectrometry is also heavily used in astronomy and remote sensing. Most large telescopes have spectrometers, which are used either to measure the chemical composition and physical properties of astronomical objects or to measure their velocities from the Doppler shift of their spectral lines.
Classification of methods
1. Nature of excitation measured
The type of spectroscopy depends on the physical quantity measured. Normally, the quantity measured is an intensity, either of energy absorbed or of energy produced.
    Electromagnetic spectroscopy involves interactions of matter with electromagnetic radiation, such as light.
    Electron spectroscopy involves interactions with electron beams. Auger spectroscopy involves inducing the Auger effect with an electron beam. In this case the measurement typically involves the kinetic energy of the electron as variable.
    Acoustic spectroscopy involves the frequency of sound.
    Dielectric spectroscopy involves the frequency of an external electrical field.
    Mechanical spectroscopy involves the frequency of an external mechanical stress, e.g. a torsion applied to a piece of material.

2. Measurement process
Most spectroscopic methods are differentiated as either atomic or molecular, based on whether they apply to atoms or to molecules. Along with that distinction, they can be classified by the nature of their interaction:
    Absorption spectroscopy uses the range of the electromagnetic spectra in which a substance absorbs. This includes atomic absorption spectroscopy and various molecular techniques, such as infrared, ultraviolet-visible and microwave spectroscopy.
    Emission spectroscopy uses the range of electromagnetic spectra in which a substance radiates (emits). The substance first must absorb energy. This energy can be from a variety of sources, which determines the name of the subsequent emission, like luminescence. Molecular luminescence techniques include spectrofluorimetry.
    Scattering spectroscopy measures the amount of light that a substance scatters at certain wavelengths, incident angles, and polarization angles. One of the most useful applications of light scattering spectroscopy is Raman spectroscopy.
Types
1. Ultraviolet-visible spectroscopy or ultraviolet-visible spectrophotometry (UV-Vis or UV/Vis):

Ultraviolet-visible spectroscopy or ultraviolet-visible spectrophotometry (UV-Vis or UV/Vis) refers to absorption spectroscopy in the ultraviolet-visible spectral region. This means it uses light in the visible and adjacent (near-UV and near-infrared (NIR)) ranges. The absorption in the visible range directly affects the perceived color of the chemicals involved. In this region of the electromagnetic spectrum, molecules undergo electronic transitions. This technique is complementary to fluorescence spectroscopy, in that fluorescence deals with transitions from the excited state to the ground state, while absorption measures transitions from the ground state to the excited state.
Beer-Lambert law
The method is most often used in a quantitative way to determine concentrations of an absorbing species in solution, using the Beer-Lambert law:

A = log10(I0 / I) = εcL

where A is the measured absorbance, I0 is the intensity of the incident light at a given wavelength, I is the transmitted intensity, L the path length through the sample, and c the concentration of the absorbing species. For each species and wavelength, ε is a constant known as the molar absorptivity or extinction coefficient. This constant is a fundamental molecular property in a given solvent, at a particular temperature and pressure, and has units of 1/(M·cm) or often AU/(M·cm).
The absorbance and extinction ε are sometimes defined in terms of the natural logarithm instead of the base-10 logarithm.
The Beer-Lambert Law is useful for characterizing many compounds but does not hold as a universal relationship for the concentration and absorption of all substances. A 2nd order polynomial relationship between absorption and concentration is sometimes encountered for very large, complex molecules such as organic dyes (Xylenol Orange or Neutral Red, for example).
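Within the linear range, the quantitative use of the Beer-Lambert law is a one-line calculation. A short sketch in Python; the NADH extinction coefficient in the example (about 6220 1/(M·cm) at 340 nm) is a widely tabulated value:

```python
import math

def absorbance(i0: float, i: float) -> float:
    """Absorbance from incident (i0) and transmitted (i) light intensities."""
    return math.log10(i0 / i)

def concentration(a: float, epsilon: float, path_cm: float = 1.0) -> float:
    """Solve Beer-Lambert A = epsilon * c * L for the concentration c (mol/L)."""
    return a / (epsilon * path_cm)

# An NADH solution reading A = 0.622 at 340 nm in a 1 cm cuvette:
c = concentration(0.622, 6220)  # ≈ 1.0e-4 M (0.1 mM)
```

Outside the linear range (very high absorbance, stray light, or the polynomial behaviour described above for large dye molecules), this simple inversion no longer applies.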
Practical considerations
The Beer-Lambert law has implicit assumptions that must be met experimentally for it to apply. For instance, the chemical makeup and physical environment of the sample can alter its extinction coefficient. The chemical and physical conditions of a test sample therefore must match reference measurements for conclusions to be valid.


Schematic diagram of UV/Vis spectrophotometer

Spectral bandwidth
A given spectrometer has a spectral bandwidth that characterizes how monochromatic the light is. If this bandwidth is comparable to the width of the absorption features, then the measured extinction coefficient will be altered. In most reference measurements, the instrument bandwidth is kept below the width of the spectral lines. When a new material is being measured, it may be necessary to test and verify if the bandwidth is sufficiently narrow. Reducing the spectral bandwidth will reduce the energy passed to the detector and will, therefore, require a longer measurement time to achieve the same signal to noise ratio.
Wavelength error
In liquids, the extinction coefficient usually changes slowly with wavelength. A peak of the absorbance curve (a wavelength where the absorbance reaches a maximum) is where the rate of change of absorbance with wavelength is smallest. Measurements are therefore usually made at a peak, to minimize the error produced by wavelength errors in the instrument: away from the peak, a small wavelength offset means measuring with a different extinction coefficient than assumed.
Stray light
Another important factor is the purity of the light used. The most important factor affecting this is the stray light level of the monochromator. The detector used is broadband; it responds to all the light that reaches it. If a significant amount of the light passed through the sample contains wavelengths that have much lower extinction coefficients than the nominal one, the instrument will report an incorrectly low absorbance. Any instrument will reach a point where an increase in sample concentration will not result in an increase in the reported absorbance, because the detector is simply responding to the stray light. In practice the concentration of the sample or the optical path length must be adjusted to place the unknown absorbance within a range that is valid for the instrument. Sometimes an empirical calibration function is developed, using known concentrations of the sample, to allow measurements into the region where the instrument is becoming non-linear.
As a rough guide, an instrument with a single monochromator would typically have a stray light level corresponding to about 3 AU, which would make measurements above about 2 AU problematic. A more complex instrument with a double monochromator would have a stray light level corresponding to about 6 AU, which would therefore allow measuring a much wider absorbance range.
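The effect of stray light on reported absorbance can be illustrated with a simple model in which a fixed fraction of the reference intensity bypasses the sample; the model and numbers below are illustrative only:

```python
import math

def apparent_absorbance(true_a: float, stray_fraction: float) -> float:
    """Absorbance an instrument reports when stray light reaches the detector.

    Model: sample beam = I0 * 10**(-true_a) + s, reference beam = I0 + s,
    with s = stray_fraction * I0.
    """
    transmitted = 10 ** (-true_a) + stray_fraction
    return -math.log10(transmitted / (1 + stray_fraction))

# With 0.1% stray light, a true absorbance of 3 reads noticeably low:
print(round(apparent_absorbance(3.0, 0.001), 2))  # → 2.7
```

As the true absorbance grows, the reported value saturates near −log10 of the stray-light fraction, which is why single-monochromator instruments become unreliable above roughly 2 AU.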
Absorption flattening
At sufficiently high concentrations, the absorption bands will saturate and show absorption flattening. The absorption peak appears to flatten because close to 100% of the light is already being absorbed. The concentration at which this occurs depends on the particular compound being measured. One test that can be used to test for this effect is to vary the path length of the measurement. In the Beer-Lambert law, varying concentration and path length has an equivalent effect—diluting a solution by a factor of 10 has the same effect as shortening the path length by a factor of 10. If cells of different path lengths are available, testing if this relationship holds true is one way to judge if absorption flattening is occurring.
Solutions that are not homogeneous can show deviations from the Beer-Lambert law because of the phenomenon of absorption flattening. This can happen, for instance, where the absorbing substance is located within suspended particles.[3] The deviations will be most noticeable under conditions of low concentration and high absorbance. The reference describes a way to correct for this deviation.
Ultraviolet-visible spectrophotometer
The instrument used in ultraviolet-visible spectroscopy is called a UV/Vis spectrophotometer. It measures the intensity of light passing through a sample (I), and compares it to the intensity of light before it passes through the sample (Io). The ratio I / Io is called the transmittance, and is usually expressed as a percentage (%T). The absorbance, A, is based on the transmittance:
A = − log(%T / 100%)
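This transmittance-absorbance relationship is straightforward to express in code:

```python
import math

def absorbance_from_percent_t(percent_t: float) -> float:
    """A = -log10(%T / 100)."""
    return -math.log10(percent_t / 100.0)

def percent_t_from_absorbance(a: float) -> float:
    """Invert the relation: %T = 100 * 10**(-A)."""
    return 100.0 * 10 ** (-a)

# 10% transmittance corresponds to an absorbance of 1,
# and 1% transmittance to an absorbance of 2.
print(absorbance_from_percent_t(10.0))  # → 1.0
```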
The basic parts of a spectrophotometer are a light source, a holder for the sample, a diffraction grating or monochromator to separate the different wavelengths of light, and a detector. The radiation source is often a tungsten filament (300-2500 nm); a deuterium arc lamp, which is continuous over the ultraviolet region (190-400 nm); or, more recently, light emitting diodes (LED) and xenon arc lamps[4] for the visible wavelengths. The detector is typically a photodiode or a CCD. Photodiodes are used with monochromators, which filter the light so that only light of a single wavelength reaches the detector. Diffraction gratings are used with CCDs, which collect light of different wavelengths on different pixels.


Diagram of a single-beam UV/Vis spectrophotometer

A spectrophotometer can be either single beam or double beam. In a single beam instrument (such as the Spectronic 20), all of the light passes through the sample cell. Io must be measured by removing the sample. This was the earliest design, but is still in common use in both teaching and industrial labs.
In a double-beam instrument, the light is split into two beams before it reaches the sample. One beam is used as the reference; the other beam passes through the sample. The reference beam intensity is taken as 100% Transmission (or 0 Absorbance), and the measurement displayed is the ratio of the two beam intensities. Some double-beam instruments have two detectors (photodiodes), and the sample and reference beam are measured at the same time. In other instruments, the two beams pass through a beam chopper, which blocks one beam at a time. The detector alternates between measuring the sample beam and the reference beam in synchronism with the chopper. There may also be one or more dark intervals in the chopper cycle. In this case the measured beam intensities may be corrected by subtracting the intensity measured in the dark interval before the ratio is taken.
Samples for UV/Vis spectrophotometry are most often liquids, although the absorbance of gases and even of solids can also be measured. Samples are typically placed in a transparent cell, known as a cuvette. Cuvettes are typically rectangular in shape, commonly with an internal width of 1 cm. (This width becomes the path length, L, in the Beer-Lambert law.) Test tubes can also be used as cuvettes in some instruments. The type of sample container used must allow radiation to pass over the spectral region of interest. The most widely applicable cuvettes are made of high quality fused silica or quartz glass because these are transparent throughout the UV, visible and near infrared regions. Glass and plastic cuvettes are also common, although glass and most plastics absorb in the UV, which limits their usefulness to visible wavelengths.
A complete spectrum of the absorption at all wavelengths of interest can often be produced directly by a more sophisticated spectrophotometer. In simpler instruments the absorption is determined one wavelength at a time and then compiled into a spectrum by the operator. A standardized spectrum is formed by removing the concentration dependence and determining the extinction coefficient (ε) as a function of wavelength.

Applications
UV/Vis spectroscopy is routinely used in the quantitative determination of solutions of transition metal ions and highly conjugated organic compounds.
    Solutions of transition metal ions can be colored (i.e., absorb visible light) because d electrons within the metal atoms can be excited from one electronic state to another. The colour of metal ion solutions is strongly affected by the presence of other species, such as certain anions or ligands. For instance, the colour of a dilute solution of copper sulfate is a very light blue; adding ammonia intensifies the colour and changes the wavelength of maximum absorption (λmax).
    Organic compounds, especially those with a high degree of conjugation, also absorb light in the UV or visible regions of the electromagnetic spectrum. The solvents for these determinations are often water for water soluble compounds, or ethanol for organic-soluble compounds. (Organic solvents may have significant UV absorption; not all solvents are suitable for use in UV spectroscopy. Ethanol absorbs very weakly at most wavelengths.) Solvent polarity and pH can affect the absorption spectrum of an organic compound. Tyrosine, for example, increases in absorption maxima and molar extinction coefficient when pH increases from 6 to 13 or when solvent polarity decreases.
    While charge transfer complexes also give rise to colours, the colours are often too intense to be used for quantitative measurement.
************

2. X-ray crystallography

X-ray crystallography is a method of determining the arrangement of atoms within a crystal, in which a beam of X-rays strikes a crystal and diffracts into many specific directions. From the angles and intensities of these diffracted beams, a crystallographer can produce a three-dimensional picture of the density of electrons within the crystal. From this electron density, the mean positions of the atoms in the crystal can be determined, as well as their chemical bonds, their disorder and various other information.
Since many materials can form crystals — such as salts, metals, minerals, semiconductors, as well as various inorganic, organic and biological molecules — X-ray crystallography has been fundamental in the development of many scientific fields. In its first decades of use, this method determined the size of atoms, the lengths and types of chemical bonds, and the atomic-scale differences among various materials, especially minerals and alloys. The method also revealed the structure and functioning of many biological molecules, including vitamins, drugs, proteins and nucleic acids such as DNA. X-ray crystallography is still the chief method for characterizing the atomic structure of new materials and in discerning materials that appear similar by other experiments. X-ray crystal structures can also account for unusual electronic or elastic properties of a material, shed light on chemical interactions and processes, or serve as the basis for designing pharmaceuticals against diseases.
In an X-ray diffraction measurement, a crystal is mounted on a goniometer and gradually rotated while being bombarded with X-rays, producing a diffraction pattern of regularly spaced spots known as reflections. The two-dimensional images taken at different rotations are converted into a three-dimensional model of the density of electrons within the crystal using the mathematical method of Fourier transforms, combined with chemical data known for the sample. Poor resolution (fuzziness) or even errors may result if the crystals are too small, or not uniform enough in their internal makeup.
X-ray crystallography is related to several other methods for determining atomic structures. Similar diffraction patterns can be produced by scattering electrons or neutrons, which are likewise interpreted as a Fourier transform. If single crystals of sufficient size cannot be obtained, various other X-ray methods can be applied to obtain less detailed information; such methods include fiber diffraction, powder diffraction and small-angle X-ray scattering (SAXS). If the material under investigation is only available in the form of nanocrystalline powders or suffers from poor crystallinity, the methods of electron crystallography can be applied for determining the atomic structure.
For all above mentioned X-ray diffraction methods, the scattering is elastic; the scattered X-rays have the same wavelength as the incoming X-ray. By contrast, inelastic X-ray scattering methods are useful in studying excitations of the sample, rather than the distribution of its atoms.
History
Crystals have long been admired for their regularity and symmetry, but they were not investigated scientifically until the 17th century. Johannes Kepler hypothesized in his work Strena seu de Nive Sexangula (1611) that the hexagonal symmetry of snowflake crystals was due to a regular packing of spherical water particles.
Crystal symmetry was first investigated experimentally by Nicolas Steno (1669), who showed that the angles between the faces are the same in every exemplar of a particular type of crystal, and by René Just Haüy (1784), who discovered that every face of a crystal can be described by simple stacking patterns of blocks of the same shape and size. Hence, William Hallowes Miller in 1839 was able to give each face a unique label of three small integers, the Miller indices which are still used today for identifying crystal faces. In the 19th century, a complete catalog of the possible symmetries of a crystal was worked out by Johann Hessel, Auguste Bravais, Yevgraf Fyodorov, Arthur Schönflies and (belatedly) William Barlow. From the available data and physical reasoning, Barlow proposed several crystal structures in the 1880s that were validated later by X-ray crystallography; however, the available data were too scarce in the 1880s to accept his models as conclusive.

X-rays were discovered by Wilhelm Conrad Röntgen in 1895, just as the studies of crystal symmetry were being concluded. Physicists were initially uncertain of the nature of X-rays, although it was soon suspected (correctly) that they were waves of electromagnetic radiation, in other words, another form of light. At that time, the wave model of light (specifically, the Maxwell theory of electromagnetic radiation) was well accepted among scientists, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Single-slit experiments in the laboratory of Arnold Sommerfeld suggested the wavelength of X-rays was about 1 Angstrom. However, X-rays are composed of photons, and thus are not only waves of electromagnetic radiation but also exhibit particle-like properties. The photon concept was introduced by Albert Einstein in 1905, but it was not broadly accepted until 1922, when Arthur Compton confirmed it by the scattering of X-rays from electrons. These particle-like properties of X-rays, such as their ionization of gases, had caused William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation. Nevertheless, Bragg's view was not broadly accepted, and the observation of X-ray diffraction in 1912 confirmed for most scientists that X-rays were a form of electromagnetic radiation.
X-ray analysis of crystals
Crystals are regular arrays of atoms, and X-rays can be considered waves of electromagnetic radiation. Atoms scatter X-ray waves, primarily through the atoms' electrons. Just as an ocean wave striking a lighthouse produces secondary circular waves emanating from the lighthouse, so an X-ray striking an electron produces secondary spherical waves emanating from the electron. This phenomenon is known as elastic scattering, and the electron (or lighthouse) is known as the scatterer. A regular array of scatterers produces a regular array of spherical waves. Although these waves cancel one another out in most directions through destructive interference, they add constructively in a few specific directions, determined by Bragg's law:

2d sin θ = nλ

Here d is the spacing between diffracting planes, θ is the incident angle, n is an integer, and λ is the wavelength of the beam.
X-rays are used to produce the diffraction pattern because their wavelength λ is typically the same order of magnitude (1-100 Angstroms) as the spacing d between planes in the crystal. In principle, any wave impinging on a regular array of scatterers produces diffraction, as predicted first by Francesco Maria Grimaldi in 1665. To produce significant diffraction, the spacing between the scatterers and the wavelength of the impinging wave should be similar in size. For illustration, the diffraction of sunlight through a bird's feather was first reported by James Gregory in the later 17th century. The first artificial diffraction gratings for visible light were constructed by David Rittenhouse in 1787 and Joseph von Fraunhofer in 1821. However, visible light has too long a wavelength to observe diffraction from crystals. Prior to the first X-ray diffraction experiments, the spacings between lattice planes in a crystal were not known with certainty.
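Bragg's law is easy to evaluate numerically. The sketch below uses the Cu Kα wavelength (about 1.5418 Angstroms), a common laboratory X-ray source; the plane spacing in the example is arbitrary:

```python
import math

def bragg_angle_deg(d_spacing: float, wavelength: float, n: int = 1) -> float:
    """Diffraction angle theta (degrees) from Bragg's law: n*lambda = 2*d*sin(theta).

    d_spacing and wavelength must be in the same units (e.g. Angstroms).
    """
    s = n * wavelength / (2 * d_spacing)
    if not 0 < s <= 1:
        raise ValueError("no diffraction: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# Planes spaced 3.0 Angstroms apart, Cu K-alpha radiation (1.5418 Angstroms):
theta = bragg_angle_deg(3.0, 1.5418)  # ≈ 14.9 degrees
```

The guard clause reflects the physical requirement mentioned above: if the wavelength is much longer than the plane spacing (as for visible light on a crystal), no diffraction angle exists.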

Schematic diagram of X-ray crystallography

The X-ray scattering is determined by the density of electrons within the crystal. Since the energy of an X-ray is much greater than that of a valence electron, the scattering may be modeled as Thomson scattering, the interaction of an electromagnetic ray with a free electron. This model is generally adopted to describe the polarization of the scattered radiation. The intensity of Thomson scattering declines as 1/m² with the mass m of the charged particle that is scattering the radiation; hence, the atomic nuclei, which are thousands of times heavier than an electron, contribute negligibly to the scattered X-rays.
Mineralogy and metallurgy
Since the 1920s, X-ray diffraction has been the principal method for determining the arrangement of atoms in minerals and metals. The application of X-ray crystallography to mineralogy began with the structure of garnet, which was determined in 1924 by Menzer. A systematic X-ray crystallographic study of the silicates was undertaken in the 1920s. This study showed that, as the Si/O ratio is altered, the silicate crystals exhibit significant changes in their atomic arrangements. Machatschki extended these insights to minerals in which aluminium substitutes for the silicon atoms of the silicates. The first application of X-ray crystallography to metallurgy likewise occurred in the mid-1920s. Most notably, Linus Pauling's structure of the alloy Mg2Sn led to his theory of the stability and structure of complex ionic crystals.
Early organic and small biological molecules
The first structure of an organic compound, hexamethylenetetramine, was solved in 1923. This was followed by several studies of long-chain fatty acids, which are an important component of biological membranes. In the 1930s, the structures of much larger molecules with two-dimensional complexity began to be solved. A significant advance was the structure of phthalocyanine, a large planar molecule that is closely related to porphyrin molecules important in biology, such as heme, corrin and chlorophyll.

X-ray crystallography of biological molecules took off with Dorothy Crowfoot Hodgkin, who solved the structures of cholesterol (1937), penicillin (1946) and vitamin B12 (1956), for which she was awarded the Nobel Prize in Chemistry in 1964. In 1969, she succeeded in solving the structure of insulin, on which she worked for over thirty years.
Biological macromolecular crystallography
Crystal structures of proteins (which are irregular and hundreds of times larger than cholesterol) began to be solved in the late 1950s, beginning with the structure of sperm whale myoglobin by Max Perutz and Sir John Cowdery Kendrew, for which they were awarded the Nobel Prize in Chemistry in 1962. Since that success, over 48970 X-ray crystal structures of proteins, nucleic acids and other biological molecules have been determined. For comparison, the nearest competing method in terms of structures analyzed is nuclear magnetic resonance (NMR) spectroscopy, which has resolved 7806 chemical structures.[84] Moreover, crystallography can solve structures of arbitrarily large molecules, whereas solution-state NMR is restricted to relatively small ones (less than 70 kDa). X-ray crystallography is now used routinely by scientists to determine how a pharmaceutical drug interacts with its protein target and what changes might improve it.[85] However, intrinsic membrane proteins remain challenging to crystallize because they require detergents or other means to solubilize them in isolation, and such detergents often interfere with crystallization. Such membrane proteins are a large component of the genome and include many proteins of great physiological importance, such as ion channels and receptors.

Other X-ray techniques
Other forms of elastic X-ray scattering include powder diffraction, SAXS and several types of X-ray fiber diffraction, which was used by Rosalind Franklin in determining the double-helix structure of DNA. In general, single-crystal X-ray diffraction offers more structural information than these other techniques; however, it requires a sufficiently large and regular crystal, which is not always available.

These scattering methods generally use monochromatic X-rays, which are restricted to a single wavelength with minor deviations. A broad spectrum of X-rays (that is, a blend of X-rays with different wavelengths) can also be used to carry out X-ray diffraction, a technique known as the Laue method. This is the method used in the original discovery of X-ray diffraction. Laue scattering provides much structural information with only a short exposure to the X-ray beam, and is therefore used in structural studies of very rapid events (Time resolved crystallography). However, it is not as well-suited as monochromatic scattering for determining the full atomic structure of a crystal and therefore works better with crystals with relatively simple atomic arrangements.

The Laue back reflection mode records X-rays scattered backwards from a broad spectrum source. This is useful if the sample is too thick for X-rays to transmit through it. The diffracting planes in the crystal are determined by knowing that the normal to the diffracting plane bisects the angle between the incident beam and the diffracted beam. A Greninger chart can be used to interpret the back reflection Laue photograph.
Methods
Overview of single-crystal X-ray diffraction




The oldest and most precise method of X-ray crystallography is single-crystal X-ray diffraction, in which a beam of X-rays strikes a single crystal, producing scattered beams. When they land on a piece of film or other detector, these beams make a diffraction pattern of spots; the strengths and angles of these beams are recorded as the crystal is gradually rotated. Each spot is called a reflection, since it corresponds to the reflection of the X-rays from one set of evenly spaced planes within the crystal. For single crystals of sufficient purity and regularity, X-ray diffraction data can determine the mean chemical bond lengths and angles to within a few thousandths of an Angstrom and to within a few tenths of a degree, respectively. The atoms in a crystal are not static, but oscillate about their mean positions, usually by less than a few tenths of an Angstrom. X-ray crystallography allows measuring the size of these oscillations.
Procedure
The technique of single-crystal X-ray crystallography has three basic steps. The first — and often most difficult — step is to obtain an adequate crystal of the material under study. The crystal should be sufficiently large (typically larger than 0.1 mm in all dimensions), pure in composition and regular in structure, with no significant internal imperfections such as cracks or twinning.
In the second step, the crystal is placed in an intense beam of X-rays, usually of a single wavelength (monochromatic X-rays), producing the regular pattern of reflections. As the crystal is gradually rotated, previous reflections disappear and new ones appear; the intensity of every spot is recorded at every orientation of the crystal. Multiple data sets may have to be collected, with each set covering slightly more than half a full rotation of the crystal and typically containing tens of thousands of reflections.
In the third step, these data are combined computationally with complementary chemical information to produce and refine a model of the arrangement of atoms within the crystal. The final, refined model of the atomic arrangement — now called a crystal structure — is usually stored in a public database.
Data analysis
Crystal symmetry, unit cell, and image scaling
The recorded series of two-dimensional diffraction patterns, each corresponding to a different crystal orientation, is converted into a three-dimensional model of the electron density; the conversion uses the mathematical technique of Fourier transforms, which is explained below. Each spot corresponds to a different type of variation in the electron density; the crystallographer must determine which variation corresponds to which spot (indexing), the relative strengths of the spots in different images (merging and scaling) and how the variations should be combined to yield the total electron density (phasing).
Data processing begins with indexing the reflections. This means identifying the dimensions of the unit cell and which image peak corresponds to which position in reciprocal space. A byproduct of indexing is to determine the symmetry of the crystal, i.e., its space group. Some space groups can be eliminated from the beginning. For example, reflection symmetries cannot be observed in chiral molecules; thus, only 65 space groups of 230 possible are allowed for protein molecules, which are almost always chiral.
A full data set may consist of hundreds of separate images taken at different orientations of the crystal. The first step is to merge and scale these various images, that is, to identify which peaks appear in two or more images (merging) and to scale the relative images so that they have a consistent intensity scale. Optimizing the intensity scale is critical because the relative intensity of the peaks is the key information from which the structure is determined.
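As a toy sketch of the merging and scaling step (real data-reduction programs refine the per-image scale factors by least squares; the names here are illustrative):

```python
from collections import defaultdict

def merge_and_scale(observations, image_scales):
    """observations: list of (hkl, image_id, raw_intensity).
    image_scales: per-image multiplicative factors that bring all images
    onto one common intensity scale (assumed already refined here).
    Returns the merged intensity per unique hkl (mean of scaled values)."""
    groups = defaultdict(list)
    for hkl, img, intensity in observations:
        groups[hkl].append(intensity * image_scales[img])
    return {hkl: sum(v) / len(v) for hkl, v in groups.items()}

# the same reflection seen in two images taken with different beam doses:
merged = merge_and_scale([((1, 0, 0), 0, 100.0), ((1, 0, 0), 1, 50.0)],
                         {0: 1.0, 1: 2.0})   # {(1, 0, 0): 100.0}
```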
Initial phasing
The data collected from a diffraction experiment is a reciprocal space representation of the crystal lattice. The position of each diffraction 'spot' is governed by the size and shape of the unit cell, and the inherent symmetry within the crystal. The intensity of each diffraction 'spot' is recorded, and this intensity is proportional to the square of the structure factor amplitude. The structure factor is a complex number containing information relating to both the amplitude and phase of a wave. In order to obtain an interpretable electron density map, both amplitude and phase must be known (an electron density map allows a crystallographer to build a starting model of the molecule). The phase cannot be directly recorded during a diffraction experiment: this is known as the phase problem. Initial phase estimates can be obtained in a variety of ways:
    Ab initio phasing or direct methods - This is usually the method of choice for small molecules (<1000 non-hydrogen atoms), and has been used successfully to solve the phase problems for small proteins. If the resolution of the data is better than 1.4 Å (140 pm), direct methods can be used to obtain phase information, by exploiting known phase relationships between certain groups of reflections.[99][100]
    Molecular replacement - if a related structure is known, it can be used as a search model in molecular replacement to determine the orientation and position of the molecules within the unit cell. The phases obtained this way can be used to generate electron density maps.[101]
    Anomalous X-ray scattering (MAD or SAD phasing) - the X-ray wavelength may be scanned past an absorption edge of an atom, which changes the scattering in a known way. By recording full sets of reflections at three different wavelengths (far below, far above and in the middle of the absorption edge) one can solve for the substructure of the anomalously diffracting atoms and thence the structure of the whole molecule. The most popular method of incorporating anomalous scattering atoms into proteins is to express the protein in a methionine auxotroph (a host incapable of synthesizing methionine) in media rich in selenomethionine, which contains selenium atoms. A MAD experiment can then be conducted around the absorption edge, which should then yield the position of any methionine residues within the protein, providing initial phases.[102]
    Heavy atom methods (multiple isomorphous replacement) - If electron-dense metal atoms can be introduced into the crystal, direct methods or Patterson-space methods can be used to determine their location and to obtain initial phases. Such heavy atoms can be introduced either by soaking the crystal in a heavy atom-containing solution, or by co-crystallization (growing the crystals in the presence of a heavy atom). As in MAD phasing, the changes in the scattering amplitudes can be interpreted to yield the phases. Although this is the original method by which protein crystal structures were solved, it has largely been superseded by MAD phasing with selenomethionine.
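The phase problem underlying all of these methods can be illustrated with the structure-factor sum F(hkl) = Σⱼ fⱼ exp[2πi(hxⱼ + kyⱼ + lzⱼ)]: the detector records only the intensity |F|², so the phase is discarded by the measurement. The atoms and scattering factors below are made up purely for illustration:

```python
import cmath

def structure_factor(atoms, hkl):
    """F(hkl) = sum_j f_j * exp(2*pi*i*(h*x + k*y + l*z)) over atoms at
    fractional coordinates (x, y, z) with scattering factors f_j."""
    h, k, l = hkl
    return sum(f * cmath.exp(2j * cmath.pi * (h * x + k * y + l * z))
               for f, (x, y, z) in atoms)

# two hypothetical atoms in the unit cell:
atoms = [(6.0, (0.0, 0.0, 0.0)), (8.0, (0.25, 0.25, 0.25))]
F = structure_factor(atoms, (1, 1, 1))
intensity = abs(F) ** 2        # what the experiment actually records
phase = cmath.phase(F)         # lost in the measurement: the phase problem
```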
Deposition of the structure
Once the model of a molecule's structure has been finalized, it is often deposited in a crystallographic database such as the Cambridge Structural Database (for small molecules) or the Protein Data Bank (for protein structures). Many structures obtained in private commercial ventures to crystallize medicinally relevant proteins are not deposited in public crystallographic databases.
*************  

3. NMR spectroscopy
Nuclear magnetic resonance spectroscopy, most commonly known as NMR spectroscopy, is the name given to a technique which exploits the magnetic properties of certain nuclei. For details regarding this phenomenon and its origins, refer to the nuclear magnetic resonance article. The most important applications for the organic chemist are proton NMR and carbon-13 NMR spectroscopy. In principle, NMR is applicable to any nucleus possessing spin.
Many types of information can be obtained from an NMR spectrum. Much like using infrared spectroscopy (IR) to identify functional groups, analysis of a NMR spectrum provides information on the number and type of chemical entities in a molecule. However, NMR provides much more information than IR.
The impact of NMR spectroscopy on the natural sciences has been substantial. It can, among other things, be used to study mixtures of analytes, to understand dynamic effects such as change in temperature and reaction mechanisms, and is an invaluable tool in understanding protein and nucleic acid structure and function. It can be applied to a wide variety of samples, both in the solution and the solid state.
Basic NMR techniques
When placed in a magnetic field, NMR active nuclei (such as 1H or 13C) absorb at a frequency characteristic of the isotope. The resonant frequency, energy of the absorption and the intensity of the signal are proportional to the strength of the magnetic field. For example, in a 21 tesla magnetic field, protons resonate at 900 MHz. It is common to refer to a 21 T magnet as a 900 MHz magnet, although different nuclei resonate at a different frequency at this field strength.
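The proportionality between field strength and resonant frequency is f = γB/2π; a short check reproduces the "21 T ≈ 900 MHz" convention (the constant and function name are illustrative):

```python
import math

GAMMA_1H = 267.522e6  # proton gyromagnetic ratio, rad s^-1 T^-1

def larmor_mhz(b_tesla, gamma=GAMMA_1H):
    """Larmor (resonant) frequency f = gamma * B / (2*pi), in MHz."""
    return gamma * b_tesla / (2 * math.pi) / 1e6

f_21T = larmor_mhz(21.0)   # about 894 MHz, i.e. the "900 MHz magnet"
```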
In the Earth's magnetic field the same nuclei resonate at audio frequencies. This effect is used in Earth's field NMR spectrometers and other instruments. Because these instruments are portable and inexpensive, they are often used for teaching and field work.


Schematic diagram of NMR
Chemical shift
Depending on the local chemical environment, different protons in a molecule resonate at slightly different frequencies. Since both this frequency shift and the fundamental resonant frequency are directly proportional to the strength of the magnetic field, the shift is converted into a field-independent dimensionless value known as the chemical shift. The chemical shift is reported as a relative measure from some reference resonance frequency. This difference between the frequency of the signal and the frequency of the reference is divided by the frequency of the reference signal to give the chemical shift. The frequency shifts are extremely small in comparison to the fundamental NMR frequency. A typical frequency shift might be 100 Hz, compared to a fundamental NMR frequency of 100 MHz, so the chemical shift is generally expressed in parts per million (ppm). To be able to detect such small frequency differences, the external magnetic field must vary much less than this throughout the sample volume. High resolution NMR spectrometers use shims to adjust the homogeneity of the magnetic field to parts per billion (ppb) in a volume of a few cubic centimeters.
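The conversion to ppm is simply a ratio of frequency differences; a minimal sketch:

```python
def chemical_shift_ppm(f_signal_hz, f_reference_hz):
    """delta = (f_signal - f_reference) / f_reference * 1e6, in ppm."""
    return (f_signal_hz - f_reference_hz) / f_reference_hz * 1e6

# a 100 Hz shift measured on a 100 MHz spectrometer:
delta = chemical_shift_ppm(100.0e6 + 100.0, 100.0e6)   # 1.0 ppm
```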
By understanding different chemical environments, the chemical shift can be used to obtain some structural information about the molecule in a sample. The conversion of the raw data to this information is called assigning the spectrum. For example, for the 1H-NMR spectrum for ethanol (CH3CH2OH), one would expect three specific signals at three specific chemical shifts: one for the CH3 group, one for the CH2 group and one for the OH group. A typical CH3 group has a shift around 1 ppm, a CH2 attached to an OH has a shift of around 4 ppm and an OH has a shift around 2–3 ppm depending on the solvent used.
Because of molecular motion at room temperature, the three methyl protons average out during the course of the NMR experiment (which typically requires a few ms). These protons become degenerate and form a peak at the same chemical shift.
The shape and size of peaks are indicators of chemical structure too. In the example above—the proton spectrum of ethanol—the CH3 peak would be three times as large as the OH. Similarly the CH2 peak would be twice the size of the OH peak but only 2/3 the size of the CH3 peak.
Modern analysis software allows analysis of the size of peaks to understand how many protons give rise to the peak. This is known as integration—a mathematical process which calculates the area under a graph (essentially what a spectrum is). The analyst must integrate the peak and not measure its height because the peaks also have width—and thus its size is dependent on its area not its height. However, it should be mentioned that the number of protons, or any other observed nucleus, is only proportional to the intensity, or the integral, of the NMR signal, in the very simplest one-dimensional NMR experiments. In more elaborate experiments, for instance, experiments typically used to obtain carbon-13 NMR spectra, the integral of the signals depends on the relaxation rate of the nucleus, and its scalar and dipolar coupling constants. Very often these factors are poorly known - therefore, the integral of the NMR signal is very difficult to interpret in more complicated NMR experiments.
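Numerically, integration is just the area under the sampled peak, for example by the trapezoidal rule (a simplified stand-in for what analysis software does):

```python
def integrate_peak(x, y):
    """Trapezoidal-rule area under a peak sampled at points (x, y)."""
    return sum((y[i] + y[i + 1]) / 2.0 * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

# For ethanol the relative integrals would come out close to 3 : 2 : 1
# (CH3 : CH2 : OH) regardless of the individual peak heights and widths.
area = integrate_peak([0.0, 1.0, 2.0], [0.0, 2.0, 0.0])   # 2.0
```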
J-coupling
Some of the most useful information for structure determination in a one-dimensional NMR spectrum comes from J-coupling or scalar coupling (a special case of spin-spin coupling) between NMR active nuclei. This coupling arises from the interaction of different spin states through the chemical bonds of a molecule and results in the splitting of NMR signals. These splitting patterns can be complex or simple and, likewise, can be straightforwardly interpretable or deceptive. This coupling provides detailed insight into the connectivity of atoms in a molecule.
Coupling to n equivalent (spin ½) nuclei splits the signal into an n+1 multiplet with intensity ratios following Pascal's triangle. Coupling to additional spins will lead to further splitting of each component of the multiplet, e.g. coupling to two different spin ½ nuclei with significantly different coupling constants will lead to a doublet of doublets (abbreviation: dd). Note that coupling between nuclei that are chemically equivalent (that is, have the same chemical shift) has no effect on the NMR spectrum, and couplings between nuclei that are distant (usually more than 3 bonds apart for protons in flexible molecules) are usually too small to cause observable splitting. Long-range couplings over more than three bonds can often be observed in cyclic and aromatic compounds, leading to more complex splitting patterns.
For example, in the proton spectrum for ethanol described above, the CH3 group is split into a triplet with an intensity ratio of 1:2:1 by the two neighboring CH2 protons. Similarly, the CH2 is split into a quartet with an intensity ratio of 1:3:3:1 by the three neighboring CH3 protons. In principle, the two CH2 protons would also be split again into a doublet to form a doublet of quartets by the hydroxyl proton, but intermolecular exchange of the acidic hydroxyl proton often results in a loss of coupling information.
Coupling to any spin ½ nuclei such as phosphorus-31 or fluorine-19 works in this fashion (although the magnitudes of the coupling constants may be very different). But the splitting patterns differ from those described above for nuclei with spin greater than ½ because the spin quantum number has more than two possible values. For instance, coupling to deuterium (a spin 1 nucleus) splits the signal into a 1:1:1 triplet because the spin 1 has three spin states. Similarly, a spin 3/2 nucleus splits a signal into a 1:1:1:1 quartet and so on.
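Both the Pascal's-triangle patterns for spin-½ neighbours and the flat 1:1:1-type patterns for higher spins follow from one rule: each coupled nucleus splits every line into 2I+1 equally intense components, so the overall pattern is an n-fold convolution. A small sketch (function name illustrative):

```python
def multiplet(n, spin=0.5):
    """Intensity pattern from coupling to n equivalent nuclei of given spin.
    Each nucleus has 2*spin + 1 equally likely states; the pattern is the
    n-fold convolution of a flat stick pattern of that length."""
    states = int(round(2 * spin + 1))
    pattern = [1]
    for _ in range(n):
        new = [0] * (len(pattern) + states - 1)
        for i, p in enumerate(pattern):
            for j in range(states):
                new[i + j] += p
        pattern = new
    return pattern

multiplet(2)             # [1, 2, 1]: triplet from two spin-1/2 neighbours
multiplet(3)             # [1, 3, 3, 1]: quartet
multiplet(1, spin=1.0)   # [1, 1, 1]: coupling to one deuterium
```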
Coupling combined with the chemical shift (and the integration for protons) tells us not only about the chemical environment of the nuclei, but also the number of neighboring NMR active nuclei within the molecule. In more complex spectra with multiple peaks at similar chemical shifts or in spectra of nuclei other than hydrogen, coupling is often the only way to distinguish different nuclei.
Second-order (or strong) coupling
The above description assumes that the coupling constant is small in comparison with the difference in NMR frequencies between the inequivalent spins. If the shift separation decreases (or the coupling strength increases), the multiplet intensity patterns are first distorted, and then become more complex and less easily analyzed (especially if more than two spins are involved). Intensification of some peaks in a multiplet is achieved at the expense of the remainder, which sometimes almost disappear in the background noise, although the integrated area under the peaks remains constant. In most high-field NMR, however, the distortions are usually modest and the characteristic distortions (roofing) can in fact help to identify related peaks.
Second-order effects decrease as the frequency difference between multiplets increases, so that high-field (i.e. high-frequency) NMR spectra display less distortion than lower frequency spectra. Early spectra at 60 MHz were more prone to distortion than spectra from later machines typically operating at frequencies at 200 MHz or above.
Magnetic inequivalence
More subtle effects can occur if chemically equivalent spins (i.e. nuclei related by symmetry and so having the same NMR frequency) have different coupling relationships to external spins. Spins that are chemically equivalent but are not indistinguishable (based on their coupling relationships) are termed magnetically inequivalent. For example, the 4 H sites of 1,2-dichlorobenzene divide into two chemically equivalent pairs by symmetry, but an individual member of one of the pairs has different couplings to the spins making up the other pair. Magnetic inequivalence can lead to highly complex spectra which can only be analyzed by computational modeling. Such effects are more common in NMR spectra of aromatic and other non-flexible systems, while conformational averaging about C-C bonds in flexible molecules tends to equalize the couplings between protons on adjacent carbons, reducing problems with magnetic inequivalence.
Correlation spectroscopy
Correlation spectroscopy is one of several types of two-dimensional nuclear magnetic resonance (NMR) spectroscopy. This type of NMR experiment is best known by its acronym, COSY. Other types of two-dimensional NMR include J-spectroscopy, exchange spectroscopy (EXSY), Nuclear Overhauser effect spectroscopy (NOESY), total correlation spectroscopy (TOCSY) and heteronuclear correlation experiments, such as HSQC, HMQC, and HMBC. Two-dimensional NMR spectra provide more information about a molecule than one-dimensional NMR spectra and are especially useful in determining the structure of a molecule, particularly for molecules that are too complicated to work with using one-dimensional NMR. The first two-dimensional experiment, COSY, was proposed by Jean Jeener, a professor at the Université Libre de Bruxelles, in 1971. This experiment was later implemented by Walter P. Aue, Enrico Bartholdi and Richard R. Ernst, who published their work in 1976.
Solid-state nuclear magnetic resonance
A variety of physical circumstances prevents molecules from being studied in solution, while at the same time ruling out study by other spectroscopic techniques at the atomic level. In solid-phase media, such as crystals, microcrystalline powders, gels and anisotropic solutions, it is in particular the dipolar coupling and chemical shift anisotropy that become dominant in the behavior of the nuclear spin systems. In conventional solution-state NMR spectroscopy, these additional interactions would lead to a significant broadening of spectral lines. A variety of techniques allows establishing high-resolution conditions that can, at least for 13C spectra, be comparable to solution-state NMR spectra.
Two important concepts for high-resolution solid-state NMR spectroscopy are the limitation of possible molecular orientation by sample orientation, and the reduction of anisotropic nuclear magnetic interactions by sample spinning. Of the latter approach, fast spinning around the magic angle is a very prominent method, when the system comprises spin 1/2 nuclei. A number of intermediate techniques, with samples of partial alignment or reduced mobility, is currently being used in NMR spectroscopy.
Applications in which solid-state NMR effects occur are often related to structure investigations on membrane proteins, protein fibrils or all kinds of polymers, and chemical analysis in inorganic chemistry, but also include "exotic" applications such as plant leaves and fuel cells.
NMR spectroscopy applied to proteins
Much of the recent innovation within NMR spectroscopy has been within the field of protein NMR, which has become a very important technique in structural biology. One common goal of these investigations is to obtain high resolution 3-dimensional structures of the protein, similar to what can be achieved by X-ray crystallography. In contrast to X-ray crystallography, NMR is primarily limited to relatively small proteins, usually smaller than 35 kDa, though technical advances allow ever larger structures to be solved. NMR spectroscopy is often the only way to obtain high resolution information on partially or wholly intrinsically unstructured proteins. It is now a common tool for the determination of Conformation Activity Relationships where the structure before and after interaction with, for example, a drug candidate is compared to its known biochemical activity.
Proteins are orders of magnitude larger than the small organic molecules discussed earlier in this article, but the same NMR theory applies. Because of the increased number of each element present in the molecule, the basic 1D spectra become crowded with overlapping signals to an extent where analysis is impossible. Therefore, multidimensional (2, 3 or 4D) experiments have been devised to deal with this problem. These experiments spread the overlapping signals into additional frequency dimensions and, through nuclear Overhauser effect measurements, yield estimates of the distances between pairs of nearby protons. Subsequently, the obtained distances are used to generate a 3D structure of the molecule by solving a distance geometry problem.
************
4. Infrared spectroscopy
Infrared spectroscopy (IR spectroscopy) is the subset of spectroscopy that deals with the infrared region of the electromagnetic spectrum. It covers a range of techniques, the most common being a form of absorption spectroscopy. As with all spectroscopic techniques, it can be used to identify compounds and investigate sample composition. A common laboratory instrument that uses this technique is an infrared spectrophotometer.
The infrared portion of the electromagnetic spectrum is usually divided into three regions; the near-, mid- and far- infrared, named for their relation to the visible spectrum. The far-infrared, approximately 400–10 cm−1 (25–1000 μm), lying adjacent to the microwave region, has low energy and may be used for rotational spectroscopy. The mid-infrared, approximately 4000–400 cm−1 (2.5–25 μm) may be used to study the fundamental vibrations and associated rotational-vibrational structure. The higher energy near-IR, approximately 14000–4000 cm−1 (0.8–2.5 μm) can excite overtone or harmonic vibrations. The names and classifications of these subregions are merely conventions. They are neither strict divisions nor based on exact molecular or electromagnetic properties.
Theory
Infrared spectroscopy exploits the fact that molecules absorb specific frequencies that are characteristic of their structure. These absorptions are resonant frequencies, i.e. the frequency of the absorbed radiation matches the frequency of the bond or group that vibrates. The energies are determined by the shape of the molecular potential energy surfaces, the masses of the atoms, and the associated vibronic coupling.
In particular, in the Born–Oppenheimer and harmonic approximations, i.e. when the molecular Hamiltonian corresponding to the electronic ground state can be approximated by a harmonic oscillator in the neighborhood of the equilibrium molecular geometry, the resonant frequencies are determined by the normal modes corresponding to the molecular electronic ground state potential energy surface. Nevertheless, the resonant frequencies can be in a first approach related to the strength of the bond, and the mass of the atoms at either end of it. Thus, the frequency of the vibrations can be associated with a particular bond type.

Schematic diagram of IR spectroscopy

Number of vibrational modes
In order for a vibrational mode in a molecule to be "IR active," it must be associated with a change in the dipole moment of the molecule.
A molecule can vibrate in many ways, and each way is called a vibrational mode. Linear molecules have 3N–5 vibrational modes, whereas nonlinear molecules have 3N–6 vibrational modes (also called vibrational degrees of freedom), where N is the number of atoms. As an example, H2O, a non-linear molecule, will have 3×3–6 = 3 degrees of vibrational freedom, or modes.
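The mode-counting rule is a one-liner; a quick illustration (the helper name is made up):

```python
def vibrational_modes(n_atoms, linear=False):
    """3N-5 vibrational modes for linear molecules, 3N-6 for nonlinear."""
    return 3 * n_atoms - (5 if linear else 6)

water = vibrational_modes(3)              # H2O, bent: 3 modes
co = vibrational_modes(2, linear=True)    # CO, diatomic: 1 mode
co2 = vibrational_modes(3, linear=True)   # CO2, linear: 4 modes
```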
Simple diatomic molecules have only one bond and only one vibrational band. If the molecule is symmetrical, e.g. N2, the band is not observed in the IR spectrum, but only in the Raman spectrum. Unsymmetrical diatomic molecules, e.g. CO, absorb in the IR spectrum. More complex molecules have many bonds, and their vibrational spectra are correspondingly more complex, i.e. big molecules have many peaks in their IR spectra.
Special effects
The simplest and most important IR bands arise from the "normal modes," the simplest distortions of the molecule. In some cases, "overtone bands" are observed. These bands arise from the absorption of a photon that leads to a doubly excited vibrational state. Such bands appear at approximately twice the energy of the normal mode. Some vibrations, so-called "combination modes," involve more than one normal mode. The phenomenon of Fermi resonance can arise when two modes are similar in energy; Fermi resonance results in an unexpected shift in energy and intensity of the bands.
Practical IR spectroscopy
The infrared spectrum of a sample is recorded by passing a beam of infrared light through the sample. Examination of the transmitted light reveals how much energy was absorbed at each wavelength. This can be done with a monochromatic beam, which changes in wavelength over time, or by using a Fourier transform instrument to measure all wavelengths at once. From this, a transmittance or absorbance spectrum can be produced, showing at which IR wavelengths the sample absorbs. Analysis of these absorption characteristics reveals details about the molecular structure of the sample. When the frequency of the IR is the same as the vibrational frequency of a bond, absorption occurs.
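Transmittance and absorbance are related by A = −log₁₀(T), where T = I/I₀ is the fraction of light transmitted; a minimal sketch:

```python
import math

def absorbance(transmittance):
    """A = -log10(T), where T = I / I0 is the transmitted fraction."""
    return -math.log10(transmittance)

a = absorbance(0.10)   # only 10% of the light transmitted -> A = 1.0
```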
This technique works almost exclusively on samples with covalent bonds. Simple spectra are obtained from samples with few IR active bonds and high levels of purity. More complex molecular structures lead to more absorption bands and more complex spectra. The technique has been used for the characterization of very complex mixtures.
Sample preparation
Gaseous samples require a sample cell with a long path length (typically 5–10 cm), to compensate for the diluteness.
Liquid samples can be sandwiched between two plates of a salt (commonly sodium chloride, or common salt, although a number of other salts such as potassium bromide or calcium fluoride are also used). The plates are transparent to the infrared light and do not introduce any lines onto the spectra.
Solid samples can be prepared in a variety of ways. One common method is to crush the sample with an oily mulling agent (usually Nujol) in a marble or agate mortar, with a pestle. A thin film of the mull is smeared onto salt plates and measured. The second method is to grind a quantity of the sample with a specially purified salt (usually potassium bromide) finely (to remove scattering effects from large crystals). This powder mixture is then pressed in a mechanical press to form a translucent pellet through which the beam of the spectrometer can pass. A third technique is the "cast film" technique, which is used mainly for polymeric materials. The sample is first dissolved in a suitable, non-hygroscopic solvent. A drop of this solution is deposited on the surface of a KBr or NaCl cell. The solution is then evaporated to dryness and the film formed on the cell is analysed directly. Care is important to ensure that the film is not too thick, otherwise light cannot pass through. This technique is suitable for qualitative analysis. The final method is to use microtomy to cut a thin (20–100 µm) film from a solid sample. This is one of the most important ways of analysing failed plastic products, for example, because the integrity of the solid is preserved.
It is important to note that spectra obtained from different sample preparation methods will look slightly different from each other due to differences in the samples' physical states.
Comparing to a reference
To take the infrared spectrum of a sample, it is necessary to measure both the sample and a "reference" (or "control"). This is because each measurement is affected by not only the light-absorption properties of the sample, but also the properties of the instrument (for example, what light source is used, what detector is used, etc.). The reference measurement makes it possible to eliminate the instrument influence. Mathematically, the sample transmission spectrum is divided by the reference transmission spectrum.
The appropriate "reference" depends on the measurement and its goal. The simplest reference measurement is to simply remove the sample (replacing it by air). However, sometimes a different reference is more useful. For example, if the sample is a dilute solute dissolved in water in a beaker, then a good reference measurement might be to measure pure water in the same beaker. Then the reference measurement would cancel out not only all the instrumental properties (like what light source is used), but also the light-absorbing and light-reflecting properties of the water and beaker, and the final result would just show the properties of the solute (at least approximately).
A common way to compare to a reference is sequentially: First measure the reference, then replace the reference by the sample, then measure the sample. This technique is not perfectly reliable: If the infrared lamp is a bit brighter during the reference measurement, then a bit dimmer during the sample measurement, the measurement will be distorted. More elaborate methods, such as a "two-beam" setup (see figure), can correct for these types of effects to give very accurate results.
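The division of the sample spectrum by the reference spectrum can be sketched in a few lines of Python. The detector readings below are hypothetical values chosen only for illustration; a real spectrometer would supply one reading per wavenumber:

```python
import math

# Hypothetical detector readings at four wavenumbers (arbitrary units).
reference = [100.0, 98.0, 95.0, 97.0]   # beam path with the sample removed (air)
sample    = [80.0, 49.0, 90.0, 29.1]    # beam path with the sample in place

# Transmittance spectrum: divide the sample reading by the reference reading,
# point by point, cancelling instrument factors (lamp, detector, optics).
transmittance = [s / r for s, r in zip(sample, reference)]

# Absorbance is often reported instead: A = -log10(T)
absorbance = [-math.log10(t) for t in transmittance]

print([round(t, 3) for t in transmittance])
print([round(a, 3) for a in absorbance])
```

Because the same instrument factors appear in both measurements, the ratio depends (approximately) only on the sample.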
Uses and applications
Infrared spectroscopy is widely used in both research and industry as a simple and reliable technique for measurement, quality control and dynamic measurement. It is also used in forensic analysis in both criminal and civil cases, enabling identification of polymer degradation for example.
The instruments are now small, and can be transported, even for use in field trials. With advances in computer filtering and manipulation of the results, samples in solution can now be measured accurately (water produces a broad absorbance across the range of interest, and thus renders the spectra unreadable without this computer treatment). Some instruments can also automatically identify the substance being measured by comparing it against a store of thousands of reference spectra held in storage.
By measuring at a specific frequency over time, changes in the character or quantity of a particular bond can be measured. This is especially useful in measuring the degree of polymerization in polymer manufacture. Modern research instruments can take infrared measurements across the whole range of interest as frequently as 32 times a second. This can be done whilst simultaneous measurements are made using other techniques. This makes the observations of chemical reactions and processes quicker and more accurate.
Infrared spectroscopy has been highly successful for applications in both organic and inorganic chemistry. It has also been successfully utilized in the field of semiconductor microelectronics; for example, it can be applied to semiconductors such as silicon, gallium arsenide, gallium nitride, zinc selenide, amorphous silicon, and silicon nitride.
*************
5. Fluorescence spectroscopy
Fluorescence spectroscopy or fluorometry or spectrofluorometry is a type of electromagnetic spectroscopy which analyzes fluorescence from a sample. It involves using a beam of light, usually ultraviolet light, that excites the electrons in molecules of certain compounds and causes them to emit light of a lower energy, typically, but not necessarily, visible light. A complementary technique is absorption spectroscopy.
Devices that measure fluorescence are called fluorometers or fluorimeters.
Theory
Molecules have various states referred to as energy levels. Fluorescence spectroscopy is primarily concerned with electronic and vibrational states. Generally, the species being examined has a ground electronic state (a low energy state) of interest, and an excited electronic state of higher energy. Within each of these electronic states are various vibrational states.
In fluorescence spectroscopy, the species is first excited, by absorbing a photon, from its ground electronic state to one of the various vibrational states in the excited electronic state. Collisions with other molecules cause the excited molecule to lose vibrational energy until it reaches the lowest vibrational state of the excited electronic state.
The molecule then drops down to one of the various vibrational levels of the ground electronic state again, emitting a photon in the process. As molecules may drop down into any of several vibrational levels in the ground state, the emitted photons will have different energies, and thus frequencies. Therefore, by analysing the different frequencies of light emitted in fluorescent spectroscopy, along with their relative intensities, the structure of the different vibrational levels can be determined.
In a typical experiment, the different wavelengths of fluorescent light emitted by a sample are measured using a monochromator, holding the excitation light at a constant wavelength. This is called an emission spectrum. An excitation spectrum is the opposite, whereby the emission light is held at a constant wavelength, and the excitation light is scanned through many different wavelengths (via a monochromator). An Emission Map is measured by recording a number of different emission spectra created from different excitation wavelengths, and combining them all together. This is a three dimensional data set, intensity of emission as a function of excitation and emission wavelengths, and is typically depicted as a contour map.
Instrumentation
Two general types of instruments exist:
    Filter fluorometers use filters to isolate the incident light and fluorescent light.
    Spectrofluorometers use diffraction grating monochromators to isolate the incident light and fluorescent light.
Both types utilize the following scheme: The light from an excitation source passes through a filter or monochromator, and strikes the sample. A proportion of the incident light is absorbed by the sample, and some of the molecules in the sample fluoresce. The fluorescent light is emitted in all directions. Some of this fluorescent light passes through a second filter or monochromator and reaches a detector, which is usually placed at 90° to the incident light beam to minimize the risk of transmitted or reflected incident light reaching the detector.

Schematic diagram of fluorescence spectroscopy



Various light sources may be used as excitation sources, including lasers, photodiodes, and lamps; xenon arcs and mercury-vapor lamps in particular. A laser only emits light of high irradiance at a very narrow wavelength interval, typically under 0.01 nm, which makes an excitation monochromator or filter unnecessary. The disadvantage of this method is that the wavelength of a laser cannot be changed by much. A mercury vapor lamp is a line lamp, meaning it emits light near peak wavelengths. By contrast, a xenon arc has a continuous emission spectrum with nearly constant intensity in the range from 300-800 nm and a sufficient irradiance for measurements down to just above 200 nm.
Filters and/or monochromators may be used in fluorimeters. A monochromator transmits light of an adjustable wavelength with an adjustable tolerance. The most common type of monochromator utilizes a diffraction grating, that is, collimated light illuminates a grating and exits with a different angle depending on the wavelength. The monochromator can then be adjusted to select which wavelengths to transmit. For allowing anisotropy measurements the addition of two polarization filters are necessary: One after the excitation monochromator or filter, and one before the emission monochromator or filter.
As mentioned before, the fluorescence is most often measured at a 90° angle relative to the excitation light. This geometry is used instead of placing the sensor at the line of the excitation light at a 180° angle in order to avoid interference of the transmitted excitation light. No monochromator is perfect and it will transmit some stray light, that is, light with other wavelengths than the targeted. An ideal monochromator would only transmit light in the specified range and have a high wavelength-independent transmission. When measuring at a 90° angle, only the light scattered by the sample causes stray light. This results in a better signal-to-noise ratio, and lowers the detection limit by approximately a factor of 10,000 when compared to the 180° geometry. Furthermore, the fluorescence can also be measured from the front, which is often done for turbid or opaque samples.
The detector can either be single-channeled or multichanneled. The single-channeled detector can only detect the intensity of one wavelength at a time, while the multichanneled detects the intensity at all wavelengths simultaneously, making the emission monochromator or filter unnecessary. The different types of detectors have both advantages and disadvantages.
The most versatile fluorimeters with dual monochromators and a continuous excitation light source can record both an excitation spectrum and a fluorescence spectrum. When measuring fluorescence spectra, the wavelength of the excitation light is kept constant, preferably at a wavelength of high absorption, and the emission monochromator scans the spectrum. For measuring excitation spectra, the wavelength passing through the emission filter or monochromator is kept constant and the excitation monochromator scans. The excitation spectrum is generally identical to the absorption spectrum, as the fluorescence intensity is proportional to the absorption. [4]
Analysis of data
At low concentrations the fluorescence intensity will generally be proportional to the concentration of the fluorophore.
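This proportionality is the basis of quantitative fluorimetry: a calibration curve is built from standards of known concentration and used to estimate an unknown. A minimal sketch in Python, using hypothetical standards and intensities for illustration:

```python
# Hypothetical calibration: fluorescence intensity (arbitrary units) measured
# for standards of known fluorophore concentration (in µM). At low
# concentration, intensity is proportional to concentration: I = k * c.
standards = [(1.0, 52.0), (2.0, 104.0), (4.0, 208.0)]  # (conc, intensity) pairs

# Least-squares slope through the origin: k = sum(c*I) / sum(c^2)
k = sum(c * i for c, i in standards) / sum(c * c for c, _ in standards)

# Estimate an unknown sample's concentration from its measured intensity.
unknown_intensity = 130.0
unknown_conc = unknown_intensity / k
print(round(k, 2), round(unknown_conc, 2))
```

The fit is forced through the origin because zero fluorophore should give zero fluorescence (after blank subtraction); at higher concentrations the inner filter effects discussed below make the response non-linear and this simple model breaks down.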
Unlike in UV/visible spectroscopy, ‘standard’, device-independent spectra are not easily attained. Several factors influence and distort the spectra, and corrections are necessary to attain ‘true’, i.e. machine-independent, spectra. The different types of distortions will here be classified as being either instrument- or sample-related. Firstly, the distortion arising from the instrument is discussed. As a start, the light source intensity and wavelength characteristics vary over time during each experiment and between experiments. Furthermore, no lamp has a constant intensity at all wavelengths. To correct this, a beam splitter can be applied after the excitation monochromator or filter to direct a portion of the light to a reference detector.
Additionally, the transmission efficiency of monochromators and filters must be taken into account. These may also change over time. The transmission efficiency of the monochromator also varies depending on wavelength. This is the reason that an optional reference detector should be placed after the excitation monochromator or filter. The percentage of the fluorescence picked up by the detector is also dependent upon the system. Furthermore, the detector quantum efficiency, that is, the percentage of photons detected, varies between different detectors, with wavelength and with time, as the detector inevitably deteriorates.
Correction of all these instrumental factors for getting a ‘standard’ spectrum is a tedious process, which is only applied in practice when it is strictly necessary. This is the case when measuring the quantum yield or when finding the wavelength with the highest emission intensity for instance.
Other aspects to consider are the inner filter effects. These include reabsorption. Reabsorption happens because another molecule or part of a macromolecule absorbs at the wavelengths at which the fluorophore emits radiation. If this is the case, some or all of the photons emitted by the fluorophore may be absorbed again. Another inner filter effect occurs because of high concentrations of absorbing molecules, including the fluorophore. The result is that the intensity of the excitation light is not constant throughout the solution. As a result, only a small percentage of the excitation light reaches the fluorophores that are visible for the detection system. The inner filter effects change the spectrum and intensity of the emitted light and they must therefore be considered when analysing the emission spectrum of fluorescent light.
Tryptophan Fluorescence
The fluorescence of a folded protein is a mixture of the fluorescence from individual aromatic residues. Most of the intrinsic fluorescence emissions of a folded protein are due to excitation of tryptophan residues, with some emissions due to tyrosine and phenylalanine; but disulfide bonds also have appreciable absorption in this wavelength range. Typically, tryptophan has a wavelength of maximum absorption of 280 nm and an emission peak that is solvatochromic, ranging from ca. 300 to 350 nm depending on the polarity of the local environment.[8] Hence, protein fluorescence may be used as a diagnostic of the conformational state of a protein.[9] Furthermore, tryptophan fluorescence is strongly influenced by the proximity of other residues (i.e., nearby protonated groups such as Asp or Glu can cause quenching of Trp fluorescence). Also, energy transfer between tryptophan and the other fluorescent amino acids is possible, which would affect the analysis, especially in cases where the Förster resonance energy transfer approach is taken. In addition, tryptophan is a relatively rare amino acid; many proteins contain only one or a few tryptophan residues. Therefore, tryptophan fluorescence can be a very sensitive measurement of the conformational state of individual tryptophan residues. The advantage compared to extrinsic probes is that the protein itself is not changed. The use of intrinsic fluorescence for the study of protein conformation is in practice limited to cases with few (or perhaps only one) tryptophan residues, since each experiences a different local environment, which gives rise to different emission spectra.
Tryptophan is an important intrinsic fluorescent probe (amino acid), which can be used to estimate the nature of the microenvironment of the tryptophan. When performing experiments with denaturants, surfactants or other amphiphilic molecules, the microenvironment of the tryptophan might change. For example, if a protein containing a single tryptophan in its 'hydrophobic' core is denatured with increasing temperature, a red-shifted emission spectrum will appear. This is due to the exposure of the tryptophan to an aqueous environment as opposed to a hydrophobic protein interior. In contrast, the addition of a surfactant to a protein which contains a tryptophan which is exposed to the aqueous solvent will cause a blue-shifted emission spectrum if the tryptophan is embedded in the surfactant vesicle or micelle [10]. Proteins that lack tryptophan may be coupled to a fluorophore.
At 295 nm, the tryptophan emission spectrum is dominant over the weaker tyrosine and phenylalanine fluorescence.
Applications
Fluorescence spectroscopy is used in, among others, biochemical, medical, and chemical research fields for analyzing organic compounds. There has also been a report of its use in differentiating malignant skin tumors from benign ones.
Atomic Fluorescence Spectroscopy (AFS) techniques are useful in other kinds of analysis/measurement of a compound present in air or water, or other media, such as CVAFS which is used for heavy metals detection, such as mercury.
Fluorescence can also be used to redirect photons, see fluorescent solar collector.
***************
6. Atomic absorption spectroscopy
In analytical chemistry, atomic absorption spectroscopy is a technique used to determine the concentration of a specific metal element in a sample. The technique can be used to analyze the concentration of over 70 different metals in a solution.
Although atomic absorption spectroscopy dates to the nineteenth century, the modern form was largely developed during the 1950s by a team of Australian chemists. They were led by Alan Walsh and worked at the CSIRO (Commonwealth Scientific and Industrial Research Organisation) Division of Chemical Physics in Melbourne, Australia.
Principle
The technique makes use of absorption spectrometry to assess the concentration of an analyte in a sample. It therefore relies heavily on the Beer-Lambert law.
In short, the electrons of the atoms in the atomizer can be promoted to higher orbitals for a short time by absorbing a set quantity of energy (i.e. light of a given wavelength). This amount of energy (or wavelength) is specific to a particular electron transition in a particular element, and in general, each wavelength corresponds to only one element. This gives the technique its elemental selectivity.
As the quantity of energy (the power) put into the flame is known, and the quantity remaining at the other side (at the detector) can be measured, it is possible, from Beer-Lambert law, to calculate how many of these transitions took place, and thus get a signal that is proportional to the concentration of the element being measured.
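The calculation from the Beer-Lambert law, A = εlc, can be sketched by rearranging for the concentration. The absorptivity, path length and absorbance below are hypothetical values chosen only for illustration:

```python
def concentration(absorbance, epsilon, path_cm):
    """Beer-Lambert law A = epsilon * l * c, solved for the concentration c.

    epsilon: molar absorptivity (L mol^-1 cm^-1); path_cm: path length (cm).
    Returns concentration in mol/L.
    """
    return absorbance / (epsilon * path_cm)

# Hypothetical example: a measured absorbance of 0.30, with a molar
# absorptivity of 6000 L mol^-1 cm^-1 over a 1 cm path.
c = concentration(0.30, 6000.0, 1.0)
print(c)  # 5e-05 mol/L, i.e. 50 µM
```

In practice the instrument is calibrated with standards of known concentration rather than relying on a tabulated absorptivity, but the linear relationship between absorbance and concentration is the same.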
Instrumentation
In order to analyze a sample for its atomic constituents, it has to be atomized. The sample should then be illuminated by light. The light transmitted is finally measured by a detector. In order to reduce the effect of emission from the atomizer (e.g. the black body radiation) or the environment, a spectrometer is normally used between the atomizer and the detector.
 

Types of atomizer
The technique typically makes use of a flame to atomize the sample, but other atomizers such as a graphite furnace or plasmas, primarily inductively coupled plasmas, are also used.
When a flame is used it is laterally long (usually 10 cm) and not deep. The height of the flame above the burner head can be controlled by adjusting the flow of the fuel mixture. A beam of light passes through this flame at its longest axis (the lateral axis) and hits a detector.
Analysis of liquids
A liquid sample is normally turned into an atomic gas in three steps:
1.    Desolvation (Drying) – the liquid solvent is evaporated, and the dry sample remains
2.    Vaporization (Ashing) – the solid sample vaporises to a gas
3.    Atomization – the compounds making up the sample are broken into free atoms.
Radiation sources
The radiation source chosen has a spectral width narrower than that of the atomic transitions.
Hollow cathode lamps
Hollow cathode lamps are the most common radiation source in atomic absorption spectroscopy. Inside the lamp, filled with argon or neon gas, is a cylindrical metal cathode containing the metal for excitation, and an anode. When a high voltage is applied across the anode and cathode, gas particles are ionized. As voltage is increased, gaseous ions acquire enough energy to eject metal atoms from the cathode. Some of these atoms are in an excited state and emit light with the frequency characteristic to the metal. Many modern hollow cathode lamps are selective for several metals.
Diode lasers
Atomic absorption spectroscopy can also be performed by lasers, primarily diode lasers because of their good properties for laser absorption spectrometry. The technique is then either referred to as diode laser atomic absorption spectrometry (DLAAS or DLAS), or, since wavelength modulation most often is employed, wavelength modulation absorption spectrometry.
Background correction methods
The narrow bandwidth of hollow cathode lamps makes spectral overlap rare. That is, it is unlikely that an absorption line from one element will overlap with another. Molecular emission is much broader, so it is more likely that some molecular absorption band will overlap with an atomic line. This can result in artificially high absorption and an improperly high calculation for the concentration in the solution. Three methods are typically used to correct for this:
    Zeeman correction - A magnetic field is used to split the atomic line into two sidebands (see Zeeman effect). These sidebands are close enough to the original wavelength to still overlap with molecular bands, but are far enough not to overlap with the atomic bands. The absorption in the presence and absence of a magnetic field can be compared, the difference being the atomic absorption of interest.
    Smith-Hieftje correction (invented by Stanley B. Smith and Gary M. Hieftje) - The hollow cathode lamp is pulsed with high current, causing a larger atom population and self-absorption during the pulses. This self-absorption causes a broadening of the line and a reduction of the line intensity at the original wavelength.
    Deuterium lamp correction - In this case, a separate source (a deuterium lamp) with broad emission is used to measure the background emission. The use of a separate lamp makes this method the least accurate, but its relative simplicity (and the fact that it is the oldest of the three) makes it the most commonly used method.
Modern developments
Recent modern developments in electronics and solid state detectors have taken the conventional AAS instrument to the next level. High Resolution Continuum Source AAS (HR-CS AAS) is now available in both flame and graphite furnace mode.
Main features of the new instruments:
    Single xenon arc lamp - Multiple hollow cathode lamps are no longer needed. With a single xenon arc lamp, all the elements can be measured from 185 to 900 nm. This makes AAS a true multi-element technique, with the analysis of 10 elements per minute.
    CCD technology - For the first time in AAS, CCD chips with 200 pixels are used, each pixel acting as an independent detector.
    Simultaneous background correction - Background is now measured simultaneously compared to sequential background on conventional AAS.
    Multiple lines - Extra lines of an analyte are now available thus extending the dynamic working range.
    Better detection limits - Due to the high intensity of the Xenon Lamp there is better signal/noise ratio thus giving better detection limits. In some cases it is up to 10 times better than conventional AAS.
    Direct analysis of solids - In graphite furnace mode it is now possible to analyse solids directly thus avoiding long digestion times.
    Ability to measure sulfur and halogens - It is now possible to measure some non-metals by measuring molecular bands.
Alternatives
For heavy metals such as mercury, alternatives to atomic absorption technology, including direct and pre-concentrated atomic fluorescence techniques such as cold vapour atomic fluorescence spectroscopy can offer higher sensitivity than atomic absorption.
**************** 
UNIT – V
BIOMEDICAL INSTRUMENTATION

5.1 Principle and application of electrophoresis
Electrophoresis is the motion of dispersed particles relative to a fluid under the influence of a spatially uniform electric field. This electrokinetic phenomenon was observed for the first time in 1807 by Reuss, who noticed that the application of a constant electric field caused clay particles dispersed in water to migrate. It is ultimately caused by the presence of a charged interface between the particle surface and the surrounding fluid.
5.1.1 Theory
The dispersed particles have an electric surface charge, on which an external electric field exerts an electrostatic Coulomb force. According to the double layer theory, all surface charges in fluids are screened by a diffuse layer of ions, which has the same absolute charge but opposite sign with respect to that of the surface charge. The electric field also exerts a force on the ions in the diffuse layer which has direction opposite to that acting on the surface charge. This latter force is not actually applied to the particle, but to the ions in the diffuse layer located at some distance from the particle surface, and part of it is transferred all the way to the particle surface through viscous stress. This part of the force is also called electrophoretic retardation force.
Considering the hydrodynamic friction on the moving particles due to the viscosity of the dispersant, in the case of low Reynolds number and moderate electric field strength E, the velocity of a dispersed particle v is simply proportional to the applied field, which leaves the electrophoretic mobility μe defined as:
µe = v / E
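The definition can be applied directly once the drift velocity and field strength are known. The values below are hypothetical, chosen only to illustrate the calculation:

```python
# Electrophoretic mobility µe = v / E, with hypothetical values:
# a particle drifting at 2.0e-5 m/s in a field of 1.0e3 V/m.
v = 2.0e-5   # drift velocity (m/s)
E = 1.0e3    # electric field strength (V/m)

mu_e = v / E
print(mu_e)  # 2e-08, in units of m^2 V^-1 s^-1
```

In an experiment the drift velocity is measured (e.g. by tracking particles microscopically), and the mobility is then related to the surface charge through double-layer theory.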
Electrophoresis Unit



5.1.2 Affinity electrophoresis 
Affinity electrophoresis is a general name for many analytical methods used in biochemistry and biotechnology. Both qualitative and quantitative information may be obtained through affinity electrophoresis. The methods include the so-called mobility shift electrophoresis, charge shift electrophoresis and affinity capillary electrophoresis. The methods are based on changes in the electrophoretic pattern of molecules (mainly macromolecules) through biospecific interaction or complex formation. The interaction or binding of a molecule, charged or uncharged, will normally change the electrophoretic properties of a molecule. Membrane proteins may be identified by a shift in mobility induced by a charged detergent. Nucleic acids or nucleic acid fragments may be characterized by their affinity to other molecules. The methods have been used for estimation of binding constants, as for instance in lectin affinity electrophoresis, or for characterization of molecules with specific features like glycan content or ligand binding. For enzymes and other ligand-binding proteins, one-dimensional affinity electrophoresis, similar to counter electrophoresis or to "rocket immunoelectrophoresis", may be used as an alternative means of quantifying the protein. Some of the methods are similar to affinity chromatography in their use of immobilized ligands.

5.1.3 DNA electrophoresis
DNA electrophoresis is an analytical technique used to separate DNA fragments by size. DNA molecules which are to be analyzed are set upon a viscous medium, the gel, where an electric field forces the DNA to migrate toward the positive potential, the anode, due to the net negative charge of the phosphate backbone of the DNA chain. The separation of these fragments is accomplished by exploiting the mobilities with which different sized molecules are able to traverse the gel. Longer molecules migrate more slowly because they experience more drag within the gel. Because the size of the molecule affects its mobility, smaller fragments end up nearer to the anode than longer ones in a given period. After some time, the voltage is removed and the fragmentation gradient is analyzed. For larger separations between similar sized fragments, either the voltage or run time can be increased. Extended runs across a low voltage gel yield the most accurate resolution.
The DNA analyzed by gel electrophoresis can be prepared in several ways before separation by electrophoresis. In the case of large DNA molecules, the DNA is frequently cut into smaller fragments using a DNA restriction endonuclease. In other instances, such as PCR amplified samples, enzymes present in the sample that might affect the separation of the molecules are removed through various means before analysis. Once the DNA is properly prepared, the samples of the DNA solution are placed in the wells of the gel and a voltage is applied across the gel for a specified amount of time.
The DNA fragments of different lengths are visualized using a fluorescent dye specific for DNA, such as ethidium bromide. The gel shows bands corresponding to different populations of DNA molecules with different molecular weight. Fragment size is usually reported in "nucleotides", "base pairs" or "kb" (for thousands of base pairs) depending upon whether single- or double-stranded DNA has been separated. Fragment size determination is typically done by comparison to commercially available DNA markers containing linear DNA fragments of known length.
The types of gel most commonly used for DNA electrophoresis are agarose (for relatively long DNA molecules) and polyacrylamide (for high resolution of short DNA molecules, for example in DNA sequencing). Gels have conventionally been run in a "slab" format such as that shown in the figure, but capillary electrophoresis has become important for applications such as high-throughput DNA sequencing. Electrophoresis techniques used in the assessment of DNA damage include alkaline gel electrophoresis and pulsed field gel electrophoresis. The measurement and analysis are mostly done with specialized gel analysis software. Capillary electrophoresis results are typically displayed in a trace view called an electropherogram.

5.1.4 Gel electrophoresis
Gel electrophoresis is a technique used for the separation of deoxyribonucleic acid (DNA), ribonucleic acid (RNA), or protein molecules using an electric field applied to a gel matrix.[1] DNA gel electrophoresis is usually performed for analytical purposes, often after amplification of DNA via PCR, but may be used as a preparative technique prior to use of other methods such as mass spectrometry, RFLP, PCR, cloning, DNA sequencing, or Southern blotting for further characterization.
Separation
The term "gel" in this instance refers to the matrix used to contain and then separate the target molecules. In most cases, the gel is a cross linked polymer whose composition and porosity is chosen based on the specific weight and composition of the target to be analyzed. When separating proteins or small nucleic acids (DNA, RNA, or oligonucleotides) the gel is usually composed of different concentrations of acrylamide and a cross-linker, producing different sized mesh networks of polyacrylamide. When separating larger nucleic acids (greater than a few hundred bases), the preferred matrix is purified agarose. In both cases, the gel forms a solid, yet porous matrix. Acrylamide, in contrast to polyacrylamide, is a neurotoxin and must be handled using appropriate safety precautions to avoid poisoning. Agarose is composed of long unbranched chains of uncharged carbohydrate without cross links resulting in a gel with large pores allowing for the separation of macromolecules and macromolecular complexes.
"Electrophoresis" refers to the electromotive force (EMF) that is used to move the molecules through the gel matrix. By placing the molecules in wells in the gel and applying an electric field, the molecules will move through the matrix at different rates, determined largely by their mass when the charge to mass ratio (Z) of all species is uniform, toward the anode if negatively charged or toward the cathode if positively charged.[2]
In simple terms: Electrophoresis is a procedure which enables the sorting of molecules based on size and charge. Using an electric field, molecules (such as DNA) can be made to move through a gel made of agar. The molecules being sorted move through the space in gel material. The gel is placed in an electrophoresis chamber, which is then connected to a power source. When the electric current is applied, the larger molecules move more slowly through the gel while the smaller molecules move faster. The different sized molecules form distinct bands on the gel.
Visualization
After the electrophoresis is complete, the molecules in the gel can be stained to make them visible. Ethidium bromide, silver, or Coomassie Brilliant Blue dye may be used for this process. Other methods may also be used to visualize the separation of the mixture's components on the gel. If the analyte molecules fluoresce under ultraviolet light, a photograph can be taken of the gel under ultraviolet lighting conditions, often using a Gel Doc. If the molecules to be separated contain radioactivity added for visibility, an autoradiogram can be recorded of the gel.
If several samples have been loaded into adjacent wells in the gel, they will run parallel in individual lanes. Depending on the number of different molecules, each lane shows separation of the components from the original mixture as one or more distinct bands, one band per component. Incomplete separation of the components can lead to overlapping bands, or to indistinguishable smears representing multiple unresolved components.
Bands in different lanes that end up at the same distance from the top contain molecules that passed through the gel with the same speed, which usually means they are approximately the same size. There are molecular weight size markers available that contain a mixture of molecules of known sizes. If such a marker was run on one lane in the gel parallel to the unknown samples, the bands observed can be compared to those of the unknown in order to determine their size. The distance a band travels is approximately inversely proportional to the logarithm of the size of the molecule.
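Since migration distance is roughly linear in the logarithm of fragment size, a ladder of known sizes can be used to estimate an unknown band's size by interpolation. The ladder values below are hypothetical, chosen only to illustrate the semi-log fit:

```python
import math

# Hypothetical ladder: (fragment size in bp, migration distance in cm).
ladder = [(1000, 2.0), (500, 3.0), (250, 4.0), (125, 5.0)]

# Fit distance = a + b * log10(size) by least squares (b is negative,
# reflecting the inverse relationship noted in the text).
xs = [math.log10(s) for s, _ in ladder]
ys = [d for _, d in ladder]
n = len(ladder)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def size_from_distance(d):
    """Invert the fit: estimate fragment size (bp) from migration distance (cm)."""
    return 10 ** ((d - a) / b)

print(round(size_from_distance(3.5)))  # a band between the 500 and 250 bp markers
```

In a real gel the relationship is only approximately log-linear, so a ladder bracketing the unknown band gives the most trustworthy estimate.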
5.1.5 Applications
Electrophoresis is used in forensics, molecular biology, genetics, microbiology and biochemistry. The results can be analyzed quantitatively by visualizing the gel with UV light and a gel imaging device. The image is recorded with a computer-operated camera, and the intensity of the band or spot of interest is measured and compared against standards or markers loaded on the same gel. The measurement and analysis are mostly done with specialized software.
Depending on the type of analysis being performed, other techniques are often implemented in conjunction with the results of gel electrophoresis, providing a wide range of field-specific applications.
***************
5.2  Polymerase Chain Reaction
The polymerase chain reaction (PCR) is a technique in molecular biology to amplify a single or few copies of a piece of DNA across several orders of magnitude, generating thousands to millions of copies of a particular DNA sequence. The method relies on thermal cycling, consisting of cycles of repeated heating and cooling of the reaction for DNA melting and enzymatic replication of the DNA. Primers (short DNA fragments) containing sequences complementary to the target region along with a DNA polymerase (after which the method is named) are key components to enable selective and repeated amplification. As PCR progresses, the DNA generated is itself used as a template for replication, setting in motion a chain reaction in which the DNA template is exponentially amplified. PCR can be extensively modified to perform a wide array of genetic manipulations.
Almost all PCR applications employ a heat-stable DNA polymerase, such as Taq polymerase, an enzyme originally isolated from the bacterium Thermus aquaticus. This DNA polymerase enzymatically assembles a new DNA strand from DNA building blocks, the nucleotides, by using single-stranded DNA as a template and DNA oligonucleotides (also called DNA primers), which are required for initiation of DNA synthesis. The vast majority of PCR methods use thermal cycling, i.e., alternately heating and cooling the PCR sample to a defined series of temperature steps. These thermal cycling steps are necessary first to physically separate the two strands in a DNA double helix at a high temperature in a process called DNA melting. At a lower temperature, each strand is then used as the template in DNA synthesis by the DNA polymerase to selectively amplify the target DNA. The selectivity of PCR results from the use of primers that are complementary to the DNA region targeted for amplification under specific thermal cycling conditions.
Developed in 1983 by Kary Mullis, PCR is now a common and often indispensable technique used in medical and biological research labs for a variety of applications. These include DNA cloning for sequencing, DNA-based phylogeny, or functional analysis of genes; the diagnosis of hereditary diseases; the identification of genetic fingerprints (used in forensic sciences and paternity testing); and the detection and diagnosis of infectious diseases. In 1993, Mullis was awarded the Nobel Prize in Chemistry for his work on PCR.
PCR principles and procedure
PCR is used to amplify a specific region of a DNA strand (the DNA target). Most PCR methods typically amplify DNA fragments of up to ~10 kilobase pairs (kb), although some techniques allow amplification of fragments up to 40 kb in size.
A basic PCR set up requires several components and reagents.[6] These components include:
    DNA template that contains the DNA region (target) to be amplified.
    Two primers that are complementary to the 3' (three prime) ends of each of the sense and anti-sense strand of the DNA target.
    Taq polymerase or another DNA polymerase with a temperature optimum at around 70 °C.
    Deoxynucleoside triphosphates (dNTPs; also very commonly and erroneously called deoxynucleotide triphosphates), the building blocks from which the DNA polymerase synthesizes a new DNA strand.
    Buffer solution, providing a suitable chemical environment for optimum activity and stability of the DNA polymerase.
    Divalent cations, magnesium or manganese ions; generally Mg2+ is used, but Mn2+ can be utilized for PCR-mediated DNA mutagenesis, as higher Mn2+ concentration increases the error rate during DNA synthesis.
    Monovalent cations, typically potassium ions.
The PCR is commonly carried out in a reaction volume of 10–200 μl in small reaction tubes (0.2–0.5 ml volumes) in a thermal cycler. The thermal cycler heats and cools the reaction tubes to achieve the temperatures required at each step of the reaction (see below). Many modern thermal cyclers make use of the Peltier effect which permits both heating and cooling of the block holding the PCR tubes simply by reversing the electric current. Thin-walled reaction tubes permit favorable thermal conductivity to allow for rapid thermal equilibration. Most thermal cyclers have heated lids to prevent condensation at the top of the reaction tube. Older thermocyclers lacking a heated lid require a layer of oil on top of the reaction mixture or a ball of wax inside the tube.
Procedure


Schematic drawing of the PCR cycle. 1. Denaturation at 94–96 °C. 2. Annealing at ~65 °C. 3. Elongation at 72 °C.
Typically, PCR consists of a series of 20–40 repeated temperature changes, called cycles, with each cycle commonly consisting of 2–3 discrete temperature steps, usually three. The cycling is often preceded by a single temperature step (called a hold) at a high temperature (>90 °C), and followed by one hold at the end for final product extension or brief storage. The temperatures used and the length of time they are applied in each cycle depend on a variety of parameters. These include the enzyme used for DNA synthesis, the concentration of divalent ions and dNTPs in the reaction, and the melting temperature (Tm) of the primers.
    Initialization step: This step consists of heating the reaction to 94–96 °C (or 98 °C if extremely thermostable polymerases are used), held for 1–9 minutes. It is required only for DNA polymerases that need heat activation, as in hot-start PCR.
    Denaturation step: This step is the first regular cycling event and consists of heating the reaction to 94–98 °C for 20–30 seconds. It causes DNA melting of the DNA template by disrupting the hydrogen bonds between complementary bases, yielding single-stranded DNA molecules.
    Annealing step: The reaction temperature is lowered to 50–65 °C for 20–40 seconds, allowing annealing of the primers to the single-stranded DNA template. Typically the annealing temperature is about 3–5 °C below the Tm of the primers used. Stable DNA–DNA hydrogen bonds form only when the primer sequence very closely matches the template sequence. The polymerase binds to the primer-template hybrid and begins DNA synthesis.
    Extension/elongation step: The temperature at this step depends on the DNA polymerase used; Taq polymerase has its optimum activity temperature at 75–80 °C, and commonly a temperature of 72 °C is used with this enzyme. At this step the DNA polymerase synthesizes a new DNA strand complementary to the DNA template strand by adding dNTPs that are complementary to the template in the 5'→3' direction, condensing the 5'-phosphate group of each dNTP with the 3'-hydroxyl group at the end of the nascent (extending) DNA strand. The extension time depends both on the DNA polymerase used and on the length of the DNA fragment to be amplified. As a rule of thumb, at its optimum temperature the DNA polymerase will polymerize a thousand bases per minute. Under optimum conditions, i.e., if there are no limitations due to limiting substrates or reagents, the amount of target DNA is doubled at each extension step, leading to exponential (geometric) amplification of the specific DNA fragment.
    Final elongation: This single step is occasionally performed at a temperature of 70–74 °C for 5–15 minutes after the last PCR cycle to ensure that any remaining single-stranded DNA is fully extended.
    Final hold: This step at 4–15 °C for an indefinite time may be employed for short-term storage of the reaction.
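The cycling scheme above can be sketched as a simple programmed schedule. The temperatures and durations below are typical values drawn from the step descriptions, not a validated protocol:

```python
# Illustrative three-step cycle: (step name, temperature in °C, duration in s).
cycle = [
    ("denaturation", 95, 30),
    ("annealing",    60, 30),
    ("extension",    72, 60),
]
n_cycles = 30
initial_hold = ("initial denaturation", 95, 180)
final_hold = ("final extension", 72, 300)

# Total programmed time: the two holds plus n_cycles repetitions of the cycle.
total_s = initial_hold[2] + final_hold[2] + n_cycles * sum(t for _, _, t in cycle)
print(f"total programmed time: {total_s / 60:.0f} min")  # 68 min
```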

Ethidium bromide-stained PCR products after gel electrophoresis.
No amplification is present in sample #1; DNA bands in samples #2 and #3 indicate successful amplification of the target sequence. The gel also shows a positive control, and a DNA ladder containing DNA fragments of defined length for sizing the bands in the experimental PCRs.
To check whether the PCR generated the anticipated DNA fragment (also sometimes referred to as the amplimer or amplicon), agarose gel electrophoresis is employed for size separation of the PCR products. The sizes of the PCR products are determined by comparison with a DNA ladder (a molecular-weight marker), which contains DNA fragments of known size, run on the gel alongside the PCR products.
PCR stages
The PCR process can be divided into three stages:
Exponential amplification: At every cycle, the amount of product is doubled (assuming 100% reaction efficiency). The reaction is very sensitive: only minute quantities of DNA need to be present.[12]
Leveling off stage: The reaction slows as the DNA polymerase loses activity and as consumption of reagents such as dNTPs and primers causes them to become limiting.
Plateau: No more product accumulates due to exhaustion of reagents and enzyme.
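The exponential stage can be expressed directly: with perfect efficiency each cycle doubles the product, and sub-unit efficiencies slow the growth. A small sketch:

```python
def copies_after(n_cycles, initial_copies=1, efficiency=1.0):
    """Copies after n cycles of amplification.
    efficiency=1.0 means perfect doubling each cycle (the ideal
    exponential stage); real reactions fall below this and eventually
    plateau as reagents are exhausted."""
    return initial_copies * (1 + efficiency) ** n_cycles

print(copies_after(30))                  # ~1.07e9 copies from one template
print(copies_after(30, efficiency=0.9))  # slower growth at 90% efficiency
```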
PCR optimization
In practice, PCR can fail for various reasons, in part due to its sensitivity to contamination causing amplification of spurious DNA products. Because of this, a number of techniques and procedures have been developed for optimizing PCR conditions.[13][14] Contamination with extraneous DNA is addressed with lab protocols and procedures that separate pre-PCR mixtures from potential DNA contaminants.[6] This usually involves spatial separation of PCR-setup areas from areas for analysis or purification of PCR products, use of disposable plasticware, and thoroughly cleaning the work surface between reaction setups. Primer-design techniques are important in improving PCR product yield and in avoiding the formation of spurious products, and the usage of alternate buffer components or polymerase enzymes can help with amplification of long or otherwise problematic regions of DNA.
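One common primer-design check is estimating the primer melting temperature. The Wallace rule (Tm ≈ 2 °C per A/T base plus 4 °C per G/C base, reasonable for short primers) is not described in the text and is used here only as an illustrative approximation; the primer sequence is hypothetical:

```python
def wallace_tm(primer):
    """Approximate primer melting temperature via the Wallace rule:
    Tm ≈ 2*(A+T) + 4*(G+C) in °C (a rough estimate for short primers)."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

primer = "ATGCGTACCTGA"   # hypothetical 12-mer
tm = wallace_tm(primer)
annealing = tm - 5        # text: anneal about 3-5 °C below the primer Tm
print(tm, annealing)
```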
Application of PCR
Selective DNA isolation
PCR allows isolation of DNA fragments from genomic DNA by selective amplification of a specific region of DNA. This use of PCR augments many methods, such as generating hybridization probes for Southern or northern hybridization and DNA cloning, which require larger amounts of DNA, representing a specific DNA region. PCR supplies these techniques with high amounts of pure DNA, enabling analysis of DNA samples even from very small amounts of starting material.
Other applications of PCR include DNA sequencing, in which unknown PCR-amplified sequences are determined using one of the amplification primers in Sanger sequencing, and the isolation of a DNA sequence to expedite recombinant DNA technologies, such as inserting a DNA sequence into a plasmid or into the genetic material of another organism. Bacterial colonies (e.g., E. coli) can be rapidly screened by PCR for correct DNA vector constructs.[15] PCR may also be used for genetic fingerprinting, a forensic technique used to identify a person or organism by comparing experimental DNA samples through different PCR-based methods.
Some PCR 'fingerprinting' methods have high discriminative power and can be used to identify genetic relationships between individuals, such as parent-child or sibling relationships, and are used in paternity testing. This technique may also be used to determine evolutionary relationships among organisms.
Amplification and quantification of DNA
Because PCR amplifies the regions of DNA that it targets, PCR can be used to analyze extremely small amounts of sample. This is often critical for forensic analysis, when only a trace amount of DNA is available as evidence. PCR may also be used in the analysis of ancient DNA that is tens of thousands of years old. These PCR-based techniques have been successfully used on animals, such as a forty-thousand-year-old mammoth, and also on human DNA, in applications ranging from the analysis of Egyptian mummies to the identification of a Russian tsar.
Quantitative PCR methods allow the estimation of the amount of a given sequence present in a sample, a technique often applied to quantitatively determine levels of gene expression. Real-time PCR is an established tool for DNA quantification that measures the accumulation of DNA product after each round of PCR amplification.
PCR in diagnosis of diseases
PCR permits early diagnosis of malignant diseases such as leukemia and lymphomas; this is currently the most highly developed application in cancer research and is already being used routinely. PCR assays can be performed directly on genomic DNA samples to detect translocation-specific malignant cells at a sensitivity that is at least 10,000-fold higher than that of other methods.
PCR also permits identification of non-cultivatable or slow-growing microorganisms such as mycobacteria, anaerobic bacteria, or viruses from tissue culture assays and animal models. The basis for PCR diagnostic applications in microbiology is the detection of infectious agents and the discrimination of non-pathogenic from pathogenic strains by virtue of specific genes.
Viral DNA can likewise be detected by PCR. The primers used need to be specific to the targeted sequences in the DNA of a virus, and the PCR can be used for diagnostic analyses or DNA sequencing of the viral genome. The high sensitivity of PCR permits virus detection soon after infection and even before the onset of disease. Such early detection may give physicians a significant lead in treatment. The amount of virus ("viral load") in a patient can also be quantified by PCR-based DNA quantitation techniques.
**************** 
5.3 Isotopes and their importance
Isotopes are different types of atoms (nuclides) of the same chemical element, each having a different number of neutrons. Correspondingly, isotopes differ in mass number (number of nucleons) but never in atomic number. The number of protons (the atomic number) is the same because that is what characterizes a chemical element. For example, carbon-12, carbon-13 and carbon-14 are three isotopes of the element carbon with mass numbers 12, 13 and 14, respectively. The atomic number of carbon is 6, so the neutron numbers in these isotopes of carbon are 12 − 6 = 6, 13 − 6 = 7, and 14 − 6 = 8, respectively.
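The neutron-number arithmetic above is trivial to express in code:

```python
def neutron_number(mass_number, atomic_number):
    """Neutrons = nucleons (mass number) - protons (atomic number)."""
    return mass_number - atomic_number

# The three carbon isotopes from the text (atomic number Z = 6):
for a in (12, 13, 14):
    print(f"carbon-{a}: {neutron_number(a, 6)} neutrons")
```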
A nuclide is an atomic nucleus with a specified composition of protons and neutrons. The nuclide concept emphasizes nuclear properties over chemical properties, while the isotope concept emphasizes chemical over nuclear. The neutron number has drastic effects on nuclear properties, but negligible effects on chemical properties. Since isotope is the older term, it is better known, and is still sometimes used in contexts where nuclide might be more appropriate, such as nuclear technology.
An isotope and/or nuclide is specified by the name of the particular element (this indicates the atomic number implicitly) followed by a hyphen and the mass number (e.g. helium-3, carbon-12, carbon-13, iodine-131 and uranium-238). When a chemical symbol is used, e.g., "C" for carbon, standard notation is to indicate the number of nucleons with a superscript at the upper left of the chemical symbol and to indicate the atomic number with a subscript at the lower left (e.g. ³₂He, ⁴₂He, ¹²₆C, ¹⁴₆C, ²³⁵₉₂U, and ²³⁹₉₂U).
Some isotopes are radioactive and are therefore described as radioisotopes or radionuclides, while others have never been observed to undergo radioactive decay and are described as stable isotopes. For example, ¹⁴C is a radioactive form of carbon while ¹²C and ¹³C are stable isotopes. There are about 339 naturally occurring nuclides on Earth, of which 288 are primordial nuclides. These include 31 nuclides with very long half-lives (over 80 million years) and 257 which are formally considered "stable". About 30 of these "stable" isotopes have actually been observed to decay, but with half-lives too long to have been measured so far. This leaves 227 nuclides that have not been observed to decay at all. Many other stable nuclides are in theory energetically susceptible to other known forms of decay, such as alpha decay or double beta decay, but no decay has yet been observed. The half-lives for these processes often exceed a million times the estimated age of the universe.
Importance
Several applications exist that capitalize on properties of the various isotopes of a given element. Isotope separation is a significant technological challenge, particularly with heavy elements such as uranium or plutonium. Lighter elements such as lithium, carbon, nitrogen, and oxygen are commonly separated by gas diffusion of their compounds such as CO and NO. The separation of hydrogen and deuterium is unusual since it is based on chemical rather than physical properties, for example in the Girdler sulfide process. Uranium isotopes have been separated in bulk by gas diffusion, gas centrifugation, laser ionization separation, and (in the Manhattan Project) by a type of production mass spectrometry.
Use of chemical and biological properties
    Isotope analysis is the determination of isotopic signature, the relative abundances of isotopes of a given element in a particular sample. For biogenic substances in particular, significant variations of isotopes of C, N and O can occur. Analysis of such variations has a wide range of applications, such as the detection of adulteration of food products. The identification of certain meteorites as having originated on Mars is based in part upon the isotopic signature of trace gases contained in them.
    Another common application is isotopic labeling, the use of unusual isotopes as tracers or markers in chemical reactions. Normally, atoms of a given element are indistinguishable from each other. However, by using isotopes of different masses, they can be distinguished by mass spectrometry or infrared spectroscopy. For example, in 'stable isotope labeling with amino acids in cell culture (SILAC)' stable isotopes are used to quantify proteins. If radioactive isotopes are used, they can be detected by the radiation they emit (this is called radioisotopic labeling).
    A technique similar to radioisotopic labeling is radiometric dating: using the known half-life of an unstable element, one can calculate the amount of time that has elapsed since a known level of isotope existed. The most widely known example is radiocarbon dating used to determine the age of carbonaceous materials.
    Isotopic substitution can be used to determine the mechanism of a reaction via the kinetic isotope effect.
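The radiometric-dating idea in the list above reduces to solving the decay law for elapsed time. A sketch for radiocarbon dating, assuming the commonly quoted carbon-14 half-life of 5730 years:

```python
import math

C14_HALF_LIFE_YEARS = 5730  # commonly quoted value for carbon-14

def age_from_fraction(remaining_fraction, half_life=C14_HALF_LIFE_YEARS):
    """Elapsed time from the fraction of the original isotope remaining:
    N/N0 = (1/2)**(t / t_half)  =>  t = t_half * log2(N0 / N)."""
    return half_life * math.log2(1.0 / remaining_fraction)

print(round(age_from_fraction(0.5)))    # one half-life elapsed
print(round(age_from_fraction(0.25)))   # two half-lives elapsed
```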
Use of nuclear properties
    Several forms of spectroscopy rely on the unique nuclear properties of specific isotopes. For example, nuclear magnetic resonance (NMR) spectroscopy can be used only for isotopes with a nonzero nuclear spin. The isotopes most commonly used in NMR spectroscopy are ¹H, ²H (deuterium), ¹⁵N, ¹³C, and ³¹P.
    Mössbauer spectroscopy also relies on the nuclear transitions of specific isotopes, such as ⁵⁷Fe.
    Radionuclides also have important uses. Nuclear power and nuclear weapons development require relatively large quantities of specific isotopes.

5.3.1    GM counters
A Geiger–Müller tube (or GM tube) is the sensing element of a Geiger counter instrument that can detect a single particle of ionizing radiation, typically producing an audible click for each. It was named for Hans Geiger, who invented the device in 1908, and Walther Müller, who collaborated with Geiger in developing it further in 1928. It is a type of gaseous ionization detector with an operating voltage in the Geiger plateau. The Geiger counter is sometimes used as a hardware random number generator.



Description and operation
A Geiger–Müller tube consists of a tube filled with a low-pressure (~0.1 atm) inert gas such as helium, neon or argon (usually neon), in some cases in a Penning mixture, together with an organic vapor or a halogen gas. The tube contains electrodes, between which there is a potential difference of several hundred volts, but no current flowing. The walls of the tube are either entirely metal or have their inside surface coated with a conductor to form the cathode, while the anode is a wire passing up the center of the tube.
When ionizing radiation passes through the tube, some of the gas molecules are ionized, creating positively charged ions, and electrons. The strong electric field created by the tube's electrodes accelerates the ions towards the cathode and the electrons towards the anode. The ion pairs gain sufficient energy to ionize further gas molecules through collisions on the way, creating an avalanche of charged particles.  This results in a short, intense pulse of current which passes (or cascades) from the negative electrode to the positive electrode and is measured or counted.
Most detectors include an audio amplifier that produces an audible click on discharge. The number of pulses per second measures the intensity of the radiation field. Some Geiger counters display an exposure rate (e.g. mR/h), but this does not relate easily to a dose rate, as the instrument does not discriminate between radiation of different energies.
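Because the tube cannot register a second particle until a discharge has been quenched, observed count rates understate the true rate at high intensities. The non-paralyzable dead-time correction below is a standard textbook formula, not something stated in this text, and the 100 µs dead time is an illustrative assumption:

```python
def true_rate(observed_cps, dead_time_s=1e-4):
    """Non-paralyzable dead-time correction: n = m / (1 - m * tau),
    where m is the observed count rate and tau the tube dead time.
    (Illustrative sketch; tau = 100 µs is an assumed value.)"""
    return observed_cps / (1.0 - observed_cps * dead_time_s)

print(round(true_rate(100.0), 1))   # low rate: correction is negligible
print(round(true_rate(5000.0), 1))  # high rate: true rate is double the observed
```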
GM tubes
The usual form of tube is an end-window tube. This type is so named because the tube has a window at one end through which ionizing radiation can easily penetrate. The other end normally has the electrical connectors. There are two types of end-window tubes: the glass-mantle type and the mica-window type. The glass-mantle type will not detect alpha radiation, since alpha particles cannot penetrate the glass, but it is usually cheaper and will usually detect beta radiation and X-rays. The mica-window type will detect alpha radiation but is more fragile.
Most tubes will detect gamma radiation, and usually beta radiation above about 2.5 MeV. Geiger–Müller tubes will not normally detect neutrons, since these do not ionize the gas. However, neutron-sensitive tubes can be produced which either have the inside of the tube coated with boron or contain boron trifluoride or helium-3 gas. The neutrons interact with the boron nuclei, producing alpha particles, or with the helium-3 nuclei, producing hydrogen and tritium ions and electrons. These charged particles then trigger the normal avalanche process.
Although most tubes will detect gamma radiation, standard tubes are relatively inefficient, as most gamma photons pass through the low-density gas without interacting. Using the heavier noble gases krypton or xenon as the fill gas gives a small improvement, but dedicated gamma detectors use dense cathodes of lead or stainless steel in windowless tubes. The dense cathode then interacts with the gamma flux, producing high-energy electrons, which are then detected.
Quenching
The GM tube must produce a single pulse on entry of a single particle. It must not give any spurious pulses, and must recover quickly to the passive state. Unfortunately for these requirements, the positive argon ions that eventually strike the cathode become neutral argon atoms in an excited state by gaining electrons from the cathode. The excited atoms return to the ground state by emitting photons and these photons cause avalanches and hence spurious pulse discharge. Quenching of this process is thus important because a single particle entering the tube is counted by a single discharge, and so the tube is unable to re-set and detect another particle until the discharge has been stopped. Also, the tube is damaged by prolonged discharges.
External quenching uses external electronics to remove the high voltage between the electrodes. Self-quenching or internal-quenching tubes stop the discharge without external assistance, by the addition of a small amount of a polyatomic organic vapor such as butane or ethanol; or alternatively a halogen such as bromine or chlorine.
If a poor diatomic gas quencher is introduced into the tube, the positive argon ions, during their motion toward the cathode, have multiple collisions with the quencher gas molecules and transfer their charge and some energy to them. Thus, neutral argon atoms are produced, and the quencher gas ions in turn reach the cathode, gain electrons from it, and move into excited states which decay by photon emission, producing tube discharge. However, effective quencher molecules, when excited, lose their energy not by photon emission but by dissociation into neutral quencher molecules. No spurious pulses are thus produced.
Invention of halogen tubes
The halogen tubes were invented by Sidney H. Liebson in 1947 and are now the most common form, since the discharge mechanism takes advantage of the metastable state of the inert gas atom to ionize the halogen molecule, producing a more efficient discharge that permits operation at much lower voltages, typically 400–600 volts instead of 900–1200 volts. Halogen tubes also have a longer life, because the halogen ions can recombine whereas the organic vapor cannot and is gradually destroyed by the discharge process (giving the latter a life of around 10⁸ events).
************** 
5.4    Scintillation counting
A scintillation counter measures ionizing radiation. The sensor, called a scintillator, consists of a transparent crystal, usually phosphor, plastic (usually containing anthracene), or organic liquid (see liquid scintillation counting) that fluoresces when struck by ionizing radiation. A sensitive photomultiplier tube (PMT) measures the light from the crystal. The PMT is attached to an electronic amplifier and other electronic equipment to count and possibly quantify the amplitude of the signals produced by the photomultiplier.
The scintillation counter was invented in 1944 by Sir Samuel Curran whilst he was working on the Manhattan Project at the University of California at Berkeley, and it is based on the earlier work of Antoine Henri Becquerel, who is generally credited with discovering radioactivity whilst working on the phosphorescence of certain uranium salts (in 1896). Scintillation counters are widely used because they can be made inexpensively yet with good quantum efficiency. The quantum efficiency of a gamma-ray detector (per unit volume) depends upon the density of electrons in the detector, and certain scintillating materials, such as sodium iodide and bismuth germanate, achieve high electron densities as a result of the high atomic numbers of some of the elements of which they are composed. However, detectors based on semiconductors, notably hyperpure germanium, have better intrinsic energy resolution than scintillators, and are preferred where feasible for gamma-ray spectrometry. In the case of neutron detectors, high efficiency is gained through the use of scintillating materials rich in hydrogen that scatter neutrons efficiently. Liquid scintillation counters are an efficient and practical means of quantifying beta radiation.
Scintillation counter apparatus
When a charged particle strikes the scintillator, a flash of light is produced, which may or may not be in the visible region of the spectrum. Each charged particle produces a flash. If a flash is produced in the visible region, it can be observed through a microscope and counted, though this is an impractical method. The association of a scintillator and photomultiplier with the counter circuits forms the basis of the scintillation counter apparatus. When a charged particle passes through the phosphor, some of the phosphor's atoms are excited and emit photons. The intensity of the light flash depends on the energy of the charged particles. Cesium iodide (CsI) in crystalline form is used as the scintillator for the detection of protons and alpha particles; sodium iodide (NaI) containing a small amount of thallium is used as a scintillator for the detection of gamma rays.
The scintillation counter has a layer of phosphor cemented to one end of the photomultiplier tube. Its inner surface is coated with a photo-emitter with a low work function. This photoelectric emitter is called the photocathode and is connected to the negative terminal of a high-tension battery. A number of electrodes called dynodes are arranged in the tube at increasing positive potential. When a charged particle strikes the phosphor, a photon is emitted. This photon strikes the photocathode in the photomultiplier, releasing an electron. This electron accelerates towards the first dynode and hits it, releasing multiple secondary electrons, which accelerate towards the second dynode. More electrons are emitted and the chain continues, multiplying the effect of the first charged particle. By the time the electrons reach the last dynode, enough have been released to send a voltage pulse across the external resistors. This voltage pulse is amplified and recorded by the electronic counter.
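The dynode chain described above multiplies geometrically: if each dynode emits on average δ secondary electrons per incident electron, n dynodes give an overall gain of δⁿ. A sketch with illustrative values (δ = 4, 10 dynodes are assumptions, not figures from the text):

```python
def pmt_gain(n_dynodes, secondary_emission=4.0):
    """Overall photomultiplier gain: each dynode emits roughly
    `secondary_emission` electrons per incident electron, so the
    chain multiplies by secondary_emission ** n_dynodes."""
    return secondary_emission ** n_dynodes

print(f"{pmt_gain(10):.2e}")  # 10 dynodes at delta = 4: gain of about a million
```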



Applications for Scintillation counters
The scintillation counters can be used in a variety of applications.
    Medical imaging
    National and homeland security
    Border security
    Nuclear safety
Since 9/11, several security situations have emerged where detection of radioactive material, emitting lethal gamma rays, during transportation has become very important. Several products have been introduced in the market utilising scintillation counters for detection of such materials. These include scintillation counters designed for freight terminals, border security, ports, weigh bridge applications, scrap metal yards and contamination monitoring of nuclear waste. There are variants of scintillation counters mounted on pick-up trucks and helicopters for rapid response in case of a security situation due to dirty bombs or radioactive waste.  Hand-held units are also commonly used.
Scintillation counter as a spectrometer
Scintillators often convert a single photon of high-energy radiation into a high number of lower-energy photons, where the number of photons per megaelectronvolt (MeV) of input energy is fairly constant. By measuring the intensity of the flash (the number of photons produced by the X-ray or gamma photon) it is therefore possible to discern the original photon's energy.
The spectrometer consists of a suitable scintillator crystal, a photomultiplier tube, and a circuit for measuring the height of the pulses produced by the photomultiplier. The pulses are counted and sorted by their height, producing an x-y plot of scintillator flash brightness versus number of flashes, which approximates the energy spectrum of the incident radiation, with some additional artifacts. Monochromatic gamma radiation produces a photopeak at its energy. The detector also shows response at lower energies, caused by Compton scattering, two smaller escape peaks at energies 0.511 and 1.022 MeV below the photopeak, caused by the creation of electron-positron pairs when one or both annihilation photons escape, and a backscatter peak. Higher energies can be measured when two or more photons strike the detector almost simultaneously (pile-up, within the time resolution of the data acquisition chain), appearing as sum peaks with energies up to the value of two or more photopeaks added.
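For a given monochromatic gamma line, the peak positions described above follow directly from the photopeak energy and the 0.511 MeV electron rest energy:

```python
MEC2 = 0.511  # electron rest energy in MeV

def expected_peaks(photopeak_mev):
    """Peak positions for a monochromatic gamma line, as described above:
    the photopeak, single and double escape peaks (one or both
    annihilation photons escaping), and the two-photon pile-up sum peak."""
    return {
        "photopeak": photopeak_mev,
        "single escape": photopeak_mev - MEC2,
        "double escape": photopeak_mev - 2 * MEC2,
        "sum (pile-up of two)": 2 * photopeak_mev,
    }

for name, e in expected_peaks(2.0).items():
    print(f"{name}: {e:.3f} MeV")
```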
****************
5.4.1 Liquid scintillation counting
Liquid scintillation counting is a standard laboratory method in the life-sciences for measuring radiation from beta-emitting nuclides. Scintillating materials are also used in differently constructed "counters" in many other fields.
 Samples are dissolved or suspended in a "cocktail" containing an aromatic solvent (historically benzene or toluene, but more recently less hazardous solvents have come into favour) and small amounts of other additives known as fluors, i.e., scintillants or scintillators. Beta particles emitted from the sample transfer energy to the solvent molecules, which in turn transfer their energy to the fluors; the excited fluor molecules dissipate the energy by emitting light. In this way, each beta emission (ideally) results in a pulse of light. Scintillation cocktails often contain additives that shift the wavelength of the emitted light to make it more easily detected.
The samples are placed in small transparent or translucent (often glass or plastic) vials that are loaded into an instrument known as a liquid scintillation counter. The counter has two photomultiplier tubes connected in a coincidence circuit. The coincidence circuit assures that genuine light pulses, which reach both photomultiplier tubes, are counted, while spurious pulses (due to line noise, for example), which would only affect one of the tubes, are ignored.
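The coincidence logic described above can be sketched in a few lines. This is a simplified model, assuming each photomultiplier delivers a list of pulse timestamps; the 20 ns resolving time is an illustrative assumption, not a value from the text:

```python
# Sketch: coincidence counting for two photomultiplier tubes.
# A pulse is accepted as a genuine scintillation event only if
# both tubes fire within the coincidence window; a pulse seen by
# one tube alone is treated as noise and ignored.

COINCIDENCE_WINDOW_NS = 20  # assumed resolving time (illustrative)

def count_coincidences(pmt_a, pmt_b, window=COINCIDENCE_WINDOW_NS):
    """Count timestamp pairs (in ns) from the two tubes that fall
    within the coincidence window, using a two-pointer sweep over
    the sorted pulse lists."""
    pmt_a, pmt_b = sorted(pmt_a), sorted(pmt_b)
    a = b = counts = 0
    while a < len(pmt_a) and b < len(pmt_b):
        dt = pmt_a[a] - pmt_b[b]
        if abs(dt) <= window:
            counts += 1          # both tubes fired: genuine event
            a += 1
            b += 1
        elif dt < 0:
            a += 1               # tube A pulse has no partner: noise
        else:
            b += 1               # tube B pulse has no partner: noise
    return counts

# Two genuine events fire both tubes; the lone 500 ns pulse on
# tube A is rejected as noise.
print(count_coincidences([100, 500, 1000], [105, 1010]))  # -> 2
```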

Counting efficiency under ideal conditions ranges from about 30% for tritium (a low-energy beta emitter) to nearly 100% for phosphorus-32, a high-energy beta emitter. Some chemical compounds (notably chlorine compounds) and highly colored samples can interfere with the counting process. This interference, known as "quenching", can be overcome through data correction or through careful sample preparation.
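The data correction mentioned above amounts to dividing the measured count rate by the counting efficiency to recover the true decay rate. A minimal sketch, assuming the counter reports counts per minute (CPM) and the efficiency has been estimated from a quench calibration; the numbers are illustrative:

```python
# Sketch: converting a measured count rate (CPM) to the true decay
# rate (DPM, disintegrations per minute) given the counting
# efficiency, as used in quench correction.

def cpm_to_dpm(cpm, efficiency):
    """Convert counts per minute to disintegrations per minute.
    efficiency must lie in (0, 1]; a quenched sample has a
    lower efficiency and therefore a larger correction."""
    if not 0 < efficiency <= 1:
        raise ValueError("efficiency must be in (0, 1]")
    return cpm / efficiency

# A sample counted at 25% efficiency (illustrative value):
print(cpm_to_dpm(3000, 0.25))  # -> 12000.0
```

The same 3000 CPM reading counted at higher efficiency would correspond to a smaller true activity, which is why the efficiency must be known before samples can be compared.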
High-energy beta emitters such as P-32 can also be counted in a scintillation counter without the cocktail. This technique, known as Cherenkov counting, relies on the Cherenkov radiation being detected directly by the photomultiplier tubes. Cherenkov counting in this experimental context is normally used for quick rough measurements, since it is more liable to variation caused by the geometry of the sample.
***************** 
