CHAPTER 1: QUANTUM MECHANICS
1.2 Planck's Hypothesis
1.3 Origin of Quantum Theory
1.3.1 Atomic & Subatomic Particles
1.4 Classical V/S Quantum Mechanics
1.5 Dual Nature of Matter By De Broglie
1.5.1 Matter Waves: De-Broglie Concept
1.5.2 Wavelength of De-Broglie Waves
1.6 Uncertainty Principle
1.7 Localization and the Wave Function
1.9 Valence Bond Theory
1.9.1 Postulates of Valence Bond Theory
1.9.2 Limitations of Valence Bond Theory
1.10 Introduction to Molecular Orbital Theory
1.10.1 Molecular Orbitals
1.11 Computational Chemistry
CHAPTER 2: BASICS OF THERMODYNAMICS
2.2 Importance and Limitations Of Thermodynamics
2.3 Thermodynamic Terms Definition and Examples for System and Surroundings
2.3.1 Properties of A System
2.3.2 State Variables
2.3.3 Thermodynamic Process
2.3.4 Thermodynamic Equilibrium
2.3.5 Internal Energy
2.3.7 Heat Capacity
2.4 Zeroth Law of Thermodynamics
2.5 First Law of Thermodynamics
2.5.1 Mathematical Expression of First Law of Thermodynamics
2.5.2 Relation Between Cp (Constant Pressure) and Cv (Constant Volume):
2.5.3 Spontaneous Process
2.5.3.1 Criteria of Spontaneity
2.6 Second Law of Thermodynamics
2.7 Third Law of Thermodynamics
CHAPTER 3: LATTICE VIBRATIONS & BAND THEORY OF SOLIDS
3.2 Lattice Vibrations
3.3 Heat Capacities of Solids
3.4 Einstein’s Theory of Heat Capacities
3.5 Debye’s Theory of Heat Capacities
3.6 Energy Band Theory in Solids
3.6.1 Important Energy Bands in Solids
3.7 Motion of Electron in Band Theory
3.8 Motion of Electron in Periodic Field Of Crystal
3.9 Kronig-Penney Model
3.10 Brillouin Zones
3.11 Difference Between Conductors, Semiconductors, and Insulators on The Basis of Energy Bands
3.12 Difference Between Conductor, Semiconductor & Insulator
CHAPTER 4: SEMICONDUCTORS AND TUNNELING
4.1.1 Intrinsic Semiconductors
4.1.2 Extrinsic Semiconductors
4.1.3 Doping of Semiconductors
4.1.4 N-Type Semiconductor
4.1.5 P-Type Semiconductor
4.1.6 P-N Junction Semiconductor
4.3 Classical Vs Quantum Tunneling
4.4 Tunneling Diode
4.4.1 Symbol of Tunnel Diode
4.4.2 Electric Current in Tunnel Diode
4.4.3 Tunnel Diode Working
4.4.4 Advantages of Tunnel Diodes
4.4.5 Disadvantages of Tunnel Diodes
4.4.6 Applications of Tunnel Diodes
4.5 Tunnel Junction
4.6 Resonant-Tunneling Diode
CHAPTER 5: COLLOIDAL SYSTEMS
5.1.1 True Solutions
5.3.1 Difference Between Colloids & Crystalloids
5.4 Classification of Colloids
5.4.1 Based on the Nature of Interaction Between Dispersed Phase and Dispersion Medium
5.4.2 Based on Type of Particles of Dispersed Phase
5.4.3 Classification Based on The Affinity of Two Phases
5.4.3.1 Lyophilic Sols
5.4.3.2 Lyophobic Sols
5.5 Characteristics of Colloidal Solutions
5.5.1 Dynamic Properties
5.5.2 Electrical Properties
5.5.3 Optical Properties
5.6.3 Differences Between W/O & O/W
5.7 Characteristics of Emulsions
5.8 Identification of Type Of Emulsion
The book is organized into five chapters. A brief description of each of the chapters follows:
Chapter 1: Gives information about Planck's hypothesis, the origin of quantum mechanics, classical v/s quantum mechanics, experimental and theoretical methods, the dual nature of matter by de Broglie, the uncertainty principle, localization, complementarity, valence bond theory and its applications, an introduction to molecular orbital theory, and computational chemistry.
Chapter 2: Reviews the introduction to thermodynamics, the importance and limitations of thermodynamics, and thermodynamic terms with definitions and examples: system and surroundings, properties of a system, state variables, processes, thermodynamic equilibrium, internal energy, enthalpy, and heat capacity of a system; the Zeroth law of thermodynamics; the First law of thermodynamics with definition, mathematical expressions, and heat capacity; spontaneous processes and the criteria for spontaneity; the Second law of thermodynamics, its equivalent forms, entropy and its illustrations; and the Third law of thermodynamics with definition and illustration.
Chapter 3: Provides information about the concept of lattice vibrations and thermal heat capacity, the Einstein and Debye theories of molar heat capacity and their limitations, and the band theory of solids: the origin of bands, the motion of an electron in the periodic field of a crystal, the Kronig-Penney model, Brillouin zones, the concept of holes, and the distinction between metal, insulator & semiconductor.
Chapter 4: Provides information about intrinsic semiconductors, doping and extrinsic semiconductors, and simple models for semiconductors, Donor and acceptor levels, p-n junction and rectification, tunnelling and resonant tunnelling. Concept of tunnelling, tunnelling through potential barrier, classical v/s quantum tunnelling, tunnelling junction & tunnelling diode.
Chapter 5: Allows us to learn about crystalloids and colloids; classifications of colloids based on state of aggregation, affinity, and nature of the dispersed phase; characteristics of colloidal solutions: dynamic properties, optical properties, electrical properties; and emulsions: introduction, classification, types of emulsions formed on mixing two partly or completely insoluble liquids, inter-conversion of dispersed phase and medium, characteristics of emulsions, and identification of the type of emulsion.
Naveen Kumar J. R.
To My Parents, My friends and all my colleagues
Without whom none of my success would be possible.
CHAPTER 1: QUANTUM MECHANICS
Quantum mechanics is a physical science dealing with the behaviour of matter and energy on the scale of atoms and subatomic particles/waves. It also forms the basis for the contemporary understanding of how huge objects such as stars and galaxies, and cosmological events such as the Big Bang, can be analyzed and explained.
Quantum mechanics is the foundation of several related disciplines including nanotechnology, condensed matter physics, quantum chemistry, structural biology, particle physics, and electronics. The term "quantum mechanics" was first coined by Max Born in 1924.
The acceptance by the general physics community of quantum mechanics is due to its accurate prediction of the physical behaviour of systems, including systems where Newtonian mechanics fails. Even general relativity is limited in ways quantum mechanics is not for describing systems at the atomic scale or smaller, at very low or very high energies, or the lowest temperatures.
Through a century of experimentation and applied science, the quantum mechanical theory has proven to be very successful and practical. The foundations of quantum mechanics date from the early 1800s, but the real beginnings of QM date from the work of Max Planck in 1900. Albert Einstein and Niels Bohr soon made essential contributions to what is now called the "old quantum theory."
However, it was not until 1924 that a complete picture emerged with Louis de Broglie's matter-wave hypothesis and the real importance of quantum mechanics became clear. Some of the most prominent scientists to subsequently contribute in the mid-1920s to what is now called the "new quantum mechanics" or "new physics" were Max Born, Paul Dirac, Werner Heisenberg, Wolfgang Pauli, and Erwin Schrödinger.
Later, the field was further expanded with work by Julian Schwinger, Sin-Itiro Tomonaga and Richard Feynman on the development of Quantum Electrodynamics in 1947, and by Murray Gell-Mann in particular for the development of Quantum Chromodynamics. The interference that produces coloured bands on bubbles cannot be explained by a model that depicts light as a particle; it can be explained by a model that depicts it as a wave.
The drawing shows sine waves, resembling waves on the surface of water, being reflected from two surfaces of a film of varying width, but that depiction of the wave nature of light is only a crude analogy. Early researchers differed in their explanations of the fundamental nature of what we now call electromagnetic radiation.
Some maintained that light and other frequencies of electromagnetic radiation are composed of particles, while others asserted that electromagnetic radiation is a wave phenomenon. In classical physics, these ideas are mutually contradictory.
Ever since the early days of QM, scientists have acknowledged that neither idea by itself can explain electromagnetic radiation. Despite the success of quantum mechanics, it does have some puzzling and counterintuitive elements:
1. For example, the behaviour of microscopic objects described in quantum mechanics is very different from our everyday experience, which may provoke some degree of incredulity.
2. Most of classical physics is now recognized to consist of special cases of quantum theory and relativity theory.
3. Dirac brought relativity theory to bear on quantum physics so that it could adequately deal with events that occur at a substantial fraction of the speed of light.
4. Classical physics, however, also deals with mass attraction (gravity), and no one has yet been able to bring gravity into a unified theory with relativistic quantum theory.
1.2: PLANCK'S HYPOTHESIS
In 1900 Max Planck proposed a formula for the intensity curve which did fit the experimental data quite well. He then set out to find a set of assumptions -- a model -- that would produce his formula. Instead of allowing energy to be continuously distributed among all frequencies, Planck's model required that the energy in the atomic vibrations of frequency f was some integer times a small, minimum, discrete energy,
Emin = hf
Where h is a constant, now known as Planck's constant,
h = 6.626176 × 10⁻³⁴ J s
Planck's proposal, then, requires that all the energy in the atomic vibrations with frequency f can be written as
E = nhf
Where n is an integer, n = 1, 2, 3 . . . No other values of the energy are allowed: the atomic oscillators could not have an energy of (2.73)hf or (5/8)hf. This idea that something -- the energy, in this case -- can have only certain discrete values is called quantization. We say that energy is quantized. This is referred to as Planck's quantum hypothesis. "Quantum" comes from the Latin for "how much": a fixed, discrete amount.
Planck did not realize how radical and far-reaching his proposals were. He viewed his strange assumptions as mathematical constructions to provide a formula that fit the experimental data. It was not until later when Einstein used very similar ideas to explain the Photoelectric Effect in 1905, that it was realized that these assumptions described "real Physics" and were much more than mathematical constructions to provide the right formula.
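Planck's quantization rule E = nhf can be sketched numerically. This is a minimal illustration, assuming an arbitrary oscillator frequency in the visible-light range; the function name and the rounded value of h are mine, not the text's:

```python
# Allowed vibrational energies under Planck's hypothesis, E = n*h*f.
h = 6.626e-34  # Planck's constant, J*s (rounded)

def allowed_energies(f, n_max):
    """Return the first n_max allowed energies E = n*h*f (n = 1, 2, ...)."""
    return [n * h * f for n in range(1, n_max + 1)]

# Example: an oscillator vibrating at 5.0e14 Hz (visible-light range).
# Energies between these discrete values are forbidden.
for n, E in enumerate(allowed_energies(5.0e14, 3), start=1):
    print(f"n = {n}: E = {E:.3e} J")
```

Note that the spacing between adjacent allowed energies is always hf, which is why the quantization is invisible for everyday (large-energy) oscillators.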
1.3: ORIGIN OF QUANTUM THEORY
1.3.1: Atomic & Subatomic Particles
The notion that the building blocks of matter are invisibly tiny particles called atoms is usually traced back to the Greek philosophers Leucippus of Miletus and Democritus of Abdera in the 5th Century BC. The English chemist John Dalton developed the atomic philosophy of the Greeks into a valid scientific theory in the early years of the 19th Century. His treatise New System of Chemical Philosophy gave compelling phenomenological evidence for the existence of atoms and applied the atomic theory to chemistry, providing a physical picture of how elements combine to form compounds consistent with the laws of definite and multiple proportions. Table 1.1 summarizes some very early measurements (by Sir Humphry Davy) on the relative proportions of nitrogen and oxygen in three gaseous compounds.
We would now identify these compounds as NO2, NO and N2O, respectively. We see in data such as these a confirmation of Dalton's atomic theory: that compounds consist of atoms of their constituent elements combined in small whole number ratios. The mass ratios in Table 1.1 are, with modern accuracy, 0.438, 0.875 and 1.750.
[Table 1.1 not included in this excerpt]
After over 2000 years of speculation and reasoning from indirect evidence, it is now possible in a sense to actually see individual atoms, as shown for example in Figure 1.1. The word "atom" comes from the Greek atomos, meaning literally "indivisible." It became evident in the late 19th Century, however, that the atom was not in fact the ultimate particle of matter. Michael Faraday's work had suggested the electrical nature of matter and the existence of subatomic particles. This became manifest with the discovery of radioactive decay by Henri Becquerel in 1896: the emission of alpha, beta and gamma particles from atoms. In 1897, J. J. Thomson identified the electron as a universal constituent of all atoms and showed that it carried a negative electrical charge, now designated −e.
Figure 1.1: Electron clouds of individual xenon atoms on a nickel (110) surface, imaged by a scanning tunnelling microscope at IBM Laboratories.
To probe the interior of the atom, Ernest Rutherford in 1911 bombarded a thin sheet of gold with a stream of positively-charged alpha particles emitted by a radioactive source. Most of the high-energy alpha particles passed right through the gold foil, but a small number were strongly deflected in a way that indicated the presence of a small but massive positive charge in the centre of the atom (Figure 1.2). Rutherford proposed the nuclear model of the atom. As we now understand it, an electrically-neutral atom of atomic number Z consists of a nucleus of positive charge +Ze, containing almost the entire mass of the atom, surrounded by Z electrons of minimal mass, each carrying a charge −e. The simplest atom is hydrogen, with Z = 1, consisting of a single electron outside a single proton of charge +e.
Figure 1.2: A Summary of Rutherford’s Experiments (a) A representation of the apparatus Rutherford used to detect deflections in a stream of α particles aimed at a thin gold foil target. The particles were produced by a sample of radium. (b) If Thomson’s model of the atom were correct, α particles should have passed straight through the gold foil. (c) But a small number of α particles were deflected in various directions, including right back at the source. This could be true only if the positive charge were much more massive than the α particle. It suggested that the mass of the gold atom is concentrated in a minimal region of space, which he called the nucleus.
With the discovery of the neutron by Chadwick in 1932, the structure of the atomic nucleus was clarified. A nucleus of atomic number Z and mass number A is composed of Z protons and A−Z neutrons. Nuclear diameters are of the order of several times 10⁻¹⁵ m. From the perspective of an atom, which is 10⁵ times larger, a nucleus behaves, for most purposes, like a point charge +Ze.
During the 1960s, compelling evidence began to emerge that protons and neutrons themselves had composite structures, with significant contributions by Murray Gell-Mann. According to the currently accepted "Standard Model," the proton and neutron are each made of three quarks, with compositions uud and udd, respectively. The up quark u has a charge of +2/3 e, while the down quark d has a charge of −1/3 e. Despite heroic experimental efforts, individual quarks have never been isolated, evidently placing them in the same category with magnetic monopoles. By contrast, the electron maintains its status as an indivisible elementary particle.
1.4: CLASSICAL V/S QUANTUM MECHANICS
In brief, the main difference between quantum and classical physics is the difference between a ramp and a staircase.
In classical mechanics, events (in general) are continuous, which is to say they move in smooth, orderly and predictable patterns. Projectile motion is an excellent example of classical mechanics, as are the colours of the rainbow, where frequencies progress continuously from red through violet. Events, in other words, proceed incrementally, as if up a ramp.
In quantum mechanics, events (in particular) are unpredictable, which is to say "jumps" occur that involve seemingly random transitions between states: hence the term "quantum leaps". Moreover, a quantum leap is an all-or-nothing proposition, sort of like jumping from the roof of one building onto another. You either make it or you break it! Events in the quantum world, in other words, jump from one stair to the next and are seemingly discontinuous.
Electrons, for example, transition between energy levels in an atom by making quantum leaps from one level to the next. This is seen in emission spectra, where the various colours, indicative of energy level transitions made by electrons, are separated by dark areas. The dark areas represent the range through which electrons make quantum -- and therefore discontinuous -- leaps between energy levels. There are many other differences between quantum and classical mechanics involving, for example, explanations of the so-called "ultraviolet catastrophe", but these are too technical to discuss in detail here.
Let me just say the final difference between classical and quantum mechanics is the quantum notion of the "complementary nature of light", which states that light behaves BOTH as a particle and as a wave. This seemingly contradictory concept shows how weird quantum physics can be when compared to classical physics.
1.5: DUAL NATURE OF MATTER BY DE BROGLIE
1.5.1: Matter Waves: De-Broglie Concept
In 1924, Louis de Broglie proposed that matter has a dual character, just like radiation. His concept of the dual nature of matter was based on the following observations:
1. The whole universe is composed of matter and electromagnetic radiation. Since both are forms of energy, they can be transformed into each other.
2. Nature loves symmetry. As radiation has a dual nature, matter should also possess a dual character.
According to the de Broglie concept of matter waves, matter has a dual nature: when matter is moving, wave properties (such as interference and diffraction) are associated with it, and when it is at rest it shows particle properties. The waves associated with moving particles are called matter waves or de Broglie waves.
1.5.2: Wavelength of De-Broglie Waves
Consider a photon whose energy is given by
E = hν = hc/λ – – (1)
Although a photon's rest mass is zero, it possesses an effective mass by virtue of its energy; according to the theory of relativity, its energy is given by
E = mc² – – (2)
From (1) and (2), the mass of the photon is
m = h/(cλ)
Therefore the momentum of the photon is
p = mc = h/λ – – (3)
so that
λ = h/p
If, instead of a photon, we consider a material particle of mass m moving with velocity v, then the momentum of the particle is p = mv. Therefore, the wavelength of the wave associated with this moving particle is given by:
λ = h/p = h/mv – – (4)
This wavelength is called the de Broglie wavelength.
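Equation (4) lends itself to a quick numerical check. The sketch below assumes rounded textbook values for h and the electron rest mass; the function name and the chosen speed are illustrative:

```python
# De Broglie wavelength: lambda = h / (m*v), equation (4) above.
h = 6.626e-34     # Planck's constant, J*s (rounded)
m_e = 9.109e-31   # electron rest mass, kg (rounded)

def de_broglie_wavelength(m, v):
    """Wavelength of the matter wave for a particle of mass m at speed v."""
    return h / (m * v)

# An electron moving at 1% of the speed of light:
lam = de_broglie_wavelength(m_e, 0.01 * 3.0e8)
print(f"{lam:.3e} m")  # on the order of 1e-10 m, i.e. atomic dimensions
```

The result, a few ångströms, is comparable to atomic spacings, which is why electron diffraction by crystals was able to confirm de Broglie's hypothesis.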
1.6: UNCERTAINTY PRINCIPLE
The uncertainty principle, also called the Heisenberg uncertainty principle or indeterminacy principle, is the statement, articulated in 1927 by the German physicist Werner Heisenberg, that the position and the velocity of an object cannot both be measured precisely at the same time, even in theory. The very concepts of exact position and exact velocity together, in fact, have no meaning in nature.
Ordinary experience provides no clue of this principle. It is easy to measure both the position and the velocity of, say, an automobile because the uncertainties implied by this principle for ordinary objects are too small to be observed. The complete rule stipulates that the product of the uncertainties in position and velocity is equal to or greater than a tiny physical quantity, h/(4π), where h is Planck's constant (about 6.6 × 10⁻³⁴ joule-second). Only for the exceedingly small masses of atoms and subatomic particles does the product of the uncertainties become significant.
Any attempt to measure precisely the velocity of a subatomic particle, such as an electron, will knock it about in an unpredictable way so that a simultaneous measurement of its position has no validity. This result has nothing to do with inadequacies in the measuring instruments, the technique, or the observer; it arises out of the intimate connection in nature between particles and waves in the realm of subatomic dimensions.
Every particle has a wave associated with it; each particle actually exhibits wavelike behaviour. The particle is most likely to be found in those places where the undulations of the wave are highest, or most intense. The more intense the undulations of the associated wave become, however, the more ill-defined becomes the wavelength, which in turn determines the momentum of the particle. So a strictly localized wave has an indeterminate wavelength; its associated particle, while having a definite position, has no certain velocity. A particle wave having a distinct wavelength, on the other hand, is spread out; the associated particle, while having a rather precise velocity, may be almost anywhere. A quite accurate measurement of one observable involves a relatively large uncertainty in the measurement of the other.
The uncertainty principle is alternatively expressed in terms of a particle’s momentum and position. The momentum of a particle is equal to the product of its mass times its velocity. Thus, the product of the uncertainties in the momentum and the position of a particle equals h/(4π) or more. The principle applies to other related (conjugate) pairs of observables, such as energy and time: the product of the uncertainty in energy measurement and the uncertainty in the time interval during which the measurement is made also equals h/(4π) or more. The same relation holds, for an unstable atom or nucleus, between the uncertainty in the quantity of energy radiated and the uncertainty in the lifetime of the unstable system as it makes a transition to a more stable state.
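The contrast between an electron and an everyday object described above can be made concrete. A minimal sketch of the stated rule Δx · Δp ≥ h/(4π), assuming an illustrative velocity uncertainty of 1 m/s; the function name and masses are mine:

```python
import math

# Heisenberg relation as stated in the text: dx * dp >= h / (4*pi),
# with momentum uncertainty dp = m * dv.
h = 6.626e-34  # Planck's constant, J*s (rounded)

def min_position_uncertainty(m, dv):
    """Smallest position uncertainty allowed for mass m (kg) with velocity uncertainty dv (m/s)."""
    dp = m * dv
    return h / (4 * math.pi * dp)

# Electron vs. a 1000-kg automobile, each with velocity known to within 1 m/s:
print(f"electron:   {min_position_uncertainty(9.1e-31, 1.0):.3e} m")
print(f"automobile: {min_position_uncertainty(1000.0, 1.0):.3e} m")
```

The electron's minimum position uncertainty comes out at tens of micrometres, enormous on the atomic scale, while the automobile's is some 10⁻³⁸ m, far too small ever to observe, exactly as the text argues.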
1.7: LOCALIZATION AND THE WAVE FUNCTION
The rules of quantum physics imply that only one localized grain of film is perceived as exposed by a wave function spread out over many grains.
There is one particle-like property that is not related to group representation theory, that of localization. Speaking classically, the carriers of mass and charge seem to be very small, point-like, with highly localized effects. This particle-like localization occurs in an exciting way in quantum mechanical experiments. To illustrate, suppose we have a single light wave function that goes through a single slit and impinges on a screen covered with film grains. The wave function spreads out after going through the slit and hits many grains of film. But surprisingly, a microscopic search will show that only one of the grains will be exposed by the light. It is as if there were a particle of light, a photon, hidden in the wave function, and it is the single grain hit by the particle that is exposed.
As another example, suppose we have a target proton surrounded by a sphere coated with film grains on the inside. An electron (electron-like wave function) is shot at the proton, and the wave function of the scattered electron spreads out in all directions, hitting every grain (see Scattering Experiments). But again, a microscopic examination will show that one and only one grain is (perceived as) exposed. As in the case of light, it is as if a particulate electron embedded in the wave function followed a particular trajectory and hit and exposed only one grain.
A second important aspect of quantum mechanics is its principle of complementarity or dialecticism. Is light a particle or a wave? Complementarity is the realization that particle and wave behaviour are mutually exclusive, yet that both are necessary for a complete description of all phenomena.
The different intuitive pictures which we use to describe atomic systems, although entirely adequate for given experiments, are nevertheless mutually exclusive. Thus, for instance, the Bohr atom can be described as a small-scale planetary system, having a central atomic nucleus about which the outer electrons revolve. For other experiments, however, it might be more convenient to imagine that the atomic nucleus is surrounded by a system of stationary waves whose frequency is characteristic of the radiation emanating from the atom. Finally, we can consider the atom chemically. Each picture is legitimate when used in the right place, but the different pictures are contradictory, and therefore we call them mutually complementary.
1.9: VALENCE BOND THEORY
This theory was introduced by Heitler and London. It is based primarily on the concepts of atomic orbitals, the electronic configuration of the elements, the overlapping of atomic orbitals, and the hybridization of atomic orbitals. The overlapping of atomic orbitals results in the formation of a chemical bond, and the electrons are localized in the bond region due to this overlapping.
Valence bond theory describes the electronic structure of molecules. The theory says that electrons fill the atomic orbitals of an atom within a molecule. It also states that the nucleus of one atom is attracted to the electrons of another atom. Now, we move on and look at the various postulates of the valence bond theory.
1.9.1: Postulates of Valence Bond Theory
- The overlapping of two half-filled valence orbitals of two different atoms results in the formation of the covalent bond. The overlapping causes the electron density between two bonded atoms to increase. This gives the property of stability to the molecule.
- If the atomic orbitals possess more than one unpaired electron, more than one bond can be formed; electrons already paired in the valence shell cannot take part in such bond formation.
- A covalent bond is directional; it lies along the region of overlap of the atomic orbitals.
- Based on the pattern of overlapping, there are two types of covalent bonds: the sigma bond and the pi bond. The covalent bond formed by sidewise overlapping of atomic orbitals is known as a pi bond, whereas the bond formed by overlapping of atomic orbitals along the internuclear axis is known as a sigma bond.
1.9.2: Limitations of Valence Bond Theory
Valence Bond theory is also not perfect. It has its own set of limitations.
- It fails to explain the tetravalency of carbon.
- This theory does not discuss the energies of the electrons.
- It assumes that electrons are localized in specific regions.
- An essential aspect of the Valence Bond theory is the condition of maximum overlap, which leads to the formation of the strongest possible bonds. This theory is used to explain the covalent bond formation in many molecules.
- For example, in the case of the F2 molecule, the F−F bond is formed by the overlap of pz orbitals of the two F atoms, each containing an unpaired electron. Since the nature of the overlapping orbitals is different in H2 and F2 molecules, the bond strength and bond lengths differ between H2 and F2 molecules.
- In an HF molecule, the covalent bond is formed by the overlap of the 1s orbital of H and the 2pz orbital of F, each containing an unpaired electron. Mutual sharing of electrons between H and F results in a covalent bond in HF.
1.10: INTRODUCTION TO MOLECULAR ORBITAL THEORY
1.10.1: Molecular Orbitals
There is a second dominant theory of chemical bonding whose basic ideas are distinct from those employed in valence bond theory. This alternative approach to the study of the electronic structure of molecules is called molecular orbital theory.
The theory applies the orbital concept, which was found to provide the key to the understanding of the electronic structure of atoms, to molecular systems.
The concept of an orbital, whether it is applied to the study of electrons in atoms or molecules, reduces a many-body problem to the same number of one-body problems. In essence, an orbital is the quantum mechanical description (wave function) of the motion of a single electron moving in the average potential field of the nuclei and of the other electrons which are present in the system. An orbital theory is an approximation because it replaces the instantaneous repulsions between the electrons by some average value.
The difficulty in obtaining an accurate description of an orbital is the difficulty in determining the average potential field of the other electrons. For example, the 2s orbital in the lithium atom is a function which determines the motion of an electron in the potential field of the nucleus and in the average field of the two electrons in the 1s orbital.
However, the 1s orbital is itself determined by the nuclear potential field and by the average potential field exerted by the electron in the 2s orbital. Each orbital is dependent upon and determined by all the other orbitals of the system. To know the form of one orbital, we must know the forms of all of them. This problem has a mathematical solution; the exploitation of this solution has proved to be one of the most powerful and widely used methods to obtain information on the electronic structure of matter.
A molecular orbital differs from the atomic case only in that the orbital must describe the motion of an electron in the field of more than one nucleus, as well as in the average field of the other electrons. A molecular orbital will, in general, therefore encompass all the nuclei in the molecule, rather than being centred on a single nucleus as in the atomic case. Once the forms and properties of the molecular orbitals are known, the electronic configuration and properties of the molecule are again determined by assigning electrons to the molecular orbitals in the order of increasing energy and in accordance with the Pauli Exclusion Principle.
In valence bond theory, a single electron pair bond between two atoms is described in terms of the overlap of atomic orbitals (or in the mathematical formulation of the theory, the product of atomic orbitals) which are centred on the nuclei joined by the bond. In molecular orbital theory, the bond is described in terms of a single orbital which is determined by the field of both nuclei. The two theories provide only a first approximation to the chemical bond.
1.11: COMPUTATIONAL CHEMISTRY
Computational chemistry describes the use of computer modelling and simulation
– including ab initio approaches based on quantum chemistry, and empirical approaches
– to study the structures and properties of molecules and materials. Computational chemistry is also used to describe the computational techniques aimed at understanding the structure and properties of molecules and materials.
Chemists have been some of the most active and innovative participants in this rapid expansion of computational science. Computational chemistry is merely the application of chemical, mathematical and computing skills to the solution of interesting chemical problems. It uses computers to generate information such as properties of molecules or simulated experimental results. Some conventional computer software used for computational chemistry includes:
1. Gaussian xx (Gaussian 94 at the time of writing)
Computational chemistry has become a useful way to investigate materials that are too difficult to find or too expensive to purchase. It also helps chemists make predictions before running the actual experiments so that they can be better prepared for making observations. The Schrodinger equation (explained in another section) is the basis for most of the computational chemistry that scientists use, because the Schrodinger equation models atoms and molecules mathematically. For instance, you can calculate:
1. Electronic structure determinations
2. Geometry optimizations
3. Frequency calculations
4. Transition structures
5. Protein calculations, i.e. Docking
6. Electron and charge distributions
7. Potential energy surfaces (PES)
8. Rate constants for chemical reactions (kinetics)
9. Thermodynamic calculations: heats of reaction, energies of activation
Currently, there are two ways to approach chemistry problems: computational quantum chemistry and non-computational quantum chemistry. Computational quantum chemistry is primarily concerned with the numerical computation of molecular electronic structures by ab initio and semi-empirical techniques, while non-computational quantum chemistry deals with the formulation of analytical expressions for the properties of molecules and their reactions.
Scientists mainly use three different methods to make calculations:
1. Ab initio (Latin for "from scratch"): a group of methods in which molecular structures are calculated using nothing but the Schrödinger equation, the values of the fundamental constants and the atomic numbers of the atoms present (Atkins, 1991).
2. Semi-empirical techniques use approximations from empirical (experimental) data to provide the input into the mathematical models.
3. Molecular mechanics uses classical physics to explain and interpret the behaviour of atoms and molecules.
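As a minimal illustration of the molecular-mechanics idea in point 3 (classical physics only, no Schrödinger equation), the sketch below sums a 12-6 Lennard-Jones pair potential over a toy cluster. The parameters and coordinates are illustrative values for argon chosen for this sketch, not taken from this text:

```python
import math

def lennard_jones(r, epsilon=0.997, sigma=3.40):
    """Classical 12-6 Lennard-Jones pair energy in kJ/mol.

    epsilon: well depth; sigma: separation where the energy crosses zero.
    These are illustrative argon-like parameters (r and sigma in angstroms).
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def total_energy(positions):
    """Total energy of a cluster: sum over all unique atom pairs."""
    e = 0.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            r = math.dist(positions[i], positions[j])
            e += lennard_jones(r)
    return e

# A hypothetical three-atom cluster (coordinates in angstroms)
atoms = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (1.9, 3.3, 0.0)]
print(round(total_energy(atoms), 3))
```

At the pair-minimum distance r = 2^(1/6) σ the energy is exactly -epsilon, which is a quick sanity check on any implementation of this potential.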
CHAPTER 2: BASICS OF THERMODYNAMICS
Thermodynamics is the study of the energy, principally heat energy, which accompanies chemical or physical changes. Some chemical reactions release heat energy; they are called exothermic reactions, and they have a negative enthalpy change. Others absorb heat energy and are called endothermic reactions, and they have a positive enthalpy change. But thermodynamics is concerned with more than just heat energy. The change in a level of organization or disorganization of reactants and products as changes take place is described by the entropy change of the process. For example, the conversion of one gram of liquid water to gaseous water is in the direction of increasing disorder, the molecules being much more disorganized as a gas than as a liquid. The increase in disorder is described as an increase in entropy, and the change in entropy is positive.
Whether a chemical reaction or physical change will occur depends on both the enthalpy and the entropy of the process, quantities that can be calculated from tabulated data. Both terms are combined in the free energy—the third and most important thermodynamic term. If the change in free energy is negative, the reaction proceeds to the right; such a reaction is called spontaneous. If the sign is positive, the reaction will not proceed as written; such a reaction is nonspontaneous. A good prediction of whether a reaction will take place can therefore be made by using tabulated data to calculate the change in free energy.
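The spontaneity criterion described above (free energy change = enthalpy change minus temperature times entropy change, developed formally later in this chapter) can be sketched as a small calculation. The water-vaporization numbers below are approximate literature values used only for illustration:

```python
def gibbs_free_energy_change(delta_h, delta_s, temperature):
    """Free energy change dG = dH - T*dS; negative dG means spontaneous.

    delta_h in kJ/mol, delta_s in kJ/(mol K), temperature in K.
    """
    return delta_h - temperature * delta_s

# Vaporization of water (approximate literature values): endothermic
# (dH > 0) but entropy increases (dS > 0), as in the text's example.
for t in (298.15, 400.0):
    dg = gibbs_free_energy_change(40.7, 0.109, t)
    verdict = "spontaneous" if dg < 0 else "nonspontaneous"
    print(f"T = {t:6.2f} K: dG = {dg:+6.1f} kJ/mol ({verdict})")
```

The sign flip with temperature shows why water evaporates readily when hot but not when cold: the unfavourable enthalpy term is eventually outweighed by the favourable entropy term.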
2.2: IMPORTANCE AND LIMITATIONS OF THERMODYNAMICS
Thermodynamics is an essential part of physics, chemistry, and engineering. Therefore, it is a critical area of study for those in science and technology. Thermodynamics also finds importance in ecology, energy, and other studies.
In addition, similar concepts (especially those analogous to entropy) are used in information theory/computer science and even social sciences.
1. The first law doesn't predict the direction of energy flow - it requires the second law.
2. Thermodynamics predicts what we observe but doesn't actually explain why it happens; it explains that heat flows from hot to cold but doesn't explain what energy actually is.
3. Thermodynamics is statistically based. It explains what is most probable, but there is still a finite probability of situations that would violate the macroscopic predictions of thermodynamics, even though those violations have such a low probability that they would not be expected to occur anywhere in the universe.
4. Thermodynamics does not account for intelligence - although some attempts have been made to account for the energy expended by decision making. For more discussion on this issue, you may want to look up "Maxwell's Demon".
2.3: THERMODYNAMIC TERMS DEFINITION AND EXAMPLES FOR SYSTEM AND SURROUNDINGS
A thermodynamic system is defined as a definite quantity of matter or a region in space upon which attention is focussed in the analysis of a problem. We may want to study a quantity of matter contained within a closed rigid-walled chamber, or we may want to consider something such as a gas pipeline through which matter flows.
The composition of the matter inside the system may be fixed or may change through chemical and nuclear reactions. A system may be arbitrarily defined, but defining it becomes essential when the exchange of energy between the system and everything outside it is considered; judging the energetics of this exchange is the critical step.
Types of systems
Two types of systems can be distinguished. These are referred to, respectively, as closed systems and open systems or control volumes. A closed system or a control mass refers to a fixed quantity of matter, whereas a control volume is a region in space through which mass may flow.
A particular type of closed system that does not interact with its surroundings is called an isolated system. Two types of exchange can occur between the system and its surroundings:
1. Energy exchange (heat or work) and
2. Exchange of matter (movement of molecules across the boundary of the system and surroundings).
Based on the types of exchange, one can define
1. Isolated systems: no exchange of matter and energy
2. Closed systems: no exchange of matter but some exchange of energy
3. Open systems: exchange of both matter and energy
If the boundary does not allow heat (energy) exchange to take place, it is called an adiabatic boundary; otherwise it is a diathermal boundary.
Everything external to the system is surroundings. The system is distinguished from its surroundings by a specified boundary which may be at rest or in motion. The interactions between a system and its surroundings, which take place across the boundary, play an essential role in thermodynamics. A system and its surroundings together comprise a universe.
2.3.1: Properties of a System
To describe a system and predict its behaviour requires a knowledge of its properties and how those properties are related. Properties are macroscopic characteristics of a system such as mass, volume, energy, pressure and temperature to which numerical values can be assigned at a given time without knowledge of the past history of the system. Many other properties are considered during the course of our study.
The value of a property of a system is independent of the process or the path followed by the system in reaching a particular state. The change in the value of the property depends only on the initial and the final states.
Intensive and Extensive Properties
Some properties depend on the size or extent of the system, while others are independent of it.
Properties like volume, which depend on the size of the system, are called extensive properties. Properties like temperature and pressure, which are independent of the mass of the system, are called intensive properties.
The test for an intensive property is to observe how it is affected when a given system is combined with some fraction of an exact replica of itself to create a new system differing only in size. Intensive properties are those that are unchanged by this process, whereas those properties whose values are increased or decreased in proportion to the enlargement or reduction of the system are extensive properties.
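The replica test above can be sketched numerically. This is a toy illustration with made-up values; the dictionary keys are not a real API, just labels for this sketch:

```python
def combine_with_replica(system, fraction=1.0):
    """Merge a system with a scaled replica of itself (the text's test).

    fraction=1.0 means combining with a full copy, i.e. doubling the system.
    """
    n = system["moles"] * (1 + fraction)
    v = system["volume"] * (1 + fraction)
    return {
        "moles": n,                            # extensive: scales with size
        "volume": v,                           # extensive: scales with size
        "temperature": system["temperature"],  # intensive: unchanged
        "density": n / v,                      # intensive: ratio of two extensives
    }

gas = {"moles": 2.0, "volume": 0.05, "temperature": 300.0}
bigger = combine_with_replica(gas)
print(bigger)
```

Note that the molar density n/V, a ratio of two extensive quantities, comes out unchanged: dividing one extensive property by another always yields an intensive one.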
2.3.2: State Variables
State refers to the energy content of a given system, which is its condition described by its properties. The state is defined by specifying certain variables such as temperature (“T”), pressure (“P”), volume (“V”) and composition (“µ”).
State variables are the quantities whose changes describe a process: when a reaction proceeds, it does so because of a change of state, and that change is expressed through the state variables.
Types of state variables
1. Extensive variables
2. Intensive variables
Extensive variables are proportional to the quantity of matter in the system (such as volume). Intensive variables are independent of the quantity of matter and instead describe the system as a whole (such as density, temperature, and concentration).
2.3.3: Thermodynamic Process
When the system undergoes a change from one thermodynamic state to another due to a change in properties such as temperature, pressure or volume, the system is said to have undergone a thermodynamic process. The various types of thermodynamic process are the isothermal, adiabatic, isochoric, isobaric and reversible processes, described below:
When the system undergoes a change from one state to another but its temperature remains constant, the system is said to have undergone an isothermal process. For instance, in our example of hot water in a thermos flask, if we remove a certain quantity of water from the flask but keep its temperature constant at 50 degrees Celsius, the process is an isothermal process.
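Beyond the flask illustration, the standard worked example of an isothermal process (not from this text, but a well-known result) is the reversible isothermal expansion of an ideal gas, for which the work done by the gas is W = nRT ln(V2/V1):

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def isothermal_work(n, t, v1, v2):
    """Work done BY an ideal gas in a reversible isothermal expansion.

    W = n*R*T*ln(V2/V1); the temperature T stays constant throughout.
    n in mol, t in K, volumes in any consistent units.
    """
    return n * R * t * math.log(v2 / v1)

# 1 mol of gas doubling its volume at 323.15 K (50 degrees Celsius,
# matching the temperature in the text's flask example)
w = isothermal_work(1.0, 323.15, 0.010, 0.020)
print(f"W = {w:.0f} J")
```

Because T is constant for an ideal gas, the internal energy does not change, so all of this work is supplied by heat flowing in from the surroundings.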
The process during which no heat is exchanged between the system and its surroundings is called an adiabatic process. The wall of a system which does not allow the flow of heat through it is called an adiabatic wall, while a wall which allows the flow of heat is called a diathermic wall.
The process, during which the volume of the system remains constant, is called an isochoric process. Heating of gas in a closed cylinder is an example of the isochoric process.
The process during which the pressure of the system remains constant is called an isobaric process. Example: suppose there is fuel in a piston-and-cylinder arrangement. When this fuel is burnt, pressure is generated inside the engine, and as more fuel burns, more pressure is created. But if the gases are allowed to expand by letting the piston move outward, the pressure of the system can be kept constant.
The constant pressure and constant volume processes are important. The Otto and Diesel cycles, used in the petrol and diesel engine respectively, contain constant volume and constant pressure processes. In practical situations, ideal constant volume and constant pressure processes cannot be achieved.
In simple words, a process which can be completely reversed is called a reversible process. This means that the system can be returned entirely to its original properties. A process can be perfectly reversible only if the changes during the process are infinitesimally small. In practice it is not possible to carry out such infinitesimal changes, which would require unlimited time, so the reversible process is also an ideal process. The changes which occur during a reversible process are in equilibrium with each other.
2.3.4: Thermodynamic Equilibrium
Let us suppose that there are two bodies at different temperatures, one hot and one cold. When these two bodies are brought into physical contact with each other, the temperature of both bodies will change: the hot body will tend to become colder while the cold body will tend to become hotter. Eventually both bodies reach the same temperature, and they are said to be in thermodynamic equilibrium with each other. An isolated system in which there is no change in the macroscopic properties, such as entropy and internal energy, is said to be in thermodynamic equilibrium. The state of a system in thermodynamic equilibrium is determined by intensive properties such as temperature and pressure.
Whenever the system is in thermodynamic equilibrium, it tends to remain in this state infinitely and will not change spontaneously. Thus when the system is in thermodynamic equilibrium, there won’t be any spontaneous change in its macroscopic properties.
Conditions for Thermodynamic Equilibrium
The system is said to be in thermodynamic equilibrium if the following three equilibrium conditions are satisfied:
1. Mechanical equilibrium
2. Chemical equilibrium
3. Thermal equilibrium
When there are no unbalanced forces within the system and between the system and the surroundings, the system is said to be in mechanical equilibrium. The system is also said to be in mechanical equilibrium when the pressure throughout the system and between the system and surroundings is the same. Whenever unbalanced forces exist within the system, they will be neutralized to attain the condition of equilibrium. Two systems are said to be in mechanical equilibrium with each other when their pressures are the same.
The system is said to be in chemical equilibrium when there are no chemical reactions going on within the system, or there is no transfer of matter from one part of the system to other due to diffusion. Two systems are said to be in chemical equilibrium with each other when their chemical potentials are the same.
When the system is in mechanical and chemical equilibrium and there is no further change in any of its properties, the system is said to be in thermal equilibrium. When the temperature of the system is uniform, and not changing throughout the system or in the surroundings, the system is in thermal equilibrium. Two systems are in thermal equilibrium with each other if their temperatures are the same.
For a system to be in thermodynamic equilibrium, it must be in mechanical, chemical and thermal equilibrium simultaneously. If any one of the above conditions is not fulfilled, the system is in non-equilibrium.
2.3.5: Internal Energy
The internal energy of a system is identified with the random, disordered motion of molecules; the total (internal) energy in a system includes potential and kinetic energy. This is a contrast to external energy which is a function of the sample with respect to the outside environment (e.g. kinetic energy if the sample is moving or potential energy if the sample is at a height from the ground etc.). The symbol for Internal Energy Change is ΔU.
Energy on a smaller scale
1. Internal energy includes energy on a microscopic scale
2. It is the sum of all the microscopic energies such as:
- Translational kinetic energy
- Vibrational and rotational kinetic energy
- Potential energy from intermolecular forces
When a process takes place at constant pressure, the heat absorbed or released is equal to the Enthalpy change. Enthalpy is sometimes known as “heat content”, but “enthalpy” is an interesting and unusual word, so most people like to use it. Etymologically, the word “entropy” is derived from the Greek, meaning “turning”, and “enthalpy” is derived from the Greek meaning “warming”. As for pronunciation, Entropy is usually stressed on its first syllable, while enthalpy is usually stressed on the second. Enthalpy (H) is nothing but the sum of the internal energy (U) and the product of pressure (P) and volume (V).
Enthalpy H can be written as,
H = U + pV
H = Enthalpy of the system
U = Internal energy of the system
p = Pressure of the system
V = Volume of the system
Enthalpy is not measured directly; however, the change in enthalpy (ΔH) is measured, which is the heat added or lost by the system. It is entirely dependent on the state functions T, p and U.
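The definition H = U + pV and the constant-pressure relationship above can be sketched as a short calculation. The numerical values below are invented for illustration only:

```python
def enthalpy(u, p, v):
    """H = U + pV, all quantities in consistent SI units (J, Pa, m^3)."""
    return u + p * v

def enthalpy_change(delta_u, p, delta_v):
    """At constant pressure, dH = dU + p*dV equals the heat absorbed."""
    return delta_u + p * delta_v

# Illustrative numbers: a gas gains 500 J of internal energy and
# expands by 1 litre (0.001 m^3) against a constant 1 atm (101325 Pa)
dh = enthalpy_change(delta_u=500.0, p=101325.0, delta_v=0.001)
print(f"dH = {dh:.1f} J")
```

The p*dV term is the extra heat needed to push back the surroundings as the system expands, which is why dH exceeds dU for an expansion at constant pressure.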
2.3.7: Heat Capacity
The heat capacity of an object is the energy transfer by heating per unit temperature change. That is,
C = q / ΔT
where q is the energy transferred by heating and ΔT is the resulting temperature change.
In this expression, we will frequently put subscripts on C, Cp or Cv, for instance, to denote the conditions under which the heat capacity has been determined.
2.4: ZEROTH LAW OF THERMODYNAMICS
The zeroth law is a consequence of thermal equilibrium and allows us to conclude that temperature is a well-defined physical quantity. The zeroth law of thermodynamics states:
“If a body A and a body B are in thermal equilibrium with each other, then a body C which is in thermal equilibrium with body B will also be in thermal equilibrium with body A, and the temperature of body C is equal to the temperature of body A.”
It is the zeroth law because it precedes the first and second laws of thermodynamics and is also a tacit assumption in both laws.
We use the zeroth law when we wish to compare the temperatures of two objects, A and B. We can do this with a thermometer, C: place it against object A until it reaches thermal equilibrium and read the temperature of A, then place the thermometer against object B until thermal equilibrium is reached and read the temperature of B. If the two readings are the same, then A and B are in thermal equilibrium with each other.
Figure 2.1: The Zeroth law of thermodynamics.