Friday, April 20, 2007

Bjerrum length

The Bjerrum length is the separation at which the electrostatic interaction between two elementary charges is comparable in magnitude to the thermal energy scale, k_B T, where k_B is the Boltzmann constant and T is the absolute temperature in kelvin.

In standard units, the Bjerrum length is given by

\lambda_B = \frac{e^2}{4\pi \epsilon \ k_B T},

where e is the elementary charge and \epsilon is the permittivity of the medium (\epsilon = \epsilon_r \epsilon_0, with \epsilon_r the relative dielectric constant and \epsilon_0 the vacuum permittivity). For water at room temperature (T = 300 K), \epsilon \approx 80 \, \epsilon_0, so that \lambda_B \approx 0.7 nm.

In Gaussian units, 4\pi\epsilon_0 = 1 and the Bjerrum length has the simpler form

\lambda_B = \frac{e^2}{\epsilon_r k_B T},

where \epsilon_r = \epsilon / \epsilon_0 is the relative dielectric constant.
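
As a quick numerical check of the SI formula above, the sketch below evaluates \lambda_B for water at room temperature. It is Python using scipy.constants; the function name bjerrum_length is our own choice.

# Bjerrum length in SI units: lambda_B = e^2 / (4 pi eps_r eps_0 k_B T).
from math import pi
from scipy.constants import e, k as k_B, epsilon_0

def bjerrum_length(eps_r, T):
    # eps_r: relative dielectric constant; T: absolute temperature in K
    return e**2 / (4 * pi * eps_r * epsilon_0 * k_B * T)

# Water at room temperature: eps_r ~ 80, T = 300 K -> ~0.70 nm
print(f"lambda_B = {bjerrum_length(80.0, 300.0) * 1e9:.2f} nm")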

DLVO theory

The DLVO theory is named after Derjaguin, Landau, Verwey and Overbeek who developed it in the 1940s.

The theory describes the force between charged surfaces interacting through a liquid medium. It combines the effects of the van der Waals attraction and the electrostatic repulsion due to the so-called double layer of counterions.

The electrostatic part of the DLVO interaction is computed in the mean field approximation. For two spheres of radius a, each carrying a constant surface charge of Z elementary charges, separated by a center-to-center distance r in a fluid of dielectric constant \epsilon containing a concentration n of monovalent ions, the electrostatic potential takes the form of a screened-Coulomb or Yukawa repulsion,

\beta U(r) = Z^2 \lambda_B \, \left(\frac{\exp(\kappa a)}{1 + \kappa a}\right)^2 \, \frac{\exp(-\kappa r)}{r},

where \lambda_B is the Bjerrum length, \kappa^{-1} is the Debye-Hückel screening length, given by \kappa^2 = 4\pi \lambda_B n, and \beta^{-1} = k_B T is the thermal energy scale at absolute temperature T.
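
For illustration, the screened-Coulomb repulsion above can be evaluated directly. The Python sketch below follows the text's conventions (\kappa^2 = 4\pi \lambda_B n, with n the number density of monovalent ions); the function name and the example values of Z, a and the salt concentration are illustrative assumptions, not numbers from the text.

import numpy as np
from scipy.constants import e, k as k_B, epsilon_0, N_A

def dlvo_repulsion(r, Z, a, c_molar, eps_r=80.0, T=300.0):
    # Returns beta*U(r), the screened-Coulomb repulsion in units of k_B T.
    lam_B = e**2 / (4 * np.pi * eps_r * epsilon_0 * k_B * T)  # Bjerrum length (m)
    n = c_molar * 1e3 * N_A                 # ion number density (1/m^3)
    kappa = np.sqrt(4 * np.pi * lam_B * n)  # inverse Debye screening length (1/m)
    prefactor = (np.exp(kappa * a) / (1 + kappa * a))**2
    return Z**2 * lam_B * prefactor * np.exp(-kappa * r) / r

# Illustrative values: Z = 1000 charges, a = 100 nm spheres, 1 mM salt.
print(dlvo_repulsion(r=250e-9, Z=1000, a=100e-9, c_molar=1e-3))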

Tuesday, April 17, 2007

Piranha solution

Piranha cleaning solution, also known as piranha etch, is a warm mixture of sulphuric acid (H2SO4) and hydrogen peroxide (H2O2), used to clean organic residues off substrates. Because the mixture is a strong oxidizer, it will remove most organic matter, and it will also hydroxylate most surfaces (add OH groups), making them extremely hydrophilic (water compatible).

Many different mixture ratios are commonly used, and all are called piranha. A typical mixture is 3:1 concentrated sulphuric acid to hydrogen peroxide solution (such as a 30% hydrogen peroxide stock solution). Other protocols may use a 4:1 or even 7:1 mixture. A closely related mixture, sometimes called "base piranha", is a 3:1 mixture of ammonium hydroxide (NH4OH) with hydrogen peroxide.
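
The ratio arithmetic is simple to script. Below is a minimal Python helper (the function name is ours) that splits a target bath volume according to the chosen acid-to-peroxide ratio.

def piranha_volumes(total_ml, ratio=3.0):
    # Split a bath volume into acid and peroxide parts for a given
    # acid:peroxide volume ratio (3:1, 4:1, 7:1, ...).
    acid = total_ml * ratio / (ratio + 1)
    peroxide = total_ml - acid
    return acid, peroxide

# 100 mL of 3:1 piranha -> 75.0 mL H2SO4 and 25.0 mL peroxide stock
print(piranha_volumes(100.0))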

Use rubber gloves and a face shield, and perform the reaction in a fume hood or outdoors. To create the piranha bath, one typically starts with a bath of sulphuric acid, to which the peroxide is carefully added. One must always add the peroxide to the acid, never the other way around. The mixing reaction is exothermic, so the solution becomes hot. Once the mixture has stabilized, it can be further heated to sustain its reactivity.

Electron beam lithography

The practice of using a beam of electrons to generate patterns on a surface is known as electron beam lithography. The primary advantage of this technique is that it is one of the ways to beat the diffraction limit of light and make features in the sub-micrometre regime; as of 2005, beam widths may be on the order of nanometres. This form of lithography has found wide usage in research, but has yet to become a standard technique in industry. The main reason for this is speed: the beam must be scanned across the surface to be patterned, so pattern generation is serial. This makes for very slow pattern generation compared with a parallel technique like photolithography (the current standard), in which the entire surface is patterned at once. As an example, patterning a single semiconductor layer containing 60 devices (each device consisting of many layers) would take an electron beam system approximately two hours, compared with less than two minutes for an optical system.
One caveat: although electron beam lithography is used in industry, it is mainly used to generate the exposure masks for conventional photolithography rather than to write device features directly. However, when it is more cost-effective to avoid masks, e.g. for low-volume production or prototyping, electron-beam direct writing is also used.

For commercial applications, electron beam lithography is usually performed using dedicated, very expensive (>$2M USD) beam writing systems. For research applications, it is very common to perform electron beam lithography using an electron microscope fitted with a home-made or relatively low-cost lithography accessory. Such systems have produced linewidths of ~20 nm since at least 1990, while current systems have produced linewidths on the order of 10 nm or smaller. These smallest features have generally been isolated features; nested features exacerbate the proximity effect, whereby electrons from the exposure of an adjacent feature spill over into the exposure of the feature currently being written, effectively enlarging its image and reducing its contrast, i.e., the difference between maximum and minimum intensity. Nested-feature resolution is therefore harder to control. For most resists it is difficult to go below 25 nm lines and spaces, and a limit of 20 nm lines and spaces has been found.

With today's electron optics, electron beam widths can routinely go down to a few nanometres, limited mainly by aberrations and space charge. However, the practical resolution limit is determined not by the beam size but by forward scattering in the photoresist and secondary electron travel within the photoresist. Forward scattering can be decreased by using higher-energy electrons or thinner photoresist, but the generation of secondary electrons is inevitable. The travel distance of secondary electrons is not a fundamentally derived physical value, but a statistical parameter often determined from many experiments or from Monte Carlo simulations.

In addition to secondary electrons, primary electrons from the incident beam with sufficient energy to penetrate the photoresist can be multiply scattered over large distances from underlying films and/or the substrate. This leads to exposure of areas at a significant distance from the desired exposure location. These electrons are called backscattered electrons and have the same effect as long-range flare in optical projection systems. A large enough dose of backscattered electrons can lead to complete removal of photoresist in the desired pattern area.
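
The forward-scattering, secondary-electron and backscattering contributions described above are commonly lumped into a two-Gaussian point spread function when correcting for the proximity effect. Below is a minimal Python sketch of that standard model; the parameter values (forward width alpha, backscatter range beta, backscatter-to-forward ratio eta) are order-of-magnitude guesses for a tens-of-keV beam on silicon, not measured numbers.

import numpy as np

def proximity_psf(r, alpha, beta, eta):
    # Deposited energy density at radius r from the beam axis: a narrow
    # forward-scattering Gaussian of width alpha plus a wide
    # backscattering Gaussian of width beta, weighted by eta.
    forward = np.exp(-(r / alpha)**2) / alpha**2
    back = eta * np.exp(-(r / beta)**2) / beta**2
    return (forward + back) / (np.pi * (1 + eta))

# Illustrative numbers only: alpha ~ 30 nm, beta ~ 3 um, eta ~ 0.7.
r = np.array([0.0, 50e-9, 500e-9, 5e-6])
print(proximity_psf(r, alpha=30e-9, beta=3e-6, eta=0.7))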

Sunday, April 15, 2007

Dip Pen Nanolithography

Dip Pen Nanolithography (DPN) is a scanning probe lithography technique where an atomic force microscope tip is used to transfer molecules to a surface via a solvent meniscus. This technique allows surface patterning on scales of under 100 nanometres. DPN is the nanotechnology analog of the dip pen (also called the quill pen), where the tip of an atomic force microscope cantilever acts as a "pen," which is coated with a chemical compound or mixture acting as an "ink," and put in contact with a substrate, the "paper."

DPN enables direct deposition of nanoscale materials onto a substrate in a flexible manner. The vehicle for deposition can include pyramidal scanning probe microscope tips, hollow tips, and even tips on thermally actuated cantilevers. Applications of this technology currently range through chemistry, materials science, and the life sciences, and include such work as ultra high density biological nanoarrays, additive photomask repair, and brand protection for pharmaceuticals.
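
As a rough feel for how dwell time sets feature size: DPN dot growth is often described by a simple constant-flux picture in which the deposited area grows linearly with tip-substrate contact time, so the dot radius scales as the square root of the dwell time. The Python sketch below assumes that picture; the coverage rate is an illustrative placeholder that would have to be calibrated for a given ink and substrate.

from math import pi, sqrt

def dot_radius_nm(dwell_s, rate_nm2_per_s):
    # Constant-flux model: area = rate * t, so r(t) = sqrt(rate * t / pi).
    return sqrt(rate_nm2_per_s * dwell_s / pi)

# Placeholder calibration: 1e4 nm^2/s of coverage (e.g. a thiol on gold).
for t in (0.1, 1.0, 10.0):
    print(f"dwell {t:5.1f} s -> dot radius ~ {dot_radius_nm(t, 1e4):6.1f} nm")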

The technique was developed in 1999 by a research group at Northwestern University led by Chad Mirkin [1]. The company NanoInk, Inc. holds a patent on Dip Pen Nanolithography, and "DPN" and "Dip Pen Nanolithography" are trademarks or registered trademarks of NanoInk.

Top-down and bottom-up design

Top-down and bottom-up are strategies of information processing and knowledge ordering, mostly involving software, and by extension other humanistic and scientific system theories (see systemics).

In a top-down approach an overview of the system is first formulated, specifying but not detailing any first-level subsystems. Each subsystem is then refined in ever greater detail, sometimes across many additional subsystem levels, until the entire specification is reduced to base elements. A top-down model is often specified with the assistance of "black boxes" that make it easier to manipulate. However, black boxes may fail to elucidate elementary mechanisms, or may not be detailed enough to realistically validate the model.

In a bottom-up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which in turn are linked, sometimes across many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, whereby the beginnings are small but eventually grow in complexity and completeness. However, such "organic strategies" may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization rather than meeting a global purpose.
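
To make the contrast concrete, here is a hypothetical miniature example in Python: a small report generator built both ways. In the top-down fragment the top-level routine is written first against "black box" stubs; in the bottom-up fragment the base elements are fully specified first and then composed. All names are invented for illustration.

# Top-down: the top-level routine comes first; the helpers it calls are
# treated as black boxes to be refined in later passes.
def generate_report(raw):
    records = parse(raw)          # black box, refined below
    summary = summarize(records)  # black box, refined below
    return render(summary)        # black box, refined below

# Bottom-up: the base elements are specified in detail first and then
# linked together into the larger system.
def parse(raw):
    return [line.split(",") for line in raw.strip().splitlines()]

def summarize(records):
    return {"rows": len(records)}

def render(summary):
    return f"report: {summary['rows']} rows"

print(generate_report("a,1\nb,2"))  # -> report: 2 rows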

Nanotechnology

Top-down and bottom-up are used as two approaches for assembling nanoscale materials and devices. Bottom-up approaches seek to have smaller (usually molecular) components arrange themselves into more complex assemblies, while top-down approaches seek to create nanoscale devices by using larger, externally-controlled ones to direct their assembly.

The top-down approach often uses the traditional workshop or microfabrication methods, where externally-controlled tools are used to cut, mill and shape materials into the desired shape and order. Bottom-up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to arrange themselves automatically into some useful conformation. These approaches utilize the concepts of molecular self-assembly and/or molecular recognition. See also supramolecular chemistry.

Such bottom-up approaches should, broadly speaking, be able to produce devices in parallel and much more cheaply than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increase.