Is it possible to rearrange atoms?
Explain that a water molecule has two hydrogen atoms and one oxygen atom. Ask students to use Snap Cubes to make a water molecule. Then show students the animated Snap Cube model of a water molecule.

Plants need carbon dioxide to live. Carbon dioxide is also in the bubbles in soda pop. Use the animation to show students a ball model of a carbon dioxide molecule (CO₂) and post the carbon dioxide molecule card.
Explain that a carbon dioxide molecule has 1 carbon atom and 2 oxygen atoms. Ask students to use Snap Cubes to make a carbon dioxide molecule. Then show students the animated Snap Cube model of a carbon dioxide molecule.

Ammonia is found in many household cleaning solutions. You need to be careful with these kinds of cleaning solutions because they could hurt your skin and eyes.
Use the animation to show students a ball model of an ammonia molecule (NH₃) and post the ammonia molecule card. Explain that a molecule of ammonia has 1 nitrogen atom and 3 hydrogen atoms. Ask students to use Snap Cubes to make an ammonia molecule. Then show students the animated Snap Cube model of an ammonia molecule.

Use the animation to show students a ball model of a methane molecule (CH₄) and post the methane molecule card.
Explain that a molecule of methane has 1 carbon atom and 4 hydrogen atoms. Ask students to use Snap Cubes to make a methane molecule. Then show students the animated Snap Cube model of a methane molecule.
Use the animation to show students a ball model of a hydrogen peroxide molecule (H₂O₂) and post the hydrogen peroxide molecule card. Explain that a molecule of hydrogen peroxide has 2 hydrogen atoms and 2 oxygen atoms. Ask students to use the Snap Cubes to make a hydrogen peroxide molecule.
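For teachers who want to extend the activity, the Snap Cube idea translates directly into a short program. Below is a minimal sketch (the MOLECULES table and atom_tally helper are illustrative names, not part of the lesson materials) that records each molecule as a tally of atoms and checks that a rearrangement conserves every one of them:

```python
from collections import Counter

# Each molecule is a tally of its atoms, like a Snap Cube model.
MOLECULES = {
    "H2O":  Counter({"H": 2, "O": 1}),  # water
    "CO2":  Counter({"C": 1, "O": 2}),  # carbon dioxide
    "NH3":  Counter({"N": 1, "H": 3}),  # ammonia
    "CH4":  Counter({"C": 1, "H": 4}),  # methane
    "H2O2": Counter({"H": 2, "O": 2}),  # hydrogen peroxide
    "O2":   Counter({"O": 2}),          # oxygen gas
}

def atom_tally(molecules):
    """Add up every atom in a list of molecule names."""
    total = Counter()
    for name in molecules:
        total += MOLECULES[name]
    return total

# Rearranging atoms never creates or destroys them, so the tallies on
# both sides of a rearrangement must match. Hydrogen peroxide breaking
# down into water and oxygen is one the class can check by hand:
assert atom_tally(["H2O2", "H2O2"]) == atom_tally(["H2O", "H2O", "O2"])
```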
Rearrange the atoms in soil, water, and air, and you have grass. Ever since humans first made stone tools and flint knives, we have been manipulating atoms in great thundering statistical herds by casting, milling, grinding, and chipping materials.
We rearrange the atoms in sand, for example, add a pinch of impurities, and we produce computer chips. We have gotten better and better at it, and can make more things at lower cost and with greater precision than ever before.
Even in our most precise work, we move atoms around in massive heaps and untidy piles, millions or billions of them at a time. Theoretical analyses make it clear, however, that we should be able to rearrange atoms and molecules one by one, with every atom in just the right place, much as we might arrange Lego blocks to create a model building or simple machine.
This technology, often called nanotechnology or molecular manufacturing, will allow us to make most products lighter, stronger, smarter, cheaper, cleaner, and more precise.
The consequences would be great. We could, for starters, continue the revolution in computer hardware right down to molecular-sized switches and wires. The ability to build things molecule by molecule would also let us make a new class of structural materials more than 50 times stronger than steel of the same weight: a Cadillac might weigh less than a hundred pounds; a full-size sofa could be picked up with one hand.
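The arithmetic behind the Cadillac claim is a one-liner; a sketch, assuming a typical present-day curb weight (the 4,000-pound figure is an assumption, not from the text):

```python
curb_weight_lb = 4000   # assumed: typical full-size car today
strength_ratio = 50     # from the text: 50x stronger than steel by weight
print(curb_weight_lb / strength_ratio)  # -> 80 lb for the same strength
```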
The ability to build molecule by molecule could also give us surgical instruments of such precision and deftness that they could operate on the cells, and even the molecules, from which we are made. The ability to make such products probably lies a few decades away. But theoretical and computational models provide assurances that the molecular manufacturing systems needed for the task are possible: they do not violate existing physical law.
These models also give us a feel for what a molecular manufacturing system might look like. This is an important foundation: after all, the basic idea of an electrical relay was known in the first half of the 19th century, and the concept of a mechanical computer that operated off a stored set of instructions (a program) was understood a few years later.
Today, scientists are devising numerous tools and techniques that will be needed to transform nanotechnology from computer models into reality. While most remain in the realm of theory, there appears to be no fundamental barrier to their development.
Imagine putting some wires, transistors, and other electronic components into a bag, shaking it, and pulling out a radio, fully assembled and ready to work. Chemists do something remarkably similar: mixing solutions in a beaker, a chemist lets the intrinsic attractions and repulsions of certain molecules and atoms take over. A whole art and science has evolved around arranging conditions so that atoms spontaneously assemble into particular molecular structures. Similarly, we are surrounded and inspired by products that are marvelously complex and yet very inexpensive.
Potatoes, for example, consist of tens of thousands of genes and proteins and intricate molecular machinery; yet we think nothing of eating this miracle of biology, mashed with a little butter. Potatoes, along with many other agricultural products, cost less than a dollar a pound.
The key reason: if provided with a little soil, water, air, and sunlight, a potato can make more potatoes. Likewise, if we could make a general-purpose programmable manufacturing device that was able to make copies of itself (what nanotechnology researchers call an assembler), then the manufacturing costs for both the device and anything it made could be kept low.
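The reason self-copying keeps costs low is plain doubling arithmetic; a sketch under the idealized assumption that every assembler produces one copy of itself per cycle:

```python
# Idealized exponential growth of self-copying assemblers.
devices = 1
for cycle in range(30):
    devices *= 2            # every device copies itself once per cycle
print(devices)              # 2**30: over a billion devices in 30 cycles
# Spread a fixed design cost over that many machines (and their output),
# and the cost per unit collapses, just as it does for potatoes.
```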
In self-assembly, two molecular parts whose shapes and charges complement each other meet and stick, forming a bigger part. This bigger part can combine in the same way with other parts, so that a complex whole emerges from molecular pieces. Self-assembly is not by itself sufficient, however, to make the wide range of products that nanotechnology promises. If the parts are indiscriminately sticky, for example, then stirring them together would yield messy blobs instead of precise molecular machines. We can solve this problem by holding the molecular parts in the proper position and orientation so that when they touch, they join together the way we want them to.
At the macroscopic scale, the idea that we can hold parts in our hands and assemble them by properly positioning them with respect to each other goes back to prehistory: we celebrate ourselves as the tool-using species. But the idea of holding and positioning molecules is new and almost shocking. Current proposals for molecular-scale positional devices resemble normal-sized robotic devices, but they are about one ten-millionth as big.
A molecular robotic arm could sweep systematically back and forth, adding and withdrawing atoms from a surface to build any structure the computer instructed it to. Such an arm, composed of a few million atoms, might be about 100 nanometers long and 30 nanometers around. Despite its many moving parts, it would use no lubricants: at this scale, a lubricant molecule is more like a piece of grit.
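As a control problem, the back-and-forth sweep described above is just a raster scan over a blueprint. A toy sketch of the idea (the grid, the recorded placement list, and the boustrophedon path are all illustrative, not any real molecular machine):

```python
def raster_path(nx, ny, nz):
    """Visit every site of a grid layer by layer, reversing direction on
    alternate rows so the arm sweeps back and forth (boustrophedon)."""
    for z in range(nz):
        for y in range(ny):
            xs = range(nx) if y % 2 == 0 else range(nx - 1, -1, -1)
            for x in xs:
                yield x, y, z

def build(blueprint):
    """blueprint[z][y][x] is True wherever an atom belongs."""
    placed = []
    nz, ny, nx = len(blueprint), len(blueprint[0]), len(blueprint[0][0])
    for x, y, z in raster_path(nx, ny, nz):
        if blueprint[z][y][x]:
            placed.append((x, y, z))  # stand-in for the arm placing an atom
    return placed

# A 2x2x1 "structure" with atoms at two opposite corners:
print(build([[[True, False], [False, True]]]))  # -> [(0, 0, 0), (1, 1, 0)]
```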
Such ultraminiature tools should be able to position their tips to within a small fraction of an atomic diameter. Trillions of such devices would occupy little more than a few cubic millimeters, a speck slightly larger than a pinhead.
Atoms and molecules are in a constant state of wiggle and jiggle; the higher the temperature, the more vigorous the motion. To maintain its position, therefore, a nanoscale arm must be extremely stiff. The stiffest material around is diamond.
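How stiff is "extremely stiff"? Equipartition gives the answer: a tip held by an effective spring of stiffness k jitters with mean-square displacement ⟨x²⟩ = k_B·T/k. A sketch with an illustrative stiffness (the 10 N/m value is an assumption for the example, not from the text):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
k = 10.0             # assumed effective tip stiffness, N/m

# Equipartition for a harmonic restoring force: (1/2)*k*<x^2> = (1/2)*k_B*T
rms = math.sqrt(k_B * T / k)
print(f"RMS thermal jitter: {rms:.1e} m")   # ~2e-11 m, i.e. ~0.02 nm
# An atomic diameter is roughly 2e-10 m, so a sufficiently stiff arm keeps
# its tip within a small fraction of an atomic diameter, as claimed above.
```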
The strength and lightness of a material depend on the number and strength of the bonds that hold its atoms together, and on the lightness of the atoms. The element that best fits both criteria is carbon, which is lightweight and forms stronger bonds than any other atom. The carbon-carbon bond is especially strong; each carbon atom can bond to four neighboring atoms.
In diamond, then, a dense network of strong bonds creates a strong, light, and stiff material. Indeed, just as we named the Stone Age, the Bronze Age, and the Steel Age after the materials that humans could make, we might call the new technological epoch we are entering the Diamond Age. How can a diamond device of this scale be produced? One answer comes from looking at how we grow diamond today.
In a process somewhat reminiscent of spray painting, we build up layer after layer of diamond by holding a surface in a cloud of reactive hydrogen atoms and hydrocarbon molecules. When these molecules bump into the surface, they change it, either by adding, removing, or rearranging atoms. By carefully controlling the pressure, the temperature, and the exact composition of the gas in this process, called chemical vapor deposition (CVD), we can create conditions that favor the growth of diamond on the surface.
That is the basic, basic question of any sci-fi gadget: either getting the energy necessary or dissipating the waste energy produced. Fission and fusion of atoms both involve a slight change of mass, which in turn releases or requires a tremendous amount of energy, depending on what's being fizzed or fused. These numbers add up (multiply up, really) fast.
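A worked number shows just how "slight" the mass change is and how tremendous the energy. Fusing four hydrogen atoms into one helium-4 atom, the net result of solar fusion, converts about 0.7 percent of the mass into energy via E = mc²; the atomic masses below are standard values:

```python
u_to_MeV = 931.494   # energy equivalent of one atomic mass unit, MeV
m_H  = 1.007825      # atomic mass of hydrogen-1, u
m_He = 4.002602      # atomic mass of helium-4, u

delta_m = 4 * m_H - m_He          # mass lost when 4 H become 1 He
print(delta_m / (4 * m_H))        # ~0.007: about 0.7% of the input mass
print(delta_m * u_to_MeV)         # ~26.7 MeV released per helium made
# For comparison, a chemical bond is a few eV: roughly ten million times less.
```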
Fission and fusion don't always produce energy. They only do so when going from a less stable configuration to a more stable one. For example, helium is more stable than two hydrogen atoms, so solar fusion produces energy. Your example is a particularly interesting one: you have chosen two of the most stable atoms in the universe. That means you'll have to pump in a lot of energy to change them. Iron is the last element produced in stellar fusion, just before the star goes supernova; iron is so stable that even a supergiant star cannot fuse it into something else.
For that you need a supernova. Lead is also extremely stable. Unlike iron, it's the final decay product of many radioactive isotopes. For example, start with a hunk of Uranium-238: over about 4.5 billion years it decays naturally to Thorium-234. After about a month that decays to Protactinium-234, which lasts about a minute; then you get Uranium-234 for another 245,000 years or so.
It spends about 75,000 years bumming around as Thorium-230, fools around as Radium-226 for 1,600 years, takes a quick 4-day holiday as Radon-222, and finally bounces around between various isotopes of Polonium, Bismuth, and Lead before landing on Lead-206 forever.
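Written out with standard half-lives (main branch only, minor branches omitted), the chain looks like this:

```python
# Main branch of the uranium-238 decay chain: (isotope, half-life).
U238_CHAIN = [
    ("U-238",  "4.5 billion years"),
    ("Th-234", "24 days"),
    ("Pa-234", "about a minute"),
    ("U-234",  "245,000 years"),
    ("Th-230", "about 75,000 years"),
    ("Ra-226", "1,600 years"),
    ("Rn-222", "3.8 days"),
    ("Po-218", "3 minutes"),
    ("Pb-214", "27 minutes"),
    ("Bi-214", "20 minutes"),
    ("Po-214", "0.16 milliseconds"),
    ("Pb-210", "22 years"),
    ("Bi-210", "5 days"),
    ("Po-210", "138 days"),
    ("Pb-206", "stable"),
]

for isotope, half_life in U238_CHAIN:
    print(f"{isotope:>7}  {half_life}")
```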
Those atoms are all unstable, except the last one, Lead-206. Each step of that chain, each time an atom spontaneously fizzes or fuses, releases energy. Because lead and iron are so stable, transforming them into anything else will soak up a tremendous amount of energy. Fusing iron or anything larger is a net loss of energy. You can't just go in with a pair of very tiny magical tweezers and pull atoms apart. Atoms are extremely tightly bound together.
Fission is knocking a particle away from the nucleus. The nucleus is held together by the strong force, which is, you guessed it, very, very strong, but only at very short ranges. It's strong enough to keep all those positively charged protons (which want to fly apart because of the electromagnetic force) tightly packed together. Fission must overcome the strong force with a lot of energy, like a whizzing neutron smacking into the nucleus, plus some luck with quantum tunneling to jump that last gap.
This can jostle the nucleus enough that a particle gets far enough away for electromagnetism to overcome the strong force, and it goes whizzing off. You can't just pull a bunch of protons and neutrons off lead until you get iron.
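The scale of the problem is easy to estimate with Coulomb's law: the electrostatic repulsion between just two protons at nuclear separation, using ke² ≈ 1.44 MeV·fm (the 2 fm spacing is a typical round number, assumed for the example):

```python
ke2 = 1.44    # Coulomb constant times elementary charge squared, MeV*fm
r = 2.0       # assumed separation of two protons in a nucleus, fm

print(ke2 / r)   # ~0.7 MeV of repulsion for a single proton pair
# Chemical bonds are a few eV, so this is ~100,000x stronger, and a lead
# nucleus packs in 82 protons. The strong force overpowers all of that
# repulsion at short range, which is why no "tiny tweezers" will do.
```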
Fission decay chains are pretty well defined and will do what they want. You'll need to find a chain that results in iron, or introduce more magic.

This dance, called dynamic voltage and frequency scaling (DVFS), happens continually in the processor, called a system-on-chip (SoC), that runs your phone and your laptop, as well as in the servers that back them. It's all done in an effort to balance computational performance with power consumption, something that's particularly challenging for smartphones.
The circuits that orchestrate DVFS strive to ensure a steady clock and a rock-solid voltage level despite surges in current, but they are also among the most difficult to design. That's mainly because the clock-generation and voltage-regulation circuits are analog, unlike almost everything else on your smartphone SoC. We've grown accustomed to a near-yearly introduction of new processors with substantially more computational power, thanks to advances in semiconductor manufacturing.
The analog components that enable DVFS, especially a circuit called the low-dropout voltage regulator (LDO), don't scale down the way digital circuits do and must essentially be redesigned from scratch for every new process generation. If we could instead build LDOs, and perhaps other analog circuits, out of digital components, they would be far easier to port from one generation to the next, saving significant design cost and freeing up engineers for the other problems that cutting-edge chip design has in store.
What's more, the resulting digital LDOs could be much smaller than their analog counterparts and perform better in certain ways. Research groups in industry and academia have tested at least a dozen designs over the past few years, and despite some shortcomings, a commercially useful digital LDO may soon be in reach.
Low-dropout voltage regulators (LDOs) allow multiple processor cores on the same input voltage rail (V_IN) to operate at different voltages according to their workloads. In this case, Core 1 has the highest performance requirement. Its head switch, really a group of transistors connected in parallel, is closed, bypassing the LDO and directly connecting Core 1 to V_IN, which is supplied by an external power-management IC. Cores 2 through 4, however, have less demanding workloads.
Their LDOs are engaged to supply the cores with voltages that will save power. The basic analog low-dropout voltage regulator [left] controls voltage through a feedback loop.
In the basic digital design [right], an independent clock triggers a comparator [triangle] that compares the reference voltage to V_DD. The result tells the control logic how many power PFETs to activate.
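In behavioral terms, that digital loop is only a few lines; a sketch, with all constants illustrative and the load crudely modeled as a resistor (not any production design):

```python
# Behavioral sketch of a basic digital LDO: once per clock tick, a
# comparator checks V_DD against the reference and the control logic
# switches one power PFET on or off. All values are illustrative.
V_IN, V_REF = 1.0, 0.6   # input rail and target core voltage, volts
R_PFET = 10.0            # on-resistance of a single power PFET, ohms
R_LOAD = 2.0             # the core, crudely modeled as a resistor

def v_dd(n_on):
    """Output voltage with n_on parallel PFETs feeding the load."""
    if n_on == 0:
        return 0.0
    return V_IN * R_LOAD / (R_PFET / n_on + R_LOAD)

n_on = 0
for tick in range(100):          # one comparator decision per clock tick
    if v_dd(n_on) < V_REF:
        n_on += 1                # undershoot: enable one more PFET
    elif n_on > 0:
        n_on -= 1                # overshoot: disable one PFET
print(n_on, round(v_dd(n_on), 3))  # settles into a small limit cycle near V_REF
```

The ripple of that limit cycle, and the one-step-per-tick reaction to load changes, hint at the shortcomings a practical digital design has to overcome.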
On a single sliver of silicon, a modern SoC integrates multiple CPU cores, a graphics processing unit, a digital signal processor, a neural processing unit, and an image signal processor, as well as a modem and other specialized blocks of logic. Naturally, boosting the clock frequency that drives these logic blocks increases the rate at which they get their work done. But to operate at a higher frequency, they also need a higher voltage.
Without that, transistors can't switch on or off before the next tick of the processor clock. Of course, a higher frequency and voltage come at the cost of power consumption. So these cores and logic units dynamically change their clock frequencies and supply voltages, often over a range of several tenths of a volt. These voltages are delivered to areas of the SoC along wide interconnects called rails. But the number of connections between the power-management chip and the SoC is limited, so multiple cores often end up sharing a rail.
They don't all have to get the same voltage, though, thanks to low-dropout voltage regulators. LDOs, along with dedicated clock generators, allow each core on a shared rail to operate at a unique supply voltage and clock frequency. The core requiring the highest supply voltage determines the shared V_IN value.
The power-management chip sets V_IN to this value, and this core bypasses the LDO altogether through transistors called head switches.
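Put together, the rail-sharing policy amounts to a max-and-regulate rule; a sketch with illustrative per-core voltage requests:

```python
# Cores on one shared rail: the hungriest core sets V_IN and bypasses its
# LDO through head switches; the rest regulate down. Values illustrative.
requested = {"core1": 0.9, "core2": 0.7, "core3": 0.6, "core4": 0.6}

V_IN = max(requested.values())   # the highest request fixes the rail
for core, v in sorted(requested.items()):
    if v == V_IN:
        print(f"{core}: head switch closed, runs directly at {v:.1f} V")
    else:
        print(f"{core}: LDO engaged, {V_IN:.1f} V regulated down to {v:.1f} V")
```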
To keep power consumption to a minimum, the other cores can operate at a lower supply voltage. Software determines what this voltage should be, and analog LDOs do a pretty good job of supplying it. They are compact, cheap to build, and relatively simple to integrate on a chip, as they do not require large inductors or capacitors. But these LDOs can operate only within a particular window of voltage: if the supply voltage that is most efficient for a core sits too far below V_IN, too much of the power drawn from the rail is simply burned off as heat in the LDO.
Similarly, if V_IN has already been set below a certain voltage limit, the LDO's analog components won't work properly, and the circuit can't be engaged to reduce the core supply voltage further. If the desired voltage falls inside the LDO's window, however, software enables the circuit and activates a reference voltage equal to the target supply voltage.

How does the LDO hold the core's supply at that reference? In the basic analog design, it's by means of an operational amplifier, feedback, and a specialized power p-channel field-effect transistor (PFET). The latter is a transistor that reduces its current with increasing voltage to its gate. The op amp continuously compares the circuit's output voltage (the core's supply voltage, or V_DD) to the target reference voltage. If the LDO's output voltage falls below the reference, as it would when newly active logic suddenly demands more current, the op amp reduces the power PFET's gate voltage, increasing the current and lifting V_DD back toward the reference value. The main obstacle that has limited the use of digital LDOs so far, by contrast, is their slow transient response.
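The power stakes of that window are simple to account for: a linear regulator passes the load current straight through, so whatever voltage it drops turns into heat, and its efficiency is just V_DD/V_IN. A rough sketch with illustrative numbers:

```python
# Rough power accounting for one core running behind an LDO.
V_IN, V_DD = 0.9, 0.6   # shared rail and regulated core supply, volts
I_core = 1.0            # core current at V_DD, amps (illustrative)

p_core = V_DD * I_core             # power the core actually uses: 0.6 W
p_ldo  = (V_IN - V_DD) * I_core    # heat burned in the LDO:        0.3 W
print(p_core / (V_IN * I_core))    # efficiency V_DD/V_IN: ~67%

# Still a win: dynamic power scales roughly as C*V^2*f, so running the
# core at the full 0.9 V would cost about (0.9/0.6)**2 = 2.25x the core
# power, more than the LDO's loss.
```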