The Industrial Physicist

Letters

Auto fuels

Your August/September correspondent, Carl G. Cash (page 7, letter 6), mentions a “Kipp generator” that, onboard a car, could supply hydrogen fuel while consuming metal and acid. A metal that can do that, aluminum for example, can propel a car more effectively if it is burned directly, with no hydrogen involved. One of the gains is efficiency—all the heat of the metal’s oxidation is available to the engine. Another is that the vessel for holding the metal after oxidation can be smaller, since it can hold dry oxide rather than wet.

Perhaps it won’t be easy to develop a motor that can ingest aluminum, burn it, and excrete corundum pellets; handling mechanisms that are new to the auto trade are needed at both ends. However, the result would be more desirable than hydrogen cars historically have been. Aluminum is relatively light and compact. Recent liquid-hydrogen BMWs have included hydrogen tanks that weighed 155 kg when full. A corresponding aluminum-fuel bin when “empty,” that is, when all its aluminum is in the form of oxide, would weigh only 97 kg. The contents are heavier, but by fitting in less than half the space, and being less hazardous, they allow a savings of containment mass that more than compensates.
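The mass bookkeeping here rests on simple stoichiometry: burning aluminum to corundum multiplies the fuel mass by a fixed factor and releases a fixed heat per kilogram. The sketch below is a rough check using standard handbook values (molar masses and the enthalpy of formation of Al2O3); it is illustrative only and does not attempt to reconstruct the specific tank figures quoted above.

```python
# Stoichiometry of aluminum combustion: 4 Al + 3 O2 -> 2 Al2O3.
# Handbook values; illustrative only, not the letter's exact tank figures.

M_AL = 26.98       # g/mol, aluminum
M_AL2O3 = 101.96   # g/mol, aluminum oxide (corundum)
DH_AL2O3 = 1675.7  # kJ/mol, magnitude of the enthalpy of formation of Al2O3

# Each mole of oxide locks up two moles of aluminum.
oxide_per_kg_al = M_AL2O3 / (2 * M_AL)   # kg of oxide per kg of Al burned
heat_per_kg_al = DH_AL2O3 / (2 * M_AL)   # kJ/g = MJ/kg of Al burned

print(f"1 kg Al -> {oxide_per_kg_al:.2f} kg Al2O3")      # ~1.89 kg
print(f"heat of oxidation: {heat_per_kg_al:.1f} MJ/kg")  # ~31 MJ/kg
```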

Graham R. L. Cowan
Cobourg, Ontario, Canada

[Frederick E. Pinkerton replies: An alternative fuel for transportation must generate large quantities of energy onboard; it must also be economically and thermodynamically viable. Building a sustainable energy future favors recyclable fuels, and this energy cost must be built into the analysis, “well-to-wheels-to-well.” Back-of-the-envelope estimates can often be done using information available on the Internet. Comparing the efficiencies of aluminum oxide refining and water electrolysis, for example, shows that recycling aluminum fuel for a combustion engine is about 2.5–3 times as energy-intensive as recycling hydrogen fuel for a fuel-cell engine. Moreover, direct storage of hydrogen as a liquid, as compressed gas, or in reversible solid-storage media is attractive in part because the fuel-cell reaction product is simply exhausted as water vapor; “recycling” hydrogen only requires a source of water. Fuels that generate a reaction product, including hydrolysis hydrides for generating hydrogen, require capture, removal, transportation, and reprocessing of the spent material. Uncertainties regarding the economics and energy efficiency of recycling are a significant impediment for such materials.]
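The reply’s 2.5–3x figure is the kind of estimate a reader can redo in a few lines: compare the energy spent regenerating each fuel with the work the fuel then delivers. Every number in the sketch below is an assumed, commonly quoted round value rather than a figure from the reply; with these particular choices the factor lands near 2.4, in the same ballpark.

```python
# Back-of-the-envelope comparison of two fuel-recycling loops.
# All inputs are assumed illustrative values, not figures from the reply.

KWH = 3.6  # MJ per kWh

# Aluminum loop: re-smelt Al2O3 to Al, then burn the Al in an engine.
smelt_energy = 17 * KWH   # MJ per kg Al to re-reduce the oxide (assumed)
al_heat = 31.0            # MJ/kg, heat of oxidation of aluminum
engine_eff = 0.25         # assumed combustion-engine efficiency
al_cost = smelt_energy / (al_heat * engine_eff)  # MJ spent per MJ of work

# Hydrogen loop: electrolyze water, then run the H2 through a fuel cell.
electrolysis = 55 * KWH   # MJ per kg H2 produced (assumed)
h2_lhv = 120.0            # MJ/kg, lower heating value of hydrogen
fuelcell_eff = 0.50       # assumed fuel-cell plus drivetrain efficiency
h2_cost = electrolysis / (h2_lhv * fuelcell_eff)

print(f"Al loop: {al_cost:.1f} MJ in per MJ out")             # ~7.9
print(f"H2 loop: {h2_cost:.1f} MJ in per MJ out")             # ~3.3
print(f"Al is {al_cost / h2_cost:.1f}x as energy-intensive")  # ~2.4x
```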


Climate sensitivity

Climate sensitivity (CS) is defined as the (equilibrium) global mean temperature increase from a doubling of greenhouse-gas levels. It was first set in 1979 by “hand-waving” [1] at between 1.5 and 4.5 °C and has since appeared—unchanged—in every Assessment Report of the United Nations Intergovernmental Panel on Climate Change (IPCC), from 1990 to 2001. The large range indicates the uncertainty inherent in climate models because of different assumptions, parameterizations, and approximations used in trying to simulate complicated atmospheric processes. To decide whether anthropogenic climate change is important, it is essential to narrow this range and to validate model results by comparing them with actual observations.

In the IPCC Workshop on Climate Sensitivity held July 26–29, 2004, in Paris, 14 models reported CS values of 2.0 to 5.1 °C [1]. After polling 8 current models, however, Gerald Meehl from the National Center for Atmospheric Research (NCAR) narrowed the range to 2.6 to 4.0 °C, which is remarkably close to that derived from a Massachusetts Institute of Technology (MIT) model [2]. But this apparent agreement does not constitute validation against observations—the only real test. For example, James Murphy et al. (Hadley Centre, U.K.) got a range of 2.4 to 5.4 °C (Nature 2004, 430, 768), using a technique that varied 29 parameters entering into their model. An extension of their method has now narrowed the range somewhat to agree with the new IPCC values [1].

But what is the significance of a consensus among modelers? The assembled group of IPCC modelers ascribed the narrowing of the CS range to a “better understanding of atmospheric processes” [1]. At the same time, however, Jeffrey Kiehl (NCAR) admits [1] that the models “disagree sharply about the physical processes.”

The biggest uncertainty still remains the magnitude of the cloud feedback. For example, while “the NCAR and GFDL models might agree about clouds’ net effects … they assume different mixes of cloud properties.” The GFDL (the Geophysical Fluid Dynamics Laboratory at the National Oceanic and Atmospheric Administration) model shows a three times greater increase in short-wave reflection than the NCAR model. NCAR increases the quantity of low-level clouds, while GFDL decreases it. Much of the United States gets wetter with NCAR but drier with GFDL [1].

The MIT model was not directly compared with others. But some notion of its validity can be gained from the projected (1990 to 2100) temperature increases versus latitude—as shown in Figure 2 [2]. While increases between latitudes 45 °N and 45 °S are a modest 1 to 2 °C (depending on whether a certain emission curtailment policy is applied), the median increase in the higher-latitude regions is projected to be between 4 and 6 °C—or up to 0.5 °C per decade. In other words, by now we should have seen an increase (since 1990) of 0.7 °C. But Arctic temperatures show a slight cooling trend (since peaking around 1940); the Antarctic shows a strong cooling trend.

We conclude, therefore, that climate models continue to be an unrealistic exercise—of moderate usefulness but, absent validation, entirely unsuited for reliable predictions of future climate change. Alan Robock’s (Rutgers University, New Jersey) claim [1] that “we have gone from handwaving to real understanding” is ludicrous. The claimed convergence of results on climate sensitivity is nothing more than an illusion. Modelers are still unable to handle feedback from clouds and continue to ignore the even more problematic issue of water-vapor feedback [3].

They also resist accepting observational evidence [4, 5]. A climate sensitivity of ~3 °C would imply a current temperature trend at the surface of ~0.3 °C per decade and up to double that in the troposphere (according to IPCC). But satellite microwave radiometers and balloon-borne radiosondes agree on a near absence of tropospheric warming. In addition, since the atmospheric level of total (CO2-equivalent) greenhouse gases has already increased by 50%, one would expect to see a temperature rise since 1940 of about 2 °C—taking into account that the temperature should increase approximately logarithmically with CO2 concentration. The absence of such observed increases suggests a climate sensitivity of perhaps 0.5 °C and certainly not more than 1.0 °C—only about 20–30% of the model “consensus.” Increasing levels of greenhouse gases will lead to some global warming, but its magnitude seems small enough to cause no significant problem.
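The arithmetic behind this paragraph follows from the standard logarithmic approximation, warming = S · log2(C/C0), where S is the climate sensitivity. A minimal sketch, in which the observed-rise values of 0.3 and 0.6 °C are illustrative assumptions chosen only to reproduce the letter’s stated bounds, not measurements:

```python
import math

# Logarithmic scaling of warming with greenhouse-gas concentration:
# dT = S * log2(C / C0), where S is the sensitivity (warming per doubling).

def warming(sensitivity, concentration_ratio):
    """Equilibrium temperature rise for a given CO2-equivalent increase."""
    return sensitivity * math.log2(concentration_ratio)

# A 50% rise in CO2-equivalent gases is log2(1.5) ~ 0.585 of a doubling,
# so a sensitivity of 3 C predicts roughly the letter's "about 2 C":
print(f"{warming(3.0, 1.5):.2f} C expected")   # 1.75 C

# Inverting: the sensitivity implied by an assumed observed rise since 1940.
for observed in (0.3, 0.6):   # illustrative values, not measurements
    print(f"{observed} C observed -> S = {observed / math.log2(1.5):.1f} C")
```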

S. Fred Singer
Science & Environmental Policy Project
Arlington, Virginia

References

  1. Kerr, R. A. Three Degrees of Consensus. Science 2004, 305, 932–934.
  2. Forest, C.; Webster, M.; Reilly, J. Narrowing Uncertainty in Global Climate Change. Ind. Physicist 2004, 4, 20–23.
  3. Lindzen, R. S. Some Coolness Concerning Global Warming. Bull. Am. Meteorol. Soc. 1990, 71, 288–299.
  4. Douglass, D. H.; Pearson, B. D.; Singer, S. F. Altitude dependence of atmospheric temperature trends: Climate models versus observation. Geophys. Res. Lett. 2004, 31, L13208.
  5. Douglass, D. H.; et al. Disparity of tropospheric and surface temperature trends: New evidence. Geophys. Res. Lett. 2004, 31, L13207.

[C. Forest, M. Webster, and J. Reilly reply: Uncertainties in cloud feedbacks and in the role of aerosols are widely recognized as critical in modeling climate change. These uncertainties create particular problems in forecasting the fine details of climate change, such as changes in the regional pattern of rainfall or its intensity. The chaotic nature of weather likely means that there are severe limits to the predictability of climate at finer scales. Our focus, however, is on decision-making under uncertainty. There is a chance that climate sensitivity is low and that observed changes are the result of natural processes or natural variability; if this could be known with certainty, we should not waste resources shifting away from fossil fuels because of feared climate effects. There is also a chance that natural variability or natural processes have masked warming that we otherwise would have observed, in which case climate sensitivity may be higher than casual examination of recent trends would suggest.

The decision-making question is how to weight these various possibilities. We agree with Singer that “apparent agreement [among current models] does not constitute validation against observations,” and thus deriving an estimate of likely climate sensitivity based on a poll of current models is not very meaningful [2].

However, we do not know what to make of Singer’s claim that our results are “remarkably close” to a range he ascribes to a recent workshop. Our article includes a probability density function (pdf) for climate sensitivity with lower and upper limits of 0.5 °C and 10.0 °C, respectively [1]. It is thus possible to cite a range of 2.8 to 3.0 °C, or 1.0 to 8.0 °C from this graphic as easily as it is to note that it contains the range of 2.6 to 4.0 °C cited by Singer. Such ranges, absent information on the quantitative likelihood of the actual value falling within them, have very little content. Casually assigning ranges can lead to conclusions that uncertainty has either narrowed or increased when all that has changed is the likelihood for which one is giving a range [3].

The more important conclusion from our work is that the 2.6 to 4.0 °C range contains only about half of the area under the pdf, meaning there is a 50% chance that actual climate sensitivity is outside this range. This pdf was derived from observation, by specifically determining climate sensitivities that were consistent with spatio-temporal patterns of temperature change in the atmosphere and ocean over the last half of the past century, given an estimate of natural variability and including uncertainty in other forcings [4]. It is our attempt to make projections that are consistent with observations, where consistency is defined statistically and admits the possibility of multiple climate forcings, each uncertain and operating potentially in different directions (warming or cooling).
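The distinction the reply draws, between a quoted range and the probability mass it contains, is easy to make concrete. The sketch below uses an arbitrary right-skewed lognormal as a stand-in for the published pdf (the actual Forest et al. distribution differs), with assumed shape parameters tuned so that the 2.6 to 4.0 °C range holds about half the mass:

```python
import numpy as np

# Stand-in pdf for climate sensitivity: a lognormal with assumed parameters,
# NOT the actual Forest et al. distribution. The point is mechanical: a
# range means little without the probability mass it contains.

mu, sigma = np.log(3.0), 0.30       # assumed shape parameters
s = np.linspace(0.5, 10.0, 2000)    # sensitivity grid, deg C
ds = s[1] - s[0]
pdf = (np.exp(-(np.log(s) - mu) ** 2 / (2 * sigma**2))
       / (s * sigma * np.sqrt(2 * np.pi)))
pdf /= pdf.sum() * ds               # renormalize over the truncated support

def mass(lo, hi):
    """Probability that the sensitivity lies in [lo, hi]."""
    sel = (s >= lo) & (s <= hi)
    return pdf[sel].sum() * ds

for lo, hi in [(2.6, 4.0), (1.0, 8.0), (2.8, 3.0)]:
    print(f"P({lo} <= S <= {hi}) = {mass(lo, hi):.2f}")
# With these parameters: ~0.51, ~1.00, ~0.09 -- three "ranges," three stories.
```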

References

  1. Forest, C.; Webster, M.; Reilly, J. Narrowing Uncertainty in Global Climate Change. Ind. Physicist 2004, 4, 20–23.
  2. Webster, M.; Forest, C.; et al. Uncertainty analysis of climate change and policy response. Climatic Change 2003, 61 (3), 295–430, especially Fig. 5, p. 314.
  3. Reilly, J.; Stone, P. H.; et al. Uncertainty and climate change assessments. Science 2001, 293, 430–433.
  4. Forest, C.; Stone, P.; et al. Quantifying uncertainties in climate system properties with the use of recent climate observations. Science 2002, 295, 113–117.]


Quantum measurement

The process discussed in “Quantum measurement” (August/September, pp. 8–10, item 2) does seem correctly described as a measurement extended in such a way that the collapse of the wavefunction lasted more than 100 µs. However, the suggestion that the outcomes were known in advance, in a deterministic manner, does not seem correct. I agree that the system does always end up in the same final state. However, the article notes that the amount of current needed to return the z component of the spin to 0 can be used as a measurement of the change in the z component of the ambient magnetic field. This indicates that the measurement being performed is not a measurement of the final ambient z orientation of the magnetic field. In the Schrödinger’s cat view of the experiment, the measurement being performed is of the initial z orientation of the ambient magnetic field. The initial value of that quantity remains locked up in the quantum-mechanical probability distributions along with the cat.

To make a full-blown Schrödinger’s cat contraption, let the vial break and the cat die when and if the total current needed to rotate the orientation exceeds some predefined critical value. Alternatively, one could arrange the contraption such that the vial will break at the time the z orientation of the magnetic field is within a critical angle with respect to z = 0. In that case, the cat will definitely die, but there is still a great deal of uncertainty about the amount that a life insurance company should charge for a policy on the cat.

Joseph O. West
Department of Physics
Indiana State University
Terre Haute, Indiana

[JM Geremia replies: The confusion involves the quantity being measured in our experiment. It is the spin angular momentum of the atoms that we detect, not the magnetic field. In fact, quantum mechanics dictates that one cannot “measure” a magnetic field per se; rather, one can only infer the presence of a magnetic field by estimating c-number (non-operator) parameters in a quantum Hamiltonian that couples a measurable quantum system to the field. That is, magnetic fields are detected by observing their influence on a probe to which a quantum measurement can be applied. In this sense, application to magnetometry involves the possibility of using our atomic spin system for precision estimation of an ambient magnetic field. We did not perform such a procedure in the work being discussed here.

I would like to emphasize that our measurement is of the z component of the collective spin angular momentum of a cloud of laser-cooled cesium atoms. We prepare the atomic quantum system into an initial state, one which is not an eigenstate of the measurement we later perform. That is, the initial spin system is an eigenstate of the x component of the spin operator; however, we then measure the z component, which is a complementary observable. When we perform this z-component measurement, the projection postulate of quantum mechanics suggests that we should get a random measurement outcome and a corresponding a posteriori quantum state dictated by the particular outcome. We demonstrate that we can reliably produce the same outcome and a posteriori quantum state with a variance approximately 10 times smaller than that predicted by simple projection. It is the inclusion of a unitary feedback control step performed during the quantum measurement that makes this increased certainty possible. Do not worry. All of quantum mechanics is still intact in this measurement. Our results are simply an experimental confirmation of quantum trajectory theory—a theory of continuous quantum measurement and feedback control that has been around for nearly 20 years. Quantum trajectory theory allows for the possibility that one can perform an experiment like ours without violating the uncertainty principle. The measurement back-action (or uncertainty related to performing a measurement) is simply directed into unobserved conjugate variables (such as the other components of the atomic spin).
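The projection-postulate statement above, that measuring an observable on a state that is not one of its eigenstates yields a random outcome, can be illustrated with a toy spin-1/2 calculation. This is only a stand-in for intuition: the actual experiment uses the collective spin of many cesium atoms plus continuous measurement and feedback, none of which the sketch attempts.

```python
import numpy as np

# Toy spin-1/2 illustration (units of hbar): prepare the +x eigenstate,
# then measure S_z. Born's rule gives 50/50 outcomes of +1/2 and -1/2.
# This is NOT a model of the collective-spin feedback experiment itself.

rng = np.random.default_rng(0)

plus_x = np.array([1.0, 1.0]) / np.sqrt(2)   # eigenstate of S_x, not of S_z
z_basis = np.eye(2)                          # rows: <up_z|, <down_z|

probs = np.abs(z_basis @ plus_x) ** 2        # Born rule -> [0.5, 0.5]
outcomes = rng.choice([+0.5, -0.5], size=10_000, p=probs)

print("P(up), P(down):", probs)
print("mean S_z:", outcomes.mean())   # ~0: each outcome is individually random
print("variance:", outcomes.var())    # 0.25: the full projection noise that
                                      # the feedback scheme reduces ~10x
```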

The interesting result of our experiment is that these conjugate variables will be disturbed regardless of how the measured outcome is obtained. We demonstrated that one can use this property to advantage by custom-tailoring the manner in which the measurement uncertainty is “transferred” between noncommuting observables via a feedback control step. That is, any measurement of the z spin component will increase the uncertainty of the y spin component. We performed a measurement that allowed us to remove some uncertainty in the z outcome by compensating in the y component. This experiment has nothing to do with Schrödinger’s cat. Our experiment steers one away from the ontological quantum-state interpretation that leads to the Schrödinger’s cat paradox in the first place. One can find more information regarding quantum measurement theory via references on quantum trajectory theory, quantum filtering theory, quantum Kalman filtering, and real-time quantum feedback control theory.

JM Geremia
Physics and Control & Dynamical Systems
California Institute of Technology
Pasadena, CA 91125]


Corrections

In the August/September New Products section, page 38, under “High-Speed Camera,” the second sentence, “Reducing the area imaged increases the camera’s resolution,” should read, “Reducing the area imaged further increases the image rate.” Also, the art in the “Micromachining” item on page 40 should have been placed in the “Wafer-Dicing System” item on page 39. These corrections have been made online.

Mail letters to The Editor, The Industrial Physicist, One Physics Ellipse, College Park, MD 20740-3842; fax (301-209-0842); e-mail; or respond from our Web site.

 
