Introduction

Power Generation

If you cannot tap your propulsion system for electrical power, you will need a separate power plant (or it's going to be real dark inside your spacecraft).

Typically, power systems account for about 28% of the dry mass of NASA spacecraft.

Spacecraft power systems have three subsystems:

  • Power Generation/Conversion: generating power
  • Energy Storage: storing power for future use
  • Power Management and Distribution (PMAD): routing the power to equipment that needs it

There are a couple of parameters used to rate power plant performance:

  • Alpha: (kg/kW) power plant mass in kilograms divided by kilowatts of output power. So if a solar power array had an alpha of 90, and you needed 150 kilowatts of output, the array would mass 90 * 150 = 13,500 kg or 13.5 metric tons (see the sketch after this list)
  • Specific Power: (W/kg) watts of power divided by power plant mass in kilograms (i.e., (1 / alpha) * 1000)
  • Specific Energy: (Wh/kg) watt-hours of energy divided by power plant mass in kilograms
  • Energy Density: (Wh/m³) watt-hours of energy divided by power plant volume in cubic meters
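
To make these ratings concrete, here is a minimal Python sketch of the conversions above (the function names are my own, invented for illustration):

    # Minimal sketch: converting between the power plant ratings above.
    # The example numbers are the document's own, not real hardware data.

    def plant_mass_kg(alpha_kg_per_kw: float, power_kw: float) -> float:
        """Mass of a power plant from its alpha rating."""
        return alpha_kg_per_kw * power_kw

    def specific_power_w_per_kg(alpha_kg_per_kw: float) -> float:
        """Specific power is the reciprocal of alpha, scaled to watts."""
        return (1.0 / alpha_kg_per_kw) * 1000.0

    print(plant_mass_kg(90, 150))        # 13500.0 kg, the solar array example
    print(specific_power_w_per_kg(90))   # ~11.1 W/kg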

NASA has a rather comprehensive report on various spacecraft power systems here. The executive summary states that currently available spacecraft power systems are "heavy, bulky, not efficient enough, and cannot function properly in some extreme environments."


Energy Harvesting

Energy Harvesting or energy scavenging is a pathetic "waste-not-want-not" strategy when you are desperate to squeeze every milliwatt of power out of your system. This includes waste engine heat (gradients), warm liquids, kinetic motion, vibration, and ambient radiation. This is generally used for such things as enabling power for remote sensors in places where no electricity is readily available.

SPIN POWER

(ed note: the boostship Agamemnon suffers a catastrophic failure to its engines {sabotage}, and the cargo tug Slingshot is chartered to go on a rescue mission. The engineer of the Agamemnon has managed to jury-rig a form of energy harvesting for emergency power.)

His face didn't change. "Experienced cadets, eh? Well, we'd best be down to it. Mr. Haply will show you what we've been able to accomplish." They'd done quite a lot. There was a lot of expensive alloy bar-stock in the cargo, and somehow they'd got a good bit of it forward and used it to brace up the bows of the ship so she could take the thrust. "Haven't been able to weld it properly, though," Haply said. He was a young third engineer, not too long from being a cadet himself. "We don't have enough power to do welding and run the life support too."

Agamemnon's image was a blur on the screen across from my desk. It looked like a gigantic hydra, or a bullwhip with three short lashes standing out from the handle. The three arms rotated slowly. I pointed to it. "Still got spin on her."

"Yes." Ewert-James was grim. "We've been running the ship with that power. Spin her up with attitude jets and take power off the flywheel motor as she slows down."

I was impressed. Spin is usually given by running a big flywheel with an electric motor. Since any motor is a generator, Ewert-James's people had found a novel way to get some auxiliary power for life-support systems. (basically they are converting attitude jet fuel into electricity)


Agamemnon didn't look much like Slingshot. We'd closed to a quarter of a klick, and steadily drew ahead of her; when we were past her, we'd turn over and decelerate, dropping behind so that we could do the whole cycle over again.

Some features were the same, of course. The engines were not much larger than Slingshot's and looked much the same, a big cylinder covered over with tankage and coils, acceleration outports at the aft end. A smaller tube ran from the engines forward, but you couldn't see all of it because big rounded reaction mass canisters covered part of it.

Up forward the arms grew out of another cylinder. They jutted out at equal angles around the hull, three big arms to contain passenger decks and auxiliary systems (the three arms are part of the independent centrifuge). The arms could be folded in between the reaction mass canisters, and would be when we started boosting. All told she was over four hundred meters long, and with the hundred-meter arms thrust out she looked like a monstrous hydra slowly spinning in space (design based on the Pilgrim Observer).


     The fuel transfer was tough. We couldn't just come alongside and winch the stuff over. At first we caught it on the fly: Agamemnon's crew would fling out hundred-ton canisters, then use the attitude jets to boost away from them, not far, but just enough to stand clear.
     Then I caught them with the bow pod. It wasn't easy. You don't need much closing velocity with a hundred tons before you've got a hell of a lot of energy to worry about. Weightless doesn't mean massless.
     We could only transfer about four hundred tons an hour that way. After the first ten-hour stretch I decided it wouldn't work. There were just too many ways for things to go wrong.
     "Get rigged for tow," I told Captain Ewert-James. "Once we're hooked up I can feed you power, so you don't have to do that crazy stunt with the spin. I'll start boost at about a tenth of a centimeter. It'll keep the screens hot, and we can winch the fuel pods down."
     He was ready to agree. I think watching me try to catch those fuel canisters, knowing that if I made a mistake his ship was headed for Saturn and beyond, was giving him ulcers.
     First he spun her hard to build up power, then slowed the spin to nothing. The long arms folded alongside, so that Agamemnon took on a trim shape. Meanwhile I worked around in front of her, turned over and boosted in the direction we were traveling, and turned again.
     The dopplers worked fine for a change. We hardly felt the jolt as Agamemnon settled nose to nose with us. Her crewmen came out to work the clamps and string lines across to carry power. We were linked, and the rest of the trip was nothing but hard work.

From TINKER by Jerry Pournelle (1975)

Fuel Cells

The general term is "chemical power generation", which means power generated by chemical reactions. This is most commonly seen in the form of fuel cells, though occasionally there are applications like the hydrazine-fired gas turbines (auxiliary power units) that the Space Shuttle used to hydraulically actuate its thrust vector control.

Fuel cells basically consume hydrogen and oxygen to produce low voltage electricity and water. They are quite popular in NASA manned spacecraft designs. Each PC17C fuel-cell stack in the Shuttle Orbiter has an alpha of about 10 kg/kW and a specific power of 98 W/kg: a total mass of 122 kg and an output of 12 kW. A stack produces about 2.7 kilowatt-hours per kilogram of hydrogen+oxygen consumed (about 70% efficient), and has a service life of under 5,000 hours. The water output can be used in the life support system.
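
As a sanity check on those figures, here is a minimal Python sketch (constants taken from the paragraph above; the names are mine):

    # Minimal sketch using only the Shuttle PC17C figures quoted above.
    STACK_OUTPUT_KW = 12.0       # electrical output per stack
    KWH_PER_KG_REACTANT = 2.7    # energy per kg of hydrogen+oxygen consumed

    def reactant_kg_per_hour(load_kw: float) -> float:
        """Hydrogen+oxygen consumption rate at a given electrical load."""
        return load_kw / KWH_PER_KG_REACTANT

    # One stack at full load burns ~4.4 kg of H2+O2 per hour, all of
    # which emerges as water for the life support system.
    print(reactant_kg_per_hour(STACK_OUTPUT_KW))   # ~4.44 kg/h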

Different applications will require fuel cells with different optimizations. Some will need high specific power (200 to 400 W/kg), some will need long service life (greater than 10,000 hours), and others will require high efficiency (greater than 80% efficient).

Solar Thermal Power

Back in the 1950's, on artist conceptions of space stations and space craft, one would sometimes see what looked like mirrored troughs. These were "mercury boilers", a crude method of harnessing solar energy in the days before photovoltaics. The troughs had a parabolic cross section and focused the sunlight on tubes that heated streams of mercury. The hot mercury was then used in turbines to generate electricity.

These gradually vanished from artist conceptions and were replaced by nuclear reactors, generally in the form of a long framework boom sticking out of the hub, with a radiation shadow shield big enough to shadow the wheel.

The technical name is "solar dynamic power", where mirrors concentrate sunlight on a boiler. "Solar static power" refers to photovoltaic solar cells.

Such systems are generally useful for power needs between 20 kW and 100 kW. Below 20 kW a solar cell panel is better. Above 100 kW a nuclear fission reactor is better.

They typically have an alpha of 170 to 250 kg/kW, a collector output of 130 to 150 watts per square meter at Terra orbit (i.e., about 11% efficient), and a radiator capacity of 140 to 200 watts per square meter.

He wasn't surprised when he was assigned to the job of helping paint the solar mirror. This was a big trough that was to run all around the top of the station, set to face the Sun. It was curved to focus the rays of the Sun on a blackened pipe that ran down its center. In the pipe, mercury would be heated into a gas, at a temperature of thirteen hundred degrees Fahrenheit. This would drive a highly efficient "steam" turbine, which would drive a generator for the needed power. When all its energy was used, the mercury would be returned to the outside, to cool in the shadow of the mirror, condensing back to a liquid before re-use.

It was valuable work, and the station badly needed a good supply of power. But painting the mirror was done with liquid sodium. It was a silvery metal that melted easily at a low temperature. On Earth, it was so violently corrosive that it could snatch oxygen out of water. But in a vacuum, it made an excellent reflective paint. The only trouble was that it had to be handled with extreme caution.


It was nasty work. A drop on the plasticized fabric of the space suits would burn a hole through them almost at once. Or a few drops left carelessly on the special gloves they wore for the job could explode violently if carried into the hut, to spread damage and dangerous wounds everywhere nearby.

Jim worked on cautiously, blending his speed with safety in a hard-earned lesson. But the first hour after the new man came out was enough to drive his nerves to the ragged edge. At first, the man began by painting the blackened pipe inside the trough.

Jim explained patiently that the pipe was blackened to absorb heat, and that the silver coating ruined it. He had to go back and construct a seat over the trough on which he could sit without touching the sodium, and then had to remove the metal chemically.

Finally, he gave up. The man was one of those whose intelligence was fine, but who never used it except for purely theoretical problems. He was either so bemused by space or so wrapped up in some inner excitement over being there that he didn't think—he followed orders blindly.


"All right," he said finally. "Go back to Dan and tell him Terrence and I can do it alone. Put your paint in the shop, and mark it dangerous. I'll clean up when I come in."

He watched the man leave, and turned to the boy who had been working with him.


Then suddenly Terrence dropped his brush into the sodium and pointed, his mouth open and working silently.

Jim swung about to see what was causing it, and his own mouth jerked open soundlessly.

The roof of the hut ahead of them was glowing hotly, and as they watched, it suddenly began crumbling away, while a great gout of flame rushed out as the air escaped. Oxygen and heat were fatal to the magnesium alloy out of which the plates were made.


The fire had been coming from the second air lock, installed when the hut was extended. The old one still worked, and men were inside the hut, laboring in space suits. An automatic door had snapped shut between the two sections at the first break in the airtight outer sheathing. But there were still men inside where the flames were, and they were being dragged out of a small emergency lock between the two sections.

One of them yanked off his helmet to cough harshly. His face was burned, but he seemed unaware of it. "Kid came through the lock with a can of something. He tripped, spilled it all over—and then it exploded. We tried to stop it, but it got away. The kid—"

He shuddered, and Jim found that his own body was suddenly weak and shaky. The third man must have done it. He'd taken the orders too literally—he'd gone to report to Dan first, before putting away the sodium. A solid hour's lecture on the dangers of the stuff had meant nothing to him.

From Step to the Stars by Lester Del Rey (1954)

Solar Photovoltaic Power

At Terra's distance from the Sun, solar energy is about 1,366 watts per square meter. This energy can be converted into electricity by photovoltaics. Of course, the power density goes down the farther from the Sun the power array is located.

The technical name is "solar static power", where photovoltaic solar cells convert sunlight into electricity. "Solar dynamic power" is where mirrors concentrate sunlight on a boiler.

Solar power arrays have an alpha ranging from 100 down to 1.4 kg/kW. Body-mounted rigid panels have an alpha of 16 kg/kW, while flexible deployable arrays have an alpha of 10 kg/kW. Most NASA ships use multi-junction solar cells, which have an efficiency of 29%, but a few use silicon cells with an efficiency of 15%. Most NASA arrays output from 0.5 to 30 kW.


Some researchers (Dhere, Ghongadi, Pandit, Jahagirdar, Scheiman) have claimed to have achieved 1.4 kg/kW in the lab by using CuIn1-xGaxS2 (copper indium gallium sulfide) thin films on titanium foil. Rob Davidoff is of the opinion that a practical design with rigging and everything will be closer to 4 kg/kW, but that is still almost three times better than conventional solar arrays.

In 2015 researchers at Georgia Institute of Technology demonstrated a photovoltaic cell using an optical rectenna. They estimate that such rectennas could have a power conversion efficiency of up to 40% and a lower cost than silicon cells. No word on the alpha, though.


The International Space Station uses 14.5% efficient large-area silicon cells. Each of the Solar Array Wings are 34 m (112 ft) long by 12 m (39 ft) wide, and are capable of generating nearly 32.8 kW of DC power. 19% efficiency is available with gallium arsenide (GaAs) cells, and efficiencies as high as 30% have been demonstrated in the laboratory.

To power an ion drive or other electric propulsion system with solar cells is going to require an array capable of high voltage (300 to 1000 volts), high power (greater than 100 kW), and a low alpha (2 to 1 kg/kW).

Obviously the array works best when oriented face-on to the Sun, and unshadowed. As the off-angle increases, the available power decreases in proportion to the cosine of the angle (e.g., if the array was 75° away from face-on, its power output would be cos(75°) = 0.2588, or about 26% of maximum). Solar cells also gradually degrade due to radiation exposure (say, from 8% to 17% power loss over a five-year period if the panel is inhabiting the deadly Van Allen radiation belt, much less if it is in free space).
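
A minimal Python sketch of the cosine pointing loss (flat array assumed, degradation ignored):

    import math

    # Minimal sketch of the pointing (cosine) loss described above.
    def array_output_w(rated_w: float, off_angle_deg: float) -> float:
        """Power from a flat array tilted off_angle_deg away from face-on."""
        return rated_w * math.cos(math.radians(off_angle_deg))

    print(array_output_w(1000, 0))    # 1000 W, face-on
    print(array_output_w(1000, 75))   # ~259 W, the 75-degree example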

Typically solar power arrays are used to charge batteries (so you have power while in the shadow of a planet). The array output voltage should be about 20% higher than the battery voltage, or the batteries will not charge reliably. Sometimes the array is instead used to run a regenerative fuel cell.


Solar Power

Planet         Sol Dist (AU)   Power Factor   Power (W/m²)
☿ Mercury      0.387           6.677          9,121
Venus          0.723           1.913          2,613
⊕ Terra        1.000           1.000          1,366
Mars           1.520           0.433          591
⚶ Vesta        2.362           0.179          245
⚵ Juno         2.670           0.140          192
⚳ Ceres        2.768           0.131          178
⚴ Pallas       2.772           0.130          178
Start LILT     3.000           0.111          152
♃ Jupiter      5.200           0.037          51
♄ Saturn       9.580           0.011          15
♅ Uranus       19.200          0.003          4
♆ Neptune      30.050          0.001          2

POWER DROP-OFF

Like all non-coherent light, solar energy is subject to the inverse square law. If you double the distance to the light source, the intensity drops to 1/4.

Translation: if you travel farther from the Sun than Terra orbit, the solar array will produce less electricity. Contrariwise, if you travel closer to the Sun, the array will produce more electricity. This is why some science fiction novels have huge solar energy farms on Mercury: to produce commercial quantities of antimatter, power beamed-propulsion networks, and run other power-hungry operations.

As a general rule:

Es = 1366 * (1 / Ds²)

where:

  • Es = available solar energy (watts per square meter)
  • Ds = distance from the Sun (astronomical units)
  • 1366 = Solar Constant (watts per square meter)

Remember that you divide distance in meters by 1.496e11 to obtain astronomical units. Divide distance in kilometers by 1.496e8 to obtain astronomical units.

Example

What is the available solar energy at the orbit of Mars?

Mars orbits the Sun at a distance of 2.28e11 meters. That is 2.28e11 / 1.496e11 = 1.52 astronomical units. So the available solar energy is:

  • Es = 1366 * (1 / Ds²)
  • Es = 1366 * (1 / 1.52²)
  • Es = 1366 * (1 / 2.31)
  • Es = 1366 * 0.433
  • Es = 591 watts per square meter

The same math shows that the available solar energy around Saturn is a pitiful 15 W/m². That's available energy; if you tried harvesting it with the 29% efficient multi-junction cells mentioned above, you would be lucky to get 4.4 W/m². Which is why the Cassini probe used RTGs.

Special high-efficiency cells are needed in order to harvest worthwhile amounts of solar energy in low-intensity/low-temperature (LILT) conditions, defined as the solar array being located 3 AU from Sol or farther (i.e., about 150 watts per square meter or less, one-ninth the energy available at Terra's orbit).
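
Here is a minimal Python sketch of the drop-off equation, reproducing the Mars and Saturn figures above (the 29% efficiency is the multi-junction cell figure quoted earlier):

    # Minimal sketch of the solar power drop-off equation.
    SOLAR_CONSTANT = 1366.0   # W/m^2 at 1 AU

    def available_solar_w_per_m2(dist_au: float) -> float:
        """Available solar energy at a given distance from Sol."""
        return SOLAR_CONSTANT / dist_au**2

    def harvested_w_per_m2(dist_au: float, efficiency: float = 0.29) -> float:
        """Electrical output per square meter of array at that distance."""
        return available_solar_w_per_m2(dist_au) * efficiency

    print(available_solar_w_per_m2(1.52))   # ~591 W/m^2 at Mars
    print(available_solar_w_per_m2(9.58))   # ~15 W/m^2 at Saturn
    print(harvested_w_per_m2(9.58))         # ~4.3 W/m^2 with 29% cells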

Equation Derivation

If you are curious where the "1,366 W/m²" Solar Constant value in the equation came from (for instance, if you want to calculate it for another star), read on. Otherwise skip this section.

It starts with the Stefan–Boltzmann law:

j = σ * T⁴

where:

  • j = total energy radiated from a black body (W/m²)
  • σ = Stefan–Boltzmann constant (5.670367×10⁻⁸ W·m⁻²·K⁻⁴)
  • T = thermodynamic temperature (K)

The Sun's thermodynamic temperature is 5,778 K (effective temperature in the photosphere). Doing the math reveals that j = 63,200,617 W/m².

To calculate what this is at Terra's orbit (1 astronomical unit) we use the inverse-square law. For this purpose the equation is:

P1au = (Dss² / Dau²) * j

where:

  • P1au = solar power at 1 AU, i.e., the solar constant (W/m²)
  • Dss = distance from the center of the Sun to its surface, i.e., the Sun's radius (AU)
  • Dau = solar constant distance (AU) = 1 AU for all stars, by definition
  • j = total energy radiated, from the first equation (W/m²)
  • x² = square of x, that is x * x

The Sun's radius is 696,342 km. Dividing by 1.496e8 tells us the Sun's radius is 0.00465 AU (because the equation wants both distances in AU). Plugging it all into the equation:

P1au = (Dss² / Dau²) * j
P1au = (0.00465² / 1²) * 63,200,617
P1au = (0.0000216225 / 1) * 63,200,617
P1au = 0.0000216225 * 63,200,617
P1au = 1,367 W/m²

which is close enough for government work to 1,366 W/m².


To calculate this for other stars you will need that star's thermodynamic temperature and radius. If you do not want to do the math, I made a quick table for you.

  1. Refer to the Star Table
  2. Look up the star's Spectral Class (the Sun is a G2 star)
  3. For thermodynamic temperature T, use the value for Teff (G2 is 5,770)
  4. For Dss, take the value for R (G2 is 1.0) and multiply it by 0.00465 to get the star's radius in AU
  5. Calculate P1au using the two equations above

The solar array drop-off equation for that star will then be:

Es = P1au * (1 / Ds²)


For example, the star Sirius A is spectral class A0. From the table, Teff is 10,000 K; use that for T. From the table, R is 2.7; times 0.00465 means Dss is 0.01257.

Doing the math, j = 567,036,700 and P1au = 89,561. So for Sirius the solar array drop-off equation is:

Es = 89,561 * (1 / Ds²)

This means that a spacecraft with a solar array orbiting 5 astronomical units from Sirius (orbital radius of Jupiter) could harvest 3,582 watts per square meter, or about 2.6 times as much as it could get in the solar system at Terra orbit.
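
A minimal Python sketch of the whole derivation, usable for any star given its effective temperature and radius (the printed values are the Sol and Sirius A examples above):

    # Minimal sketch: solar constant and drop-off for an arbitrary star.
    SIGMA = 5.670367e-8       # Stefan-Boltzmann constant (W m^-2 K^-4)
    SUN_RADIUS_AU = 0.00465   # Sol's radius expressed in AU

    def p1au(t_eff_k: float, radius_sols: float) -> float:
        """Solar constant (W/m^2) at 1 AU from a star, given its effective
        temperature and its radius in units of Sol's radius."""
        j = SIGMA * t_eff_k**4               # Stefan-Boltzmann law
        d_ss = radius_sols * SUN_RADIUS_AU   # stellar radius in AU
        return (d_ss**2 / 1.0**2) * j        # inverse-square law out to 1 AU

    def available_w_per_m2(t_eff_k: float, radius_sols: float, dist_au: float) -> float:
        return p1au(t_eff_k, radius_sols) / dist_au**2

    print(p1au(5778, 1.0))                      # ~1,367 W/m^2 for Sol
    print(available_w_per_m2(10000, 2.7, 5.0))  # ~3,580 W/m^2, the Sirius A example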


A more exotic variant on solar cells is the beamed power concept. This is where the spacecraft has a solar cell array, but back at home in orbit around Terra (or Mercury) is a huge power plant and a huge laser. The laser is fired at the solar cell array, thus energizing it. It is essentially an astronomically long electrical extension cord constructed of laser light. It shares the low mass advantage of a solar power array, and has the advantage over solar power that the energy delivered per square meter of array can be much larger.

It has the disadvantage that the spacecraft is utterly at the mercy of whoever is currently running the laser battery. It has the further disadvantage of being frowned upon by the military, since they take a dim view of weapons-grade lasers in civilian hands. Unless the military owned the power lasers in the first place.

Radioisotope Thermoelectric Generators

Radioisotope thermoelectric generators (RTGs) are slugs of radioisotope (usually plutonium-238 in the form of plutonium oxide) that heat up due to nuclear decay, surrounded by thermocouples that turn the heat gradient into electricity. (They do NOT turn the heat itself into electricity; that is why an RTG has heat radiator fins on it.)

There are engineering reasons that currently make it impractical to design an individual RTG that produces more than one kilowatt. However nothing is stopping you from using several RTGs in your power room. Engineers are trying to figure out how to construct a ten kilowatt RTG.

Current NASA RTGs have a useful lifespan of over 30 years.

Currently RTGs have an alpha of about 200 kg/kW (though there is a design on the drawing board that should get about 100 kg/kW). Efficiency is about 6%. The near term goal is to develop an RTG with an alpha of 100 to 60 kg/kW and an efficiency of 15 to 20%.

An RTG based on a Stirling cycle instead of thermocouples might be able to reach an efficiency of 35%. Since it would need less Pu-238 for the same electrical output, a Stirling RTG would have only 0.66 times the mass of an equivalent thermocouple RTG. However, NASA is skittish about Stirling RTGs since, unlike conventional ones, Stirlings have moving parts, which are yet another possible point of failure on prolonged space missions.

Nuclear weapons-grade plutonium-239 cannot be used in RTGs. Non-fissile plutonium-238 has a half-life of 85 years, i.e., the power output will drop to one-half after 85 years. To calculate power decay:

P1 = P0 * 0.9919^Y

where:

  • P1 = current power output (watts)
  • P0 = power output when RTG was constructed (watts)
  • Y = years since RTG was constructed

Example

If a new RTG outputs 470 watts, in 23 years it will output 470 x 0.9919^23 = 470 x 0.83 = 390 watts

Wolfgang Weisselberg points out that this equation just measures the drop in the power output of the slug of plutonium. In the real world, the thermocouples will deteriorate under the constant radioactive bombardment, which will reduce the actual electrical power output even further. Looking at the RTGs on NASA's Voyager space probe, it appears that the thermocouples deteriorate at roughly the same rate as the plutonium.

Plutonium-238 has a specific power of 0.56 watts/gram, or 560 watts per kilogram, so in theory all you would need is 470 / 560 = 0.84 kilograms. Alas, the thermoelectric generator which converts the thermal energy to electric energy has an efficiency of only 6%. At 6% thermoelectric efficiency, the plutonium RTG has an effective specific power of 560 x 0.06 = 33.6 watts per kilogram of 238Pu (0.03 kilograms of 238Pu per watt, or 30 kg per kW). This means you will need an entire 14 kilos of plutonium to produce 470 watts.
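
A minimal Python sketch tying the decay equation and the sizing arithmetic together (constants are the document's own; thermocouple degradation is ignored):

    # Minimal sketch of RTG power decay and Pu-238 mass sizing.
    PU238_W_PER_KG = 560.0   # thermal specific power of Pu-238
    DECAY_FACTOR = 0.9919    # per-year power retention (85-year half-life)

    def rtg_power_w(initial_w: float, years: float) -> float:
        """Electrical output after the given number of years."""
        return initial_w * DECAY_FACTOR**years

    def pu238_needed_kg(electric_w: float, efficiency: float = 0.06) -> float:
        """Pu-238 mass needed for a given electrical output."""
        return electric_w / (PU238_W_PER_KG * efficiency)

    print(rtg_power_w(470, 23))        # ~390 W, the worked example above
    print(pu238_needed_kg(470))        # ~14 kg with 6% thermocouples
    print(pu238_needed_kg(470, 0.35))  # ~2.4 kg with a 35% Stirling converter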

This is why a Stirling-based RTG with an efficiency of 35% is so attractive.

Many RTG fuels would require less than 25 mm of lead shielding to control unwanted radiation. Americium-241 would need about 18 mm of lead shielding, and plutonium-238 needs less than 2.5 mm; in many cases no shielding is needed, as the casing itself is adequate. Plutonium is the radioisotope of choice but is hard to come by (due to nuclear proliferation fears). Americium is more readily available, but offers lower performance.

At the time of this writing (2014), NASA has a severe Pu-238 problem. NASA only has about 16 kilograms left, an RTG needs about 4 kg each, and nobody is making any more. NASA had been purchasing it from the Russian Mayak nuclear industrial complex for $45,000 per ounce, but in 2009 the Russians refused to sell any more.

NASA is "rattled" because they need the Pu-238 for many upcoming missions, they do not have enough on had, and Congressional funding for creating Pu-238 manufacturing have been predictably sporadic and unreliable.

The European Space Agency (ESA) has no access to Pu-238 or RTGs at all. This is why their Philae comet lander failed when it could not get solar power. The ESA is accepting the lesser of two evils and is investing in the design and construction of americium-241 RTGs. Am-241 is expensive, but at least it is available.

Nuclear Fission Reactors

Los Alamos reactor

Component           Mass
Fuel region         157 kg
Reflector           154 kg
Heat pipes          117 kg
Reactor control     33 kg
Other support       32 kg
Total reactor mass  493 kg

For a great in-depth analysis of nuclear power for space applications, I refer you to Andrew Presby's engineer's degree thesis: Thermophotovoltaic Energy Conversion in Space Nuclear Reactor Power Systems. There is a much older document with some interesting designs here.

As far as the nuclear fuel required, the amount is incredibly tiny. In this case that means burning a microscopic 0.01 grams of nuclear fuel per second to produce a whopping 1,000 megawatts! That's the theoretical maximum, of course; you can find more details here.

Nuclear fission reactors have an alpha of about 18 kg/kW. However, Los Alamos labs had an amazing one-megawatt Heat Pipe reactor design that massed only 493 kg (an alpha of 0.493 kg/kW); see the component breakdown in the table above.

Fission reactors are attractive since they have an incredibly high fuel density, they don't care how far you are from the Sun nor if it is obscured, and they have power output that makes an RTG look like a stale flashlight battery. They are not commonly used by NASA due to the hysterical reaction of US citizens when they hear the "N" word. Off the top of my head the only nuclear powered NASA probe currently in operation is the Curiosity Mars Rover; and that is an RTG, not an actual nuclear reactor.

For a space probe a reactor in the 0.5 to 5 kW power range would be a useful size, 10 to 100 kW is good for surface and robotic missions, and megawatt size is needed for nuclear electric propulsion.

Here is a commentary on figuring the mass of the reactor of a nuclear thermal rocket by somebody who goes by the handle Tremolo:

Now, onto a more practical means for generating 1 MW of power using a plutonium fission reaction.

To calculate the mass required to obtain a certain power level, we have to know the neutron flux and the fission cross-section. Let's assume the flux is 1E14 neutrons/cm²/sec, the cross section for fast fission of Pu-239 is about 2 barns (2E-24 cm²), the energy release per fission is 204 MeV, and the Pu-239 number density is 4.939E22 atoms/cm³. Then the power is

P = flux * number density * cross section * MeV per fission * 1.602E-13 watt/MeV

P = 1E14 * 4.939E22 * 2E-24 * 204 * 1.602E-13 = 323 W/cm³

So, for 1 MW, we need 1E6/323 = 3,100 cm³. Given a density of 19.6 gm/cm³, this is 19.6 * 3,100 = 60,760 gm, or 60.76 kg.

The next question to ask is: how long do you want to sustain this reaction? In other words, what is the total energy output?

For example, a Watt is one Joule per second. So, to sustain a 1 MW reaction for 1 year, the total energy is 1E6 J/s * 3.15E7 s/year = 3.15E13 J.

For Pu-239, we have 204 MeV per fission, and we have 6.023E23 / 239 = 2.52E21 atoms/gm. So, the energy release per gram is 2.52E21 * 204 MeV/fission * 1.602E-13 J/MeV = 8.24E10 J/gm.

Therefore, to sustain 1 MW for 1 year, we will use 3.15E13 J / 8.24E10 J/gm = 382 gm of Pu-239 or 0.382 kg. This is only a small fraction of the total 60.76 kg needed for the fission reaction.

Finally, this is thermal energy. Our current light water reactors have about a 35% efficiency for conversion to electric power. So, you can take these numbers and essentially multiply by 3 to get a rough answer for the total Pu-239 needed: 3 x 60.76 = 182 kg. Rounding up, you would need roughly 200 kg for a long-term sustained 1 MW fission reaction with a 35% conversion efficiency.

These calculations assume quite a bit and I wouldn't use these numbers to design a real reactor, but they should give you a ballpark idea of the masses involved.

Tremolo
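
A minimal Python sketch reproducing Tremolo's arithmetic (his assumed constants, not real reactor design data):

    # Minimal sketch of Tremolo's Pu-239 reactor sizing estimate.
    FLUX = 1e14              # neutrons/cm^2/s
    N_DENSITY = 4.939e22     # Pu-239 atoms/cm^3
    CROSS_SECTION = 2e-24    # fast fission cross section, cm^2 (2 barns)
    MEV_PER_FISSION = 204.0
    W_PER_MEV = 1.602e-13
    PU_DENSITY = 19.6        # g/cm^3
    J_PER_GRAM = 8.24e10     # energy release per gram of Pu-239 fissioned

    power_density = (FLUX * N_DENSITY * CROSS_SECTION
                     * MEV_PER_FISSION * W_PER_MEV)       # ~323 W/cm^3
    core_volume_cm3 = 1e6 / power_density                 # ~3,100 cm^3 for 1 MW
    core_mass_kg = PU_DENSITY * core_volume_cm3 / 1000.0  # ~61 kg of Pu-239
    burned_kg = (1e6 * 3.15e7) / J_PER_GRAM / 1000.0      # ~0.38 kg fissioned per year

    print(power_density, core_mass_kg, burned_kg)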

New reactors that have never been activated are not particularly radioactive. Of course, once they are turned on, they are intensely radioactive while generating electricity. And after they are turned off, there is some residual radiation due to neutron activation of the reactor structure.

How much deadly radiation does an operating reactor spew out? That is complicated, but Anthony Jackson has a quick-and-dirty first order approximation:

r = (0.5 * kW) / d²

where:

  • r = radiation dose (Sieverts per second)
  • kW = power production of the reactor core, which will be greater than the power output of the reactor due to reactor inefficiency (kilowatts)
  • d = distance from the reactor (meters)

This equation assumes that a 1 kW reactor puts out an additional 1.26 kW in penetrating radiation (mostly neutrons) with an average penetration depth (1/e) of 20 g/cm².
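
A minimal Python sketch of this rule of thumb (the 100 kW core and the distance are hypothetical examples of mine):

    # Minimal sketch of Anthony Jackson's unshielded dose approximation.
    def dose_sv_per_s(core_kw: float, dist_m: float) -> float:
        """Radiation dose rate near an operating, unshielded reactor core."""
        return (0.5 * core_kw) / dist_m**2

    # A 100 kW core at 100 meters, with no shadow shield:
    print(dose_sv_per_s(100, 100))   # 0.005 Sv/s: a lethal ~5 Sv in under 20 minutes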

As a side note, in 1950's era SF novels, nuclear fission reactors are commonly referred to as "atomic piles." This is because the very first reactor ever made was basically a precision assembled brick-by-brick pile of graphite blocks, uranium fuel elements, and cadmium control rods.

SPACE NUCLEAR REACTOR PROGRAM COSTS

Space Nuclear Power Program

This program aims to develop high-power, safe, reliable nuclear energy sources for manned and deep-space missions. Nuclear electric propulsion will open a new frontier of exploration in the outer solar system and allow manned missions to Mars and other places with difficult solar power problems.

A starting point might be the direct gas reactor studied for the Prometheus project, a 1MWt/200kWe reactor at 40-50kg/kW (7.5-11t). Another might be the heatpipe reactor SAFE-400, a 400kWt/100kWe reactor at unknown specific mass. Sodium-cooled designs in the 70kg/kW range are also possible. (Masses include radiators, conversion and power conditioning.)

The SAFE-30 project demonstrated simple, affordable ground testing of non-nuclear components. The Prometheus project demonstrated productive cooperation with Naval Reactors and related organizations to tap their nuclear technology expertise. Joining these approaches will allow the project to proceed immediately into materials testing and design optimization. The most urgently needed component is an experimental fast reactor for materials testing. Also critical will be a design process that focuses on modular power units so the same basic design can be used for a wide range of missions, presumably in the 50-100kWe range.

Costs are not straightforward to estimate. One baseline figure is the $4.2 billion estimated to develop the Prometheus reactor system. Let’s assume a 50% increase on that figure and use $6.3 billion for the development program; further assume hardware costs of $5000 per watt. Three demonstration units will be built: one for flight test (possibly on a later carrier flight), one for a NEP asteroid capture and one as a base power supply for a manned mission. The goal is a 50kWe power unit massing 2,000kg or better (40kg/kW) with at least 20-year useful life. Individual units could power NEP asteroid retrieval tugs or small ISRU operations; sets of four could power manned bases or deep-space probes. A second phase using knowledge gained from the first generation reactor program would aim to build power units of 1MWe and 10t mass range (~10kg/kW) for use on deep-space and interstellar probes, permanent bases and orbital manufacturing facilities. All future NEP missions would be able to use a proven, existing design and avoid developmental uncertainties.

Estimated costs:

$6,300m development program
$750m flight hardware
$2,115m margin
$9,165m total cost ($611m per year)

Alternate scenario: A fast-spectrum reactor is made available by another country or organization for materials testing. Majority of the design, testing and construction is outsourced to Naval Reactors and experienced contractors. Additional funding is provided by ESA and allied space agencies in return for access to flight hardware. Development program costs cut in half and a fourth power unit is built for ESA use. New costs:

$3,150m development program
$750m flight hardware
$1,170m margin
$5,070m total cost ($338m per year)

A REVIEW OF NUCLEAR ELECTRIC POWER

 This is a subject that's been stewing for a while now. I often see debates in comment sections over whether or not nuclear electric power is feasible in space. Only rarely do those arguing hold the same assumptions about what nuclear power actually means. As a result, these debates rarely convince anyone of anything beyond the stubborn natures of their opponents.

 The goal of this post is to briefly cover the range of commercial, military and scientific nuclear power systems ranging from a few kilowatts to over a gigawatt. I will follow up the (hopefully) useful background information in a later post with some fanciful projections and my usual call for unlikely investments in space.

Very briefly:
Nuclear energy is produced by the fission (splitting) of certain heavy atoms. This fission produces radiation which becomes heat which is then turned into electricity. The leftover heat and spent nuclear fuel must be dealt with. Shielding must be provided.

Radiation

 I won't get too deep into this subject, but there are several types of radiation. All of these types create challenges for material designs, since most materials become brittle with exposure to radiation. (Would you like to know more?)

 - Neutrons are nuclear particles emitted during fission; a certain amount of neutron radiation is needed to start up most nuclear reactors. Neutrons can be either fast (high energy) or thermal (fast neutrons that have been slowed by smashing into a moderator). Neutrons are a form of penetrating radiation; they are a neutral particle so electrical interactions have no effect, which means they can penetrate deep into many materials. Neutrons can also 'activate' other materials; once a neutron has been slowed down by many collisions with atoms, it eventually gets slow enough to be captured. This neutron capture process can form radioactive isotopes of common materials like iron or nitrogen. The best shielding for neutrons is either a lot of hydrogen (usually as water or polyethylene) or layers of neutron reflectors (lead, bismuth, beryllium; see below). It's important to note that the neutron environment inside a reactor must be carefully controlled for efficient operation, and there is definitely a lower limit as well as an upper limit for workable designs.
 - Gamma rays are very high energy photons (electromagnetic energy) produced either directly during fission, indirectly after a positron (anti-electron) is released and then annihilated with an electron, or indirectly by a beta particle colliding and emitting bremsstrahlung. Gamma is undesirable in a reactor because it is penetrating, very harmful and can be activating. Gamma rays can trigger the fission of deuterium, for example, causing the release of a moderate-energy neutron. The best shielding for gamma is a heavy metal like tungsten, but often a conductive liner (steel) and a bulk absorber (very thick concrete) are used.
 - Other particles (protons, alpha particles and heavier fission fragments) have different typical energy levels but are largely the same as far as a reactor is concerned. They are typically charged, can be slowed or stopped efficiently with metals and eventually become troublesome atoms trapped inside the fuel or coolant. Higher-speed fragments will also emit bremsstrahlung as they slow down, so essentially all nuclear reactors produce some level of gamma radiation.

Fuels

 The simplest fission fuel is an unstable isotope that spontaneously decays. Plutonium-238 is probably the most common example; this is used in RTG (radioisotope thermoelectric generator) units and radioactive heater units on deep space probes. Strontium-90 is another example, widely used in the Soviet Union in space and on Earth as a reliable power source for remote outposts like lighthouses. Some additional possibilities are Polonium-210 (powerful, dangerous, short life) and Americium-241 (long life, relatively high penetrating radiation output). These decay fuels are usually used as a simple source of heat, either maintaining operating temperature for some other device or powering a thermoelectric generator. The ideal unstable fuel would be something that decays only into alpha particles and stable products, producing no penetrating or activating radiation while having a decay rate high enough to be reasonably energy-dense yet low enough to operate for a few decades. No such material is known.

 Next is fissile material. A fissile isotope is one that can capture a low-energy neutron and then split. The four main examples are uranium-235 (naturally occurring), uranium-233 (bred from thorium-232), plutonium-239 (bred from uranium-238) and plutonium-241 (bred from plutonium-239 by way of Pu-240). Fissile material is useful for making nuclear weapons, so the production and use of these isotopes is very tightly controlled. Inefficient early reactors couldn't use natural uranium because the fissile content was too low; the U-235 had to be separated (enriched) to produce a fuel that would work properly. The same technology is used to make highly-enriched material for weapons, so again enrichment technology is tightly controlled. More modern reactor designs are more neutron-efficient, so they can use fuel that is less enriched or not enriched at all. Note that highly-enriched fissile material is very dangerous to handle or transport; too much of it in one place or accidentally exposed to neutron flux could lead to a chain reaction, a sudden spike in radioactivity and heat.

 Last is fertile material. A fertile isotope is one that can capture a neutron and convert into a fissile isotope, which can then be split with another neutron. Examples are uranium-234 (natural, makes U-235), uranium-238 (natural, makes Pu-239 by way of U-239), thorium-232 (natural, makes U-233), plutonium-238 (artificial, makes Pu-239) and plutonium-240 (artificial, makes Pu-241). Fertile materials are relatively stable; they are not particularly radioactive nor will they do anything dangerous if you put a lot of it in one place. Most of them are flammable metals, but that is a chemical hazard rather than a nuclear hazard; burning U-238 is no more dangerous than burning magnesium (though the results are a bit more toxic). Fertile materials (including natural uranium) are far easier to transport safely than fissile or unstable materials.

Fuel Cycles

 The fuel by itself is only part of the story. The full fuel cycle is important to consider. Earth-based commercial power reactors can rely on an extensive infrastructure of mining, refining, enrichment, fabrication, reprocessing and disposal. Space-based reactors will have none of those advantages.

 Most commercial reactors and some military reactors are thermal, meaning their fast neutrons are moderated down to an energy level that allows for efficient capture in fissile fuel. Most such reactors require enriched fuel, which means fuel elements would be shipped from Earth until nuclear materials processing infrastructure is established in space. This is politically, economically and environmentally difficult, so Earth-style thermal reactors are not likely to be used in space for a long time if ever. One possible exception is CANDU, a heavy water moderated thermal reactor that can burn natural uranium (and a lot of other radioactives) as fuel. Interestingly, ice on Mars is significantly richer in heavy water than on Earth thanks to atmospheric losses over the eons; this might be a reasonable medium-term approach, particularly since the design does not require massive pressure vessels.

 Many research and medical reactors and some military reactors are fast, meaning their neutrons are used as they are produced. Fast reactors are often called breeder reactors, because they turn fertile material into fissile material which is then split for energy. An initial 'spark plug' of fissile material is used to generate enough neutrons to get the reactor going, then the majority of the fuel is natural uranium, natural thorium or some other fertile material. The earliest breeder reactors were used to generate fissile plutonium for the production of nuclear weapons, but current designs using thorium are specifically intended to prevent any application to weapons (proliferation-safe). Small-scale research and medical reactors are used to irradiate materials to make useful isotopes for medical imaging, cancer radiation therapy and RTG power cores. Thorium-based reactors are particularly interesting for space colonization since they could be fueled using rudimentary refining techniques and produce little waste.

Moderators, Coolants, Poisons and Reflectors

 The neutron environment inside a reactor is critically important to safe and efficient operation. Four types of materials are present in most reactors and all of them affect how neutrons behave. Many materials have more than one property from this group.

 A moderator is some material that can absorb energy from neutrons without stopping them entirely. A coolant is something that can carry heat efficiently and hopefully is not too corrosive or degraded by radiation. By far the most common material in both cases is plain water thanks to its high hydrogen content, excellent heat capacity and reasonable thermal conductivity. Commercial power reactors are almost exclusively thermal, either pressurized water or boiling water types, which use purified light water to moderate neutrons and to carry heat out of the core. Care must be taken that the design is passively safe; that is, if the coolant were to boil suddenly then the reactor should naturally reduce its power output without intervention. For an example of passive safety, check out TRIGA (training, research, isotopes, General Atomics) reactors; operating safely since 1958 these are the only reactors licensed for unattended operation.

 The two other moderators in common use are heavy water (water made of oxygen and deuterium) and graphite (pure carbon). A third used in a handful of experimental and military reactors is lithium-7 (with or without beryllium), typically as part of a molten salt.
 The main heavy water reactor design is CANDU, which uses it as both moderator and coolant. Derivative designs use separate light and heavy water systems, with the heavy water providing mostly moderation and the light water providing mostly cooling. Heavy water is used because the hydrogen already has an extra neutron and is much less likely to capture another one. It does happen, so heavy water reactors produce small amounts of tritium.
 Graphite always uses a separate coolant since it is a solid. Graphite was used in the first reactor (the Chicago pile) and in many others since then due to its stability, mechanical strength, incredible temperature tolerance and ready availability. As a solid, graphite is susceptible to lattice defects called Wigner energy; this led to the Windscale fire before it was understood, though most modern reactors operate above the annealing temperature of carbon so this is not a concern.
 Beryllium is a suitable moderator if you only look at physics. Unfortunately it's expensive and extremely toxic, so it is not normally used on its own. In a mix with lithium-7 and fluorine it forms the coolant/moderator FLiBe used in molten salt reactors.

 Fast reactors need to have as little moderation as possible (or at least a predictable and controllable amount) inside the core. That means they need to use coolants that are poor moderators or are neutron-transparent. Common materials are sodium and lead (yes, lead; it's great at absorbing gamma radiation but it tends to reflect neutrons). Some molten salt reactors are also fast reactors and may use zirconium and sodium fluorides instead of beryllium and lithium fluorides in the salt mix. It's worth noting that some graphite-moderated reactors are cooled with molten lead or sodium, since using a coolant that is a poor moderator means the reactor's behavior is more predictable during transient problems with coolant flow.
 Carbon dioxide has been used as a coolant (with moderating properties) in the past, and may be used again as a supercritical fluid. This requires fairly high pressures, but learning how to handle supercritical CO2 would have useful applications for cooling or refrigeration elsewhere in space.
 Helium has also been used as a coolant and is proposed to be used in some very high temperature reactors as both the coolant and the working fluid for the turbine. Because it resists activation, if a reactor core uses fuel elements that trap their own fission products then the helium can pass directly through the core and into the generator turbine with no intermediate heat exchangers; this requires very high temperature turbine materials but leads to superior efficiency and compact, simple design.
 Zirconium is nearly transparent to neutrons. Many fuel assemblies use Zircalloy, an alloy that is at least 95% zirconium, to allow fast neutrons to escape the fuel pins and to allow moderated neutrons back into the fuel to trigger more fission. A common fuel is uranium zirconium hydride, with zirconium alloyed for structural strength and hydrogen adsorbed for inherent moderation.

 A poison is some material that absorbs neutrons very efficiently. Examples include lithium-6, boron, hafnium, xenon-135 and gadolinium. These are used in control rods and safety systems or are produced naturally by nuclear reactions within the core. Over time, neutron poisons build up in the fuel; the dynamics of this are complex but neutron poisons are the main reason why uranium fuels only burn about 2% of their potential in one pass through a reactor. The poison byproducts have to be removed for the fuel to become usable again. Xenon is the most important of these over short timescales.
 Hafnium, boron and gadolinium are common materials for control rods. These devices allow operators to precisely control how many neutrons are flying around at a given time inside the core and can also be used as an emergency shutdown device. Control rods may be suspended above the core by electromagnets; during a loss of electrical power the rods will naturally fall into the core and stop primary activity. Soluble boron salts are used as an emergency shutdown tool in water-moderated reactors; the salt is injected into the moderator or coolant loop, causing an immediate and dramatic reduction in neutron flux and stopping the reactor's primary activity. Radioactive byproducts will still produce significant heat and radiation for hours to days, so additional safety features like auxiliary cooling are required.

 A reflector is a material that reflects (elastically scatters) neutrons. Primary examples are beryllium, graphite, steel, lead and bismuth. This is another reason why graphite was used in early reactors: a layer of solid graphite blocks around the outside of the pile reflected neutrons back into the core, reducing the required size of the core and reducing the required neutron shielding.
 Many reactor designs intended for use in space rely on controllable reflectors rather than controllable poisons; the reactor core would be safe (subcritical) by design, only able to operate when neutron reflectors were properly placed. That allows a reactor to be launched before activation, meaning the potential radioactive release during a launch accident would be minimized.
Some other designs use reflectors to boost reactivity near the end of life for a given batch of fuel, or otherwise as an alternative to poisons for control. An example is the SSTAR design, which would use a movable reflector to move the active region of the reactor through a fuel load over the course of 30 years rather than refueling every ~18 months. If the reflector were to fail then the reactor's output would taper off to nearly nothing over a few days. By relying on reflectors rather than poisons, the reactor requires a lower level of neutron flux to operate and can use less efficient (less or not enriched) fuels.

Turning heat into electricity

 Once you have a steady supply of heat, you have to put it to use somehow. The laws of physics are singularly unforgiving about energy conversion. For every useful unit of electricity produced you will have to deal with two to five units of waste heat in any practical design. Less efficient options are always available.
 In space we don't have access to free-flowing rivers or oceans of water to use as coolant; without conduction or convection we can rely only on radiation. Thermal radiators are significantly more efficient at high temperature, so the higher our core reactor temperature the better for a free-flying spacecraft. (Radiative output scales as the fourth power of temperature, so a small increase in temperature causes a very large increase in radiator output.) The temperature limit for a reactor is usually based on either the primary coolant or the fuel material, around 900-1000 °C for zircalloy cladding and possibly higher for ceramic or carbide fuel elements. Molten salt or gas-cooled reactors could go higher, while water-cooled reactors are a fair bit lower. (Water-cooled reactors use water at high pressures, so the boiling point of the coolant is typically several hundred °C.) I won't get into the physics and mechanics of radiators here other than to say they are similar to solar panels in terms of areal density, pointing and deployment. The size of a radiator system depends very strongly on the temperature of the coolant and whether there is a large hot object (like Earth) nearby.
 For a surface base with access to a large thermal mass (dirt, ice, etc.) there may be the option of process heat. Some of the waste heat from the reactor can be used to do useful work like melting ice, heating greenhouses or powering thermochemical reactions like the sulfur-iodine process for producing hydrogen. From the perspective of the electrical generation system this is still waste energy, but these uses increase the overall efficiency of the system. This kind of cogeneration greatly increases the required radiator area in free space, so although it seems counterintuitive it may not be mass-efficient to use waste heat for chemical processes on an orbital station. Rather, it may actually take less mass to produce electricity (at 20-30% efficiency, but with high-temp radiators) and use it directly in electrochemical processes vs. thermochemical processes. Each individual mission / craft / architecture is unique and may come down on either side of the line.
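
(ed note: as a minimal Python sketch of that fourth-power scaling, here is the ideal radiator area per kilowatt of waste heat, assuming a one-sided radiator with emissivity 0.9 and no nearby heat source:)

    # Minimal sketch: ideal radiator area per kilowatt of waste heat.
    SIGMA = 5.670367e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)

    def radiator_m2_per_kw(temp_k: float, emissivity: float = 0.9) -> float:
        """Radiator area needed to reject 1 kW at a given temperature."""
        return 1000.0 / (emissivity * SIGMA * temp_k**4)

    print(radiator_m2_per_kw(400))    # ~0.77 m^2/kW for a cool water-loop radiator
    print(radiator_m2_per_kw(1200))   # ~0.009 m^2/kW for a hot gas-cycle radiator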

 So, with a source of heat (reactor coolant loop) and a sink of heat (radiator coolant loop) we can put a heat engine between the two and extract useful energy. The most basic approach is to use the thermoelectric effect (like a Peltier cooler), directly converting heat into an electric current. These devices typically have no moving parts and are highly reliable, but are poorly scalable and only modestly efficient. RTGs use these, as have some flown reactors on Soviet satellites.
 By far the most common method on Earth is to use a steam turbine in the Rankine cycle. Heat from the reactor loop boils water into steam in a steam generator, which is passed through a turbine to rotate a shaft. The depleted steam is recondensed into water, passing low temperature waste heat into the cooling loop. This would be extremely inefficient in space as the low waste temperature would require enormous radiators.
 A promising technique is to use the Brayton cycle in a reactor with a gas coolant. The most likely of these is helium, since it is very stable and nearly impervious to neutrons. A space-optimized Brayton cycle reactor (see for example Project Prometheus) would circulate helium through the core and pass it directly through the turbine, with no intermediate loops or heat exchangers. This is possible only because helium does not become radioactive inside the core, but it also requires that the fuel elements contain all fission products; any fuel leak would contaminate the turbine. A cycle using steam without a condenser and boiler is also possible.
 Surface bases with abundant heatsink potential could use a Combined cycle. This is a high-temperature Brayton cycle turbine whose waste heat is still high enough to run a Rankine cycle turbine of one or two stages. The Rankine cycle exhaust heat is quite low temperature and would have to be rejected into a body of water (or some other liquid) or pumped into the ground like a reverse geothermal system. The best case would be a mixed-use system that provides electricity, industrial process heat for thermochemistry and ice melting, and life support heat for maintaining livable habitat conditions. Using an array of greenhouses as your low-temperature radiator system would be ideal. The drawbacks of a system like this are complexity, need for available heat sinks and the fact that each part of the process relies on all other parts maintaining a certain pace. If you want to have electricity while your industrial processes are not running then you need an alternate heat sink to replace those processes.

Dealing with waste

 Nuclear reactions produce radiation. Some of that radiation ends up activating parts of the reactor, which means those parts become radioactive themselves. Pumps, valves, pipes, pressure vessel walls, all of the structure in the core of a reactor will become radioactive over time. This material generally can't be reprocessed into a nonradioactive form. (It's possible but would be extremely expensive.) This is usually low to medium grade nuclear waste and the usual solution is to slag it, encase it in concrete and bury it. That probably works for surface bases on bodies with no 'weather' cycle, but it would be a no-go for active worlds like Titan / Io or icy worlds like Europa. Even then, there has to be some standardized way to indicate to future generations that there is something dangerous buried there. For craft and colonies that can't bury their waste, they would have to find some place to send it safely. This remains an unsolved problem on Earth; perhaps a waste repository and reprocessing center on the moon might some day be viable, provided shipments of waste are ever allowed to be launched.
 The fuel itself produces radioactive byproducts as a result of fission. These are mostly actinides, but there are some radioactive gases like iodine as well. On Earth we generally store fuel elements indefinitely in cooling ponds or eventually in dry casks. Fuel elements can be reprocessed, meaning the component materials are separated, byproducts are filtered out and the repurified fuel is recast into new fuel elements. The actinide wastes can be burned in certain types of reactor (usually the same sort that can burn thorium, but some fast spectrum reactors are designed for waste destruction). The old liners or shells and any equipment used in fuel processing will generally be considered high-grade nuclear waste; this is treated much like other types of waste but will be radioactive for a much longer time due to contamination with radioactive isotopes. Fuel reprocessing facilities are a proliferation concern because they allow for the extraction of weapons-grade plutonium from spent uranium fuels. Thorium cycle reactors would be politically easier because it is far more difficult to get anything of military interest out of the fuel.

Shielding

 Radiation from an operational reactor is damaging to people, electronics and structures. Shielding must be provided to mitigate this damage. Earth reactors solve this problem using cheap, bulky, heavy material in abundance. Usually the reactor core is placed inside a containment building; the building is a thick stainless steel liner and several meters of concrete all around. Openings usually take sharp turns so there is no line of sight from the core to the outside world; radiation doesn't turn corners. (It does scatter, so it's still not simple.)
 Free-flying reactor designs don't have to worry about contaminating a planet full of voters during a system failure. These usually have the reactor at one end of the ship on a long truss, with a small shield plug (a shadow shield) that protects the rest of the spacecraft. Ships like these are easy to see coming if you have gamma detectors. They are great for deep space exploration, but they make bad neighbors and are difficult to handle for docking maneuvers since a small misalignment could kill everyone on the other ship.

Possible scenarios - surface base

 Let's look at the simplest case first. This is a manned surface colony with basic industry already online. Base metals (iron, nickel, aluminum) and bulk material (dirt) are available. First the coolant system is built (or installed) and tested. Next a containment pit is dug, then lined in concrete/sintered or pressed regolith/etc. Nickel-iron (simply iron from here on) blocks are piled up like bricks and welded together. A self-contained core unit is assembled on Earth and shipped in one piece, placed into the pit and connected to the radiator system. The pit is covered with iron sheets or beams with a layer of concrete/sinter/etc. then buried. The core unit is not activated until it is installed, so it is not radioactive and has no unusual handling restrictions. It would be designed to run for 20-30 years unattended, with no maintenance access possible; it would probably be limited to a few tons mass at most (~6-8t; 300-500kg fuel mass) and up to a few hundred kilowatts of electricity. New core assemblies would be shipped about every decade to maintain redundancy, more often if the colony's energy needs are growing. Cores would be in the few hundred million dollar range (plus shipping); comparable cores on Earth can be built for tens of millions but they don't need to survive a reentry accident and can be repaired on-site. Lifetime power generation (20 years, 95% availability) would be about 33 GWhr of electricity.
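The quoted lifetime figure checks out with simple arithmetic. A sketch, assuming a 200 kW point within the "few hundred kilowatts" range:

    # Lifetime electricity = power * hours * availability.
    power_kw     = 200      # assumed output within the quoted range
    years        = 20
    availability = 0.95

    energy_gwh = power_kw * years * 8766 * availability / 1e6   # 8766 h/yr average
    print(f"{energy_gwh:.1f} GWh over the core's life")         # ~33 GWh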
 The whole assembly would be several meters underground, safe to stand above while operating. A coolant failure would leave the reactor hot but safe, which means the coolant system could be rebuilt or replaced without needing to do anything to the reactor core. In the event of a serious problem like a core meltdown, any released radioactive gases would escape into space or be diffused through the (already unbreathable) atmosphere. Particles could be a bigger problem; on Mars they would be swept away in the next dust storm but on the Moon they would likely stick around for a while unless they were small enough for electrostatic scattering. Still, no crops would be contaminated.

 The next step would be an accessible reactor core that can be refueled. Fuel elements could be shipped from Earth or manufactured locally. The containment structure would not be much different, but the core could be bulkier; this would allow for things to be shipped in pieces and assembled on-site. Telerobotics would be ideal for this work, but the initial construction could be safely done in person. If the local industry is capable of building small superalloy pressure vessels then something like the CANDU approach can be used, where small tubes with fuel run through a large 'tub' of moderator+coolant at manageable pressure. Regardless, a gigawatt-sized pressure vessel is a tall order for local industry (many nations on Earth couldn't build a reactor pressure vessel today) and for in-space shipping; one way or another the approach will have to be modular and scalable. Perhaps an array of many reactor cores will feed a small number of high-power turbines. Core units will likely be in the range of a few hundred kW to about one MW each (5-25t including core coolant but not turbines).
 This modular approach would allow the colony to transition into locally-manufactured fuel elements and other parts. These might initially be reprocessed fuel from earlier cores or they could start right away with locally mined material.

 Beyond that, once the colony has the capacity to make high-performance turbines, pumps, pressure vessels, fuel assemblies, etc. then they will essentially be self-reliant.

Bimodal NTR

Nuclear Thermal Rockets are basically nuclear reactors with a thrust nozzle on the bottom. A concept called Bimodal NTR allows one to tap the reactor for power. This has other advantages. Since the reactor is running warm at a low level all the time (instead of just while thrusting) it doesn't have to be pre-heated if you have a burn coming up. This reduces thermal stress, and reduces the number of thermal cycles the reactor will have to endure over the mission. It also allows for a quick engine start in case of emergency.

In the real world, during times of disaster, US Navy submarines have plugged their nuclear reactors into the local utility grid. This supplies emergency electricity when the municipal power plant is out. In the science fiction world, a grounded spacecraft with a bimodal NTR could provide the same service.

Dusty Plasma Fission Reactors

This is from A Half-Gigawatt Space Power System using Dusty Plasma Fission Fragment Reactor (2016)

Rodney Clarke and Robert Sheldon were working on a fission-fragment rocket engine when they noticed a useful side-benefit.

There is a remarkably efficient (84%) electrical power plant called a Magnetohydrodynamic Generator (MHD generator). They also have the virtue of being able to operate at high temperatures, and have no moving parts (which reduces the maintenance required and raises reliability). A conventional electrical power generator spins a conducting copper wire coil inside a magnetic field to create electricity. An MHD generator replaces the solid copper coil with a fast moving jet of conducting plasma.
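The usual figure of merit for an MHD channel is its volumetric power density, P = σu²B²k(1−k), where σ is the plasma conductivity, u the flow speed, B the magnetic field, and k the load factor. A sketch with assumed but representative numbers (none of them from a specific design):

    # MHD generator power density: P = sigma * u^2 * B^2 * k * (1 - k)
    sigma = 40.0     # S/m: seeded plasma at a few thousand kelvin (assumed)
    u     = 1000.0   # m/s channel flow speed (assumed)
    B     = 4.0      # tesla field strength (assumed)
    k     = 0.5      # load factor; k = 0.5 maximizes extracted power

    power_density = sigma * u**2 * B**2 * k * (1 - k)        # W/m^3
    print(f"{power_density / 1e6:.0f} MW per cubic meter")   # ~160 MW/m^3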

Because many designs for fusion rocket engines and fusion power plants produce fast moving jets of plasma, MHD generators are a perfect match. Ground-based power plants would simply spray the jet of fusion plasma into the MHD generator.

Fusion spacecraft could be bimodal. An MHD generator could be installed in the exhaust nozzle to constantly bleed off some of the thrust power in order to make electricity; this was popular with inertial confinement fusion designs, which need to recharge huge capacitors before each fusion pulse. Alternatively the MHD generator could be installed at the opposite end of the fusion reaction chamber. The fusion plasma goes down out the exhaust nozzle for thrust, but it can be diverted upwards into an MHD generator for electrical power.

Finally getting to the point, Clarke and Sheldon realized that a fission-fragment rocket engine also produces a jet of plasma. Therefore, it too can be bimodal with the addition of an MHD generator.

Cutting to the chase, they would have a jaw-dropping specific power of 11 kWe/kg! The rough design they made had a power output of 448 megawatts and a total mass of 38,430 kg (38 metric tons).

Dusty Plasma Power Reactor
Specs
  • Power Output: 448 MW
  • Specific Power: 11 kWe/kg
Mass Schedule
  • U235 Fuel: 4.27 kg
  • Am242m Fuel: 1.25 kg
  • Moderator: 9,424 kg
  • Moderator Heat Radiator: 28,000 kg
  • Generator Heat Radiator: 1,000 kg
  • TOTAL: 38,430 kg
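The headline number can be verified straight from the mass schedule:

    # Sanity check: specific power = electrical output / total mass.
    masses_kg = {
        "U235 fuel":                  4.27,
        "Am242m fuel":                1.25,
        "moderator":               9424.0,
        "moderator heat radiator": 28000.0,
        "generator heat radiator":  1000.0,
    }
    total_kg = sum(masses_kg.values())            # ~38,430 kg
    print(f"{448e6 / total_kg / 1e3:.1f} kWe/kg")  # ~11.7, i.e. the quoted ~11 kWe/kg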

Nuclear MHD

This design combines open-cycle gas-core nuclear thermal rockets with the sophistication of a Magnetohydrodynamic (MHD) generator. OCGC NTRs can put out much more thermal energy than a solid core reactor, since the latter has to worry about melting. And MHD generators not only have great efficiency and no moving parts, their core element is a stream of hot gas. The hotter the better.

NUCLEAR PROPULSION SCHEMES USING MHD

      I hope this discussion of MHD energy conversion will not seem out of place in a meeting on advanced reactor concepts. The fact is that to be useful in space, the MHD generator needs high temperatures of the sort that can only be produced by advanced types of reactors. I hope that today I can make the case, or at least establish a reasonable possibility, that to be useful in space the advanced reactor concepts in turn need the MHD generator.

     Although I think many of you know how an MHD generator works, let me review the basic principles briefly. Figure 1 illustrates the principle and compares it to that of a turbine generator. The basic principles are the same in the two cases. These are that expansion of a gas produces motion of a conductor and the motion of a conductor through a magnetic field generates an electromotive force. In the case of the MHD generator, the gas is itself an electrical conductor and is moved through the magnetic field. Observe that the MHD generator performs the function of both a turbine and a conventional generator. The function that it performs best is that of the turbine. In fact, it is really more useful to think of it as a high temperature turbine rather than as an electrical generator. In practice, an MHD generator would resemble a rocket nozzle with a field coil wrapped around it. It would have no hot, highly stressed, moving parts; no close tolerances; and the only solid parts, namely the walls, are readily accessible for external cooling, as are the walls of a conventional rocket. As a result, it can handle temperatures and pressures like a rocket nozzle and can stand erosive and corrosive atmospheres which would completely destroy any other type of energy conversion device in a very short time. Also as we will see later, it can produce very large amounts of power per unit volume and per unit weight.

     The primary limitation on how and where one uses an MHD generator is a low temperature limit. This is because at the present time the only way that we are sure is practical for rendering a gas conducting is introducing into it a few tenths of a percent of an alkali metal seed material and then heating it. Results obtained in combustion products, but typical for all gases, are shown on Figure 2. The points are experimental; the solid line is theoretical. Observe that the conductivity is a very steep function of temperature. In practice we find that below about 2000°K, the exact value depending upon just what gas is used, the conductivity becomes too low to be useful. Observe that a few hundred degrees change in temperature can bring about an order of magnitude change in conductivity. This in turn can bring about an order of magnitude improvement in the performance of an MHD converter.

     This illustrates why you should not be misled by statements in the literature (or photographs of our devices) into assuming that MHD generators are intrinsically very large and heavy. In our struggle to fit MHD generators into existing technology, we do indeed make them as large and as heavy as the traffic will bear. But the technology which you people are discussing here can move us a very long way up this exponential curve. For example at just about 40,000°K in hydrogen an MHD generator containing one cubic meter of volume could generate as much electric power as the sum total of all of the power plants in this country, i.e., about 200,000 megawatts.

     Now given a turbine which has no temperature or power limit and can handle any atmosphere, the next question is, how exactly can it be usefully employed in space? The answer (as is usual for questions of this nature) is that there are a virtual infinity of possibilities. The real problem of course is trying to decide which, if any, of these possibilities are really worth pursuing. I obviously should not take the time here to discuss them all. So I will discuss rather briefly a few schemes which, I hope, illustrate the range of possibilities.

     In devising these propulsion schemes, one of the ground rules has been that it should not be necessary to retain fuel within the reactor. Desirable perhaps, but NOT necessary. Figure 3 illustrates a system in which it is obviously not necessary. What we have here is essentially a conventional closed thermodynamic power cycle which is using the propellant as its heat sink. In effect what happens is that heat is transferred between the reactor and the propellant by means of solid surfaces up to the maximum temperature that solids can be used, and above that it is transferred by means of the MHD generator and the accelerator. That is, energy is transferred by convection and conduction up to perhaps 2000°K; and above that, energy and also momentum is transferred electrically. Compare this with concepts such as the glow plug and the coaxial jet in which radiation is used to transfer energy at temperatures above the solids limit. In this respect, I think the MHD scheme has two things to its advantage. First of all, without trying very hard one can make the energy delivered at the electrode wall of an MHD generator at a given temperature be orders of magnitude greater than the energy per unit area delivered by even blackbody radiation. The second advantage of the MHD scheme is that wall materials and structures which are transparent to DC electric power are a great deal easier to find than materials which are transparent to optical electromagnetic frequencies.

     Figure 4 is a simplification of the scheme shown in Figure 3. Here the same gas is used as working fluid in the reactor and MHD generator and as the propellant. As a result, the compressor and the heat exchanger are eliminated. Here we depend upon the fact that at a generator exit temperature of 2000 to 2500°K practically all of the fuel will be condensed and can be recovered by a gas-liquid separation technique without cooling the gas any further.

     Figure 5 illustrates the point that use of an MHD converter can do more than simply provide a way around the fuel containment problem. Shown here is the open cycle propulsion scheme illustrated in Figure 4, except that the power output of the generator is not put back into the propellant but rather used in an external air accelerating device. Obviously this is not a propulsion system for deep space. What we have here is the nuclear MHD analog of a turbo-rocket. The propulsive efficiency of such an arrangement is much higher than that of a rocket alone, assuming you are in the appropriate range of flight velocity through the atmosphere. In the case of a nuclear MHD turbo-rocket this appropriate velocity range could be from zero right up to the satellite velocity. Moreover, an electric ram jet might turn out to be a much easier device to get good performance out of than a comparable chemically fueled ram jet when operating in the hypersonic velocity range. There are a number of reasons for this, but what they all boil down to is just that electricity is a more highly organized or available form of energy than is chemical energy.

     Figure 6 shows the kind of specific impulse one might expect to get from the type of propulsion systems shown in Figures 4 and 5. This is shown as a function of the pressure ratio across the generator and the top temperature produced by the reactor. "Unaugmented rocket" corresponds to the system shown on Figure 4. The "augmented rocket" corresponds to the system shown on Figure 5. For the latter, the specific impulse shown is a weighted average over the flight velocity from zero up to the satellite velocity, and a range of values is shown corresponding to a range of assumptions about the efficiency of the electric air accelerator, or electric ram jet. ηa / ηt = 1 corresponds to an accelerator efficiency equal to the thermal efficiency of a rocket nozzle, that is about 70%; then ηa / ηt = 0.5 corresponds to an efficiency of about 35%.

     The closed cycle shown in Figure 3 may also be operated either as a pure rocket or as an air augmented rocket, and the performance that would result is shown in Figure 7. The closed cycle would be a good deal heavier than the open one. However, I believe that in sizes corresponding to a thrust level of 100 tons and up, both systems could be made to have a thrust to weight ratio substantially in excess of one.

     In order to get a specific impulse greater than 2000 to 3000 seconds in space it is necessary to consider a system which uses a radiator. This is true whether one is considering a nuclear-MHD scheme or a nuclear reactor working alone. As is well known, a key, if not the key, to making a system of this type with a reasonable thrust to weight ratio is attaining a high heat rejection temperature. In addition, gains of up to a factor of five can be made simply by making the cycle more efficient. Presently conceived space electric power supplies have a temperature limit set by their reactor and conversion device. By using a gas core reactor and an MHD generator there would no longer be a limit on top cycle temperature. Then the compressor and the radiator temperature could rise accordingly to what is now the top cycle temperature. Eventually it should be possible to also make an MHD compressor, and then only the radiator would be a solids limited device.

     However, even with "solids limited compressors" we can do orders of magnitude better than presently conceived electric power systems as is shown on Figure 8. Here power per unit radiator area is displayed vs top cycle temperature for a variety of radiator and generator temperatures. It shows that we ought to be able to do at least a hundred times better than SNAP 8 in terms of power per unit radiator area. Assuming that the weight of all cycle components scales by the same factor, and there is reasonable grounds for supposing that it might, the result would be a propulsion system for interplanetary flight which would be very hard to beat.
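(ed note: the reason heat rejection temperature is "a key, if not the key" is the Stefan-Boltzmann law: a radiator sheds heat in proportion to the fourth power of its temperature. A minimal sketch with an assumed emissivity and illustrative temperatures:)

    # Stefan-Boltzmann: radiated flux = emissivity * sigma * T^4.
    SIGMA = 5.670e-8    # W/(m^2 K^4)
    emissivity = 0.9    # assumed

    for T in (500, 1000, 2000):   # kelvin, illustrative radiator temperatures
        flux = emissivity * SIGMA * T**4
        print(f"{T} K radiator: {flux / 1e3:.0f} kW/m^2")
    # Doubling radiator temperature gives 16 times the heat rejection per unit area.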


CONCLUSION

     Now that Tom Brogan has shown you something of our present work, I would like to make one or two further comments and then summarize.

     First of all I imagine that in the figures that were shown you observed the very massive field coils in our present devices. I would like to assure this audience again that this is not an inevitable feature. First, the devices you have seen were designed for combustion product gases which produce a rather well defined temperature and hence conductivity. Now as we saw earlier, conductivity is an exponential function of temperature, and the size of the generator is pretty much proportional to the gas conductivity. Secondly, very large reductions in the size and weight of the magnet can be made by cryogenic cooling, and most of these propulsion systems would have an abundance of hydrogen available for this purpose.

     Figure 17 illustrates these points. On it coil mass is plotted as a function of the size of the generator in terms of gross megawatts of output. The top curve is for a combustion fired generator in which the coil dissipates 30% of the gross power or 6% if the coil is liquid oxygen cooled. The Mark V generator which Tom Brogan discussed falls slightly below this curve because its dissipation is closer to being 50%. You observe that it is a break-even generator. If it had been made much smaller, all of the copper in the world would not have made it self-excite. The lower curve shows what would happen if the gas conductivity and velocity is increased as it would be in a nuclear system at 2500°C using hydrogen as the working gas. Here the dissipation is 10% at room temperature, or 1% if the coil is cooled enough to produce a factor of 10 increase in conductivity. This could easily be accomplished using liquid hydrogen. In fact, much greater gains should be possible. Observe here that as long as the power level is greater than 10 megawatts the coil will weigh on the order of 1 ton. At power levels on the order of 1000 megawatts and up, this corresponds to an extremely small weight per kilowatt of energy handled.

     In summary then, there is no reason why an MHD generator cannot be made light enough for the kind of high thrust propulsion systems which we have been discussing here.

     Figure 18 attempts to summarize the kind of systems that we think we can build using an MHD generator and advanced reactors on a map of specific impulse vs engine thrust to weight. The curve labeled "gas core propellant cooled" corresponds to systems as illustrated in Figures 3 and 4. The curve labeled "air breathers" corresponds to a system such as that shown on Figure 5, but includes also schemes using a closed as well as an open cycle. The vertical lines labeled "radiators" correspond to systems such as were discussed in connection with Figure 8. This figure gives the impression that for boosting off the surface of the earth, or any other body, air (or "atmosphere") breathers are hard to beat, and that for interplanetary flight into space, radiating systems are hard to beat if you can get up to power to weight ratios equal to or exceeding 1 kilowatt per kilogram. However, the main point that I want to make with this curve is just that by combining an MHD generator with advanced high temperature reactors, we can make propulsion systems whose performance is comparable to what you can hope to get in any other way. In particular they are comparable to, or perhaps better than, what you could hope to get with a gas core reactor alone…and you do not have to solve the fuel containment problem in order to get it!

From NUCLEAR PROPULSION SCHEMES USING MAGNETOHYDRODYNAMIC CONVERSION TECHNIQUES
in PROCEEDINGS OF AN ADVANCED NUCLEAR PROPULSION SYMPOSIUM page 286 (1965)

Nuclear Piston Engine

This is a weird one. It is analogous to an automobile internal combustion engine, except using uranium fission instead of burning gasoline. Thematically it is sort of a cross between steampunk and atompunk.

THE NUCLEAR PISTON ENGINE

PREFACE

     The fundamental objective of this work has been to gain an insight into the basic power producing and operational characteristics of the nuclear piston engine, a concept which involves a type of pulsed, quasi-steady-state gaseous core nuclear reactor. The studies have consisted primarily of neutronic and energetic analyses supplemented by some reasonably detailed thermodynamic studies and also by some heat transfer and fluid mechanics calculations.
     This work is not to be construed as being a complete exposé of the nuclear piston engine's complex neutronic and energetic behavior. Nor are the proposed power producing systems to be interpreted as being the ultimate or optimum conditions or configurations. This dissertation is rather a beginning or a foundation for future pulsed, gaseous core reactor studies.
     Nuclear piston engines operating on gaseous fissionable fuel should be capable of providing economically and energetically attractive power generating units.
     A fissionable gas-fueled engine has many of the advantages associated with solid-fueled nuclear reactors but fewer safety and economical limitations. The capital cost per unit power installed (dollars/kwe) should not spiral for small gas-fueled plants to the extent that it does for solid fueled plants. The fuel fabrication (fuel and cladding, spacer grids, etc.) is essentially eliminated; the engineering safeguards and emergency core cooling requirements are reduced significantly.
     As a circulating fuel reactor, the nuclear piston engine's quasi-steady-state power level is capable of being controlled not only by variations in the neutron multiplication factor but also by changes in the loop circulation time. It is shown that such adjustments affect the delayed and photoneutron feedback into the reactor and hence provide an efficient means for controlling the reactor power level.
     The results of the conducted investigations indicate good performance potential for the nuclear piston engine with overall efficiencies of as high as 50% for nuclear piston engine power generating units of from 10 to 50 Mw(e) capacity. Larger plants can be conceptually designed by increasing the number of pistons, with the mechanical complexity and physical size as the probable limiting factors.
The primary uses for such power systems would be for small mobile and fixed ground-based power generation (especially for peaking units for electrical utilities) and also for nautical propulsion and ship power.

CHAPTER I: INTRODUCTION

Description of Engine Operation

     The investigated nuclear piston engines consist of a pulsed, gaseous core reactor enclosed by a moderating-reflecting cylinder and piston assembly and operate on a thermodynamic cycle similar to the internal combustion engine. The primary working fluid is a mixture of uranium hexafluoride, UF6, and helium, He, gases. Highly enriched UF6 gas is the reactor fuel. The helium is added to enhance the thermodynamic and heat transfer characteristics of the primary working fluid and also to provide a neutron flux flattening effect in the cylindrical core.

     Two-and four-stroke engines have been studied in which a neutron source is the counterpart of the sparkplug in the internal combustion engine. The piston motions which have been investigated include pure simple harmonic, simple harmonic with dwell periods, and simple harmonic in combination with non-simple harmonic motion.

     Neutronically, the core goes from the subcritical state, through criticality and to the supercritical state during the (intake and) compression stroke(s). Supercriticality is reached before the piston reaches top dead center (TDC), so that the neutron flux can build up to an adequate level to release the required energy as the piston passes TDC.

     The energy released by the fissioning gas can be extracted both as mechanical power and as heat from the circulating gas. External equipment is used to remove fission products, cool the gas, and recycle it back to the piston engine. Mechanical power can be directly taken by means of a conventional crankshaft operating at low speeds.

     To utilize the significant amount of available energy in the hot gas, an external heat removal loop can be designed. The high temperature (~1200 to 1600°K) HeUF6 exhaust gas can be cooled in an HeUF6-to-He heat exchanger. The heated He (~1000°K to 1400°K) is then passed either directly through gas turbines or is used in a steam generator to produce steam to drive a turbine.

     The total mechanical plus turbine power per nuclear piston or per cylinder ranges from around 3 to 7 Mw(e) depending on the selected piston engine operating characteristics and the external turbine equipment arrangement. Thus, power generating units of from 10 to 50 Mw(e) capacities would consist of a cluster of 4 to 8 pistons in a nuclear piston engine block. Larger power plants can be conceptually designed by increasing the number of pistons with the mechanical complexity and physical size as the probable limiting factors. Overall efficiencies are as high as 50%, implying heat rates of around 6800 BTU/kw-hr. Fuel costs are presently estimated as being below $0.20 per million BTU or around 1.4 mills/kwe-hr (based on fiscal year 1974 costs).
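(ed note: the quoted heat rate is just the unit conversion of 50% efficiency, since one kilowatt-hour is 3,412 BTU:)

    # Heat rate (BTU of reactor heat per kWh of electricity) from efficiency.
    BTU_PER_KWH = 3412.14

    def heat_rate(efficiency):
        return BTU_PER_KWH / efficiency

    print(f"{heat_rate(0.50):.0f} BTU/kw-hr at 50% efficiency")   # ~6824, i.e. "around 6800"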

Applications and Highlights of the Nuclear Piston Engine Concept

     Already-developed nuclear reactor concepts like pressurized water reactors (PWRs), boiling water reactors (BWRs), and high temperature gas-cooled reactors (HTGRs) can be economically competitive only when they are incorporated into large capacity power systems. Given the fuel cycle costs and operation and maintenance costs for these reactor concepts, it is their high capital costs which economically prevent them from being used on a scaled-down basis for 20-50-100 Mw(e) units. The cost per unit power installed (dollars/kwe) for scaled-down units operating on these already-developed solid-fueled core concepts would be extremely high.

     A nuclear piston engine power plant, however, will not require the sophisticated and costly engineered safeguards and auxiliary systems associated with the solid-fueled cores of current large capacity nuclear power plants. The inherent safety of an expanding gaseous fuel can be engineered to take the place of many of the functions of the safeguards systems. Hence, while gaseous core, nuclear piston engine power plants would possess relatively high costs per unit power installed as compared to comparably sized fossil-fueled units, their capital costs per unit power installed would be considerably less than for any scaled-down nuclear units operating on current solid-fueled core concepts.

     In addition to decreased capital costs, the nuclear piston engine should possess fuel cycle costs which are about half the fuel cycle costs of most present large capacity nuclear plants. Fuel fabrication costs, transportation costs to and from the fabricator, and transportation costs to and from the reprocessor will all be eliminated. These costs typically comprise from 40 to 50% of the current nuclear fuel cycle costs (based on fiscal year 1974 costs).

     Thus, it would appear as if power production costs for a nuclear piston engine will not only be less than those of conventionally fueled peaking units, but that they should also approach the power production costs of large-scale fossil and large-scale nuclear-fueled plants.

Fusion Reactors

A fusion reactor would produce energy from thermonuclear fusion instead of nuclear fission. Unfortunately scientists have yet to create a fusion reactor that can reach the "break-even" point (where it actually produces more energy than it consumes), so it is anybody's guess what the value for alpha will be.

The two main approaches are magnetic confinement and inertial confinement. The third method, gravitational confinement, is only found in the cores of stars and among civilizations that have mastered gravidic technology. The current wild card is the Polywell device which is a type of inertial electrostatic confinement fusion generator.

Fusion is even more efficient than fission. You need to burn 0.01 grams of fission fuel per second to generate 1000 megawatts. The most promising fusion fuels start at around 0.01 grams per second and can get as low as 0.001 grams per second. You can find more details here.
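Those burn rates follow directly from the specific energy of the fuels. A sketch using textbook values (roughly 82 TJ/kg for uranium fission and 340 TJ/kg for D-T fusion):

    # Fuel burned per second: mass flow = power / specific energy.
    energy_j_per_kg = {
        "U235 fission": 8.2e13,   # J/kg, ~200 MeV per fission event
        "D-T fusion":   3.4e14,   # J/kg
    }
    power = 1000e6   # 1000 megawatts

    for fuel, e in energy_j_per_kg.items():
        print(f"{fuel}: {power / e * 1000:.4f} g/s")
    # U235: ~0.012 g/s; D-T: ~0.003 g/s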

In science fiction, a fusion reactor is commonly called a "fusactor".

FUSION CONTAINMENT

There are five general methods for confining plasmas long enough and hot enough to achieve a positive Q (more energy out of a reaction than you need to ignite it: "break even").

From HIGH FRONTIER by Philip Eklund
WHY FUSION IS LIKE SPACE SETTLEMENT

Recently I tweeted the following remark: “It is worthwhile to reflect on how difficult an engineering challenge that fusion has proved to be. We have the existence proof of gravitational confinement fusion, but inertial and magnetic confinement remain just out of reach.” This was in response to the article 50 years on, nuclear fusion still hasn’t delivered clean energy by Maria Temming. I follow fusion research with some interest, and indeed in One Hundred Years of Fusion I discussed some of the many fusion projects being conducted around the world today.

Fusion has proved to be a devilishly difficult engineering challenge, though the basic idea is very simple – one so simple that even a child can understand: if you force together simple atoms they can merge into a more complex atom, and when they do so they will release a lot of energy. Therefore, if we can force together simple atoms and capture the energy they release, we have a power source. Even more impressively, this is the energy source that powers the entire universe, so we know that it works, and that it works reliably over cosmological scales of time. The problem is in scaling it down, i.e., doing fusion at a human scale. Perhaps that is the ultimate conceit: trying to do things at a human scale that work best on a non-human scale.

It is worth noting that frustration with our inability to generate more than break even energy from fusion (we can accomplish fusion on a human scale, but it takes us more energy to do so than we can derive from the fusion event) is another artifact of viewing things on a human scale. The fact that we know about fusion is a result of the wild success of science over the past couple of centuries. We didn’t always know so well what we were missing out on. When the industrial revolution was getting underway, no one was wringing their hands over our inability to rapidly build a global telecommunications network using electricity, because electricity was just too difficult to master.

Volta was already experimenting with electricity when James Watt built his steam engine. Clearly, the future was with steam power, its applications and its limitations, because steam power pulled ahead of electricity so rapidly and decisively. Were not the first trains and the first ships without oars or sails powered by steam rather than by static electricity? Would electricity have any future at all in a steam-powered world?

As it turned out, the nineteenth century largely belonged to steam power. It was not until the first decades of the twentieth century that manufacturing facilities were fully electrified, and despite this electrification it could be argued that the twentieth century belonged to the internal combustion engine as much as the nineteenth century belonged to the steam engine. Today we know what a steam-powered world would look like only through fiction – this is the “steampunk” world in which electricity never quite caught on.

It is a venerable trope of futurism and forecasting that, as hard as we try to understand what comes next, we tend only to magnify the world that we have today. We think of bigger trains or faster cars in the same way that Vikings imagined that Odin’s horse, Sleipnir, had eight legs, and so could run much faster than any four-legged horse. Steampunk was the magnification of the industrial revolution powered by steam. And while steam is still used today, it is used to turn dynamos in order to generate electricity.

If technological civilization continues in its development, we will certainly master fusion electrical generation, and the century or more it takes to do so will be something like the nineteenth century for electricity – a time of research, experimentation, the growth of scientific knowledge, and eventually a flowering in technological applications. Just because it takes a technology one or two centuries to get to market does not mean that that technology isn’t viable in the market, but our time horizons are human, all-too-human, and so the growth of fusion science and technology seems very slow, and it is an easy target for cynical types to ridicule. But we aren’t cynical; we understand that a technological and economic sea change is going to be a long time in the making.

All so it is, and so it will be, with space settlement. Everything that I have written above could be given a parallel formulation for space settlement. We have been experimenting with space science for seventy years, but, as important as space technologies have been in the second half of the twentieth century, they did not dominate. The twentieth century did not belong to space technology, although great initial milestones were achieved. Because of our familiarity with the rapid growth of science and technology in some sectors of our civilization, we expect a uniform advance of technology across all sectors of the economy, but it doesn’t work that way. And when things don’t work according to our expectations, people get cynical and it becomes fashionable to deny even the possibility that some science or technology will eventually come to maturity. Space settlement will be a devilishly difficult engineering challenge, but that does not mean it cannot be done.

It would have been very easy in the middle of the nineteenth century to dismiss electricity as a scientific curiosity, under development for decades without any of the promising practical applications that were then powering the industrial revolution. In hindsight we can understand the error, we just need to learn to also appreciate the error in foresight when we get caught up in the human, all-too-human scales of time that make us dismiss future possibilities in favor of linear extrapolation of the familiar.

Lattice Confinement Fusion

Lattice Confinement Fusion is a theoretical way of creating fusion inside a metal lattice doped with deuterium. No, it ain't cold fusion, not even close. And not just because the majority of scientists find the evidence for cold fusion to be about as convincing as data from the Flat Earth Society. Cold fusion features two electrodes in some heavy water, all quiet like. Lattice confinement fusion has deuterium-loaded erbium or titanium savagely bombarded with x-rays from an electron particle accelerator.

As a power source, it is probably more like a strong RTG than anything else.

LATTICE CONFINEMENT FUSION 1

NASA researchers demonstrate the ability to fuse atoms inside room-temperature metals

      Nuclear fusion is hard to do. It requires extremely high densities and pressures to force the nuclei of elements like hydrogen and helium to overcome their natural inclination to repel each other. On Earth, fusion experiments typically require large, expensive equipment to pull off.
     But researchers at NASA’s Glenn Research Center have now demonstrated a method of inducing nuclear fusion without building a massive stellarator or tokamak. In fact, all they needed was a bit of metal, some hydrogen, and an electron accelerator.
     The team believes that their method, called lattice confinement fusion, could be a potential new power source for deep space missions. They have published their results in two papers in Physical Review C.
     “Lattice confinement” refers to the lattice structure formed by the atoms making up a piece of solid metal. The NASA group used samples of erbium and titanium for their experiments. Under high pressure, a sample was “loaded” with deuterium gas (D + D fusion fuel), an isotope of hydrogen with one proton and one neutron. The metal confines the deuterium nuclei, called deuterons, until it’s time for fusion.
     “During the loading process, the metal lattice starts breaking apart in order to hold the deuterium gas,” says Theresa Benyo, an analytical physicist and nuclear diagnostics lead on the project. “The result is more like a powder.” At that point, the metal is ready for the next step: overcoming the mutual electrostatic repulsion between the positively-charged deuteron nuclei, the so-called Coulomb barrier.

     To overcome that barrier requires a sequence of particle collisions. First, an electron accelerator speeds up and slams electrons into a nearby target made of tungsten. The collision between beam and target creates high-energy photons, just like in a conventional X-ray machine. The photons are focused and directed into the deuteron-loaded erbium or titanium sample. When a photon hits a deuteron within the metal, it splits it apart into an energetic proton and neutron. Then the neutron collides with another deuteron, accelerating it.
     At the end of this process of collisions and interactions, you’re left with a deuteron that’s moving with enough energy to overcome the Coulomb barrier and fuse with another deuteron in the lattice.
     Key to this process is an effect called electron screening, or the shielding effect. Even with very energetic deuterons hurtling around, the Coulomb barrier can still be enough to prevent fusion. But the lattice helps again. “The electrons in the metal lattice form a screen around the stationary deuteron,” says Benyo. The electrons’ negative charge shields the energetic deuteron from the repulsive effects of the target deuteron’s positive charge until the nuclei are very close, maximizing the amount of energy that can be used to fuse.
     Aside from deuteron-deuteron fusion, the NASA group found evidence of what are known as Oppenheimer-Phillips stripping reactions. Sometimes, rather than fusing with another deuteron, the energetic deuteron would collide with one of the lattice's metal atoms, either creating an isotope or converting the atom to a new element. The team found that both fusion and stripping reactions produced useable energy.

     “What we did was not cold fusion,” says Lawrence Forsley, a senior lead experimental physicist for the project. Cold fusion, the idea that fusion can occur at relatively low energies in room-temperature materials, is viewed with skepticism by the vast majority of physicists. Forsley stresses this is hot fusion, but “We’ve come up with a new way of driving it.”
     “Lattice confinement fusion initially has lower temperatures and pressures” than something like a tokamak, says Benyo. But “where the actual deuteron-deuteron fusion takes place is in these very hot, energetic locations.” Benyo says that when she would handle samples after an experiment, they were very warm. That warmth is partially from the fusion, but the energetic photons initiating the process also contribute heat.
     There’s still plenty of research to be done by the NASA team. Now that they’ve demonstrated nuclear fusion, the next step is to create reactions that are more efficient and more numerous. When two deuterons fuse, they create either a proton and tritium (a hydrogen atom with two neutrons), or helium-3 and a neutron. In the latter case, that extra neutron can start the process over again, allowing two more deuterons to fuse. The team plans to experiment with ways to coax more consistent and sustained reactions in the metal.
     Benyo says that the ultimate goal is still to be able to power a deep-space mission with lattice confinement fusion. Power, space, and weight are all at a premium on a spacecraft, and this method of fusion offers a potentially reliable source for craft operating in places where solar panels may not be useable, for example. And of course, what works in space could be used on Earth.

LATTICE CONFINEMENT FUSION 2

NASA Detects Lattice Confinement Fusion

A team of NASA researchers seeking a new energy source for deep-space exploration missions recently revealed a method for triggering nuclear fusion in the space between the atoms of a metal solid.

Their research was published in two peer-reviewed papers in the top journal in the field, Physical Review C, Volume 101 (April, 2020): “Nuclear fusion reactions in deuterated metals” and “Novel nuclear reactions observed in bremsstrahlung-irradiated deuterated metals.”

Nuclear fusion is a process that produces energy when two nuclei join to form a heavier nucleus. “Scientists are interested in fusion, because it could generate enormous amounts of energy without creating long-lasting radioactive byproducts,” said Theresa Benyo, Ph.D., of NASA’s Glenn Research Center. “However, conventional fusion reactions are difficult to achieve and sustain because they rely on temperatures so extreme to overcome the strong electrostatic repulsion between positively charged nuclei that the process has been impractical.”

Called Lattice Confinement Fusion, the method NASA revealed accomplishes fusion reactions with the fuel (deuterium, a widely available non-radioactive hydrogen isotope composed of a proton, neutron, and electron, and denoted “D”) confined in the space between the atoms of a metal solid. In previous fusion research such as inertial confinement fusion, fuel (such as deuterium/tritium) is compressed to extremely high levels but for only a short, nano-second period of time, when fusion can occur. In magnetic confinement fusion, the fuel is heated in a plasma to temperatures much higher than those at the center of the Sun. In the new method, conditions sufficient for fusion are created in the confines of the metal lattice that is held at ambient temperature. While the metal lattice, loaded with deuterium fuel, may initially appear to be at room temperature, the new method creates an energetic environment inside the lattice where individual atoms achieve equivalent fusion-level kinetic energies.

A metal such as erbium is “deuterated” or loaded with deuterium atoms, “deuterons,” packing the fuel a billion times denser than in magnetic confinement (tokamak) fusion reactors. In the new method, a neutron source “heats” or accelerates deuterons sufficiently such that when colliding with a neighboring deuteron it causes D-D fusion reactions. In the current experiments, the neutrons were created through photodissociation of deuterons via exposure to 2.9+MeV gamma (energetic X-ray) beam. Upon irradiation, some of the fuel deuterons dissociate resulting in both the needed energetic neutrons and protons. In addition to measuring fusion reaction neutrons, the Glenn Team also observed the production of even more energetic neutrons which is evidence of boosted fusion reactions or screened Oppenheimer-Phillips (O-P) nuclear stripping reactions with the metal lattice atoms. Either reaction opens a path to process scaling.
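(ed note: the 2.9 MeV beam energy is not arbitrary. Splitting a deuteron costs its binding energy, about 2.22 MeV, and the surplus becomes the kinetic energy that drives the collision cascade:)

    # Photodissociation energy budget for the deuteron.
    DEUTERON_BINDING_MEV = 2.224   # well-established nuclear datum
    photon_mev = 2.9               # beam energy quoted above

    surplus = photon_mev - DEUTERON_BINDING_MEV
    print(f"{surplus:.2f} MeV left over as neutron + proton kinetic energy")   # ~0.68 MeV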


Illustration of the main elements of the lattice confinement fusion process observed

     In Part (A), a lattice of erbium is loaded with deuterium atoms (i.e., erbium deuteride), which exist here as deuterons. Upon irradiation with a photon beam, a deuteron dissociates, and the neutron and proton are ejected. The ejected neutron collides with another deuteron, accelerating it as an energetic “d*” as seen in (B) and (D). The “d*” induces either screened fusion (C) or screened Oppenheimer-Phillips (O-P) stripping reactions (E).

     In (C), the energetic “d*” collides with a static deuteron “d” in the lattice, and they fuse together. This fusion reaction releases either a neutron and helium-3 (shown) or a proton and tritium. These fusion products may also react in subsequent nuclear reactions, releasing more energy.

     In (E), a proton is stripped from an energetic “d*” and is captured by an erbium (Er) atom, which is then converted to a different element, thulium (Tm). If the neutron instead is captured by Er, a new isotope of Er is formed (not shown).

     More details are in this paper


A novel feature of the new process is the critical role played by metal lattice electrons whose negative charges help “screen” the positively charged deuterons. Such screening allows adjacent fuel nuclei to approach one another more closely, reducing the chance they simply scatter off one another, and increasing the likelihood that they tunnel through the electrostatic barrier promoting fusion. This is according to the theory developed by the project’s theoretical physicist, Vladimir Pines, Ph.D, of PineSci.

“The current findings open a new path for initiating fusion reactions for further study within the scientific community. However, the reaction rates need to be increased substantially to achieve appreciable power levels, which may be possible utilizing various reaction multiplication methods under consideration,” said Glenn’s Bruce Steinetz, Ph.D., the NASA project principal investigator.

“The key to this discovery has been the talented, multi-disciplinary team that NASA Glenn assembled to investigate temperature anomalies and material transmutations that had been observed with highly deuterated metals,” said Leonard Dudzinski, Chief Technologist for Planetary Science, who supported the research. “We will need that approach to solve significant engineering challenges before a practical application can be designed.”

With more study and development, future applications could include power systems for long-duration space exploration missions or in-space propulsion. It also could be used on Earth for electrical power or creating medical isotopes for nuclear medicine.

LATTICE CONFINEMENT FUSION PROBLEM 1

      But as my old mucker Craig Buckley (28 years experience in the Hydrogen Storage research field) pointed out to me all those years ago at Salford, even at high pressure there simply aren't that many deuterons per metal atom.

     So the deuterons, which are very small, are nowhere near one another and will reside in the interatomic voids. If you want them to fuse you have to do something to smash them together. They are not stuffed together in the alloy so can't fuse because they are 10⁻¹⁰ m apart.

tweet by Paul M. Cray (2020)
LATTICE CONFINEMENT FUSION PROBLEM 2

      Any reaction that needs solid state materials is at best a fancy RTG. It’s not a Propulsion system power source for anything other than slow electric drives.

tweet by Adam Crowl (2020)

Exotic power sources

There are all sorts of exotic power sources. Some are reasonably theoretically possible, others are more fringe science. None of them currently exist, and some never will.

      There is nothing, never has been anything, quite like a busy spaceport on the outskirts of a capital city of a populous planet. There are the huge machines resting mightily in their cradles. If you choose your time properly, there is the impressive sight of the sinking giant dropping to rest or, more hair-raising still, the swiftening departure of a bubble of steel. All processes involved are nearly noiseless. The motive power is the silent surge of nucleons shifting into more compact arrangements.

From SECOND FOUNDATION by Isaac Asimov (1953)

Beamed Power

This is where the spacecraft receives its power not from an on-board generator but instead from a laser or maser beam sent from a remote space station. This is a popular option for spacecraft using propulsion systems that require lots of electricity but have low thrusts.

For instance, an ion drive has great specific impulse and exhaust velocity, but very low thrust. If the spacecraft has to power the ion drive using a heavy nuclear reactor with lead radiation shielding, the mass of the spacecraft will increase to the point where its acceleration could be beaten by a drugged snail. But with beamed power the power generator adds zero mass to the spacecraft, since the heavy generator is on the remote station instead of onboard and laser photons weigh nothing.

The drawbacks include the decrease in beam power with distance due to diffraction, and the fact that the spacecraft is at the mercy of whoever is running the remote power station. Also, maneuvers must be carefully coordinated with the remote station, or it will have difficulty keeping the beam aimed at the ship.
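The diffraction loss can be estimated with the standard beam-spread relation: the spot diameter grows as roughly 2.44λd/D for wavelength λ, range d, and transmitter aperture D. A sketch with assumed hardware (a 1.06 μm laser and a 10 meter mirror, neither from any particular proposal):

    # Diffraction-limited spot size of a power beam.
    wavelength = 1.06e-6   # m, near-infrared laser (assumed)
    aperture   = 10.0      # m, transmitter mirror diameter (assumed)

    for d_km in (1e3, 1e5, 1e7):   # 1,000 km out to 10 million km
        spot = 2.44 * wavelength * (d_km * 1e3) / aperture
        print(f"{d_km:>12,.0f} km: spot diameter {spot:,.1f} m")
    # Once the spot outgrows the collector, received power falls as 1/d^2.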

The other drawback is the laser beam is also a strategic weapons-grade laser. The astromilitary (if any) take a very dim view of weapons-grade laser cannon in the hands of civilians. The beamed power equipment may be under the close (armed) supervision of the Laser Guard.

Antimatter Power

Any Star Trek fan knows that the Starship Enterprise runs on antimatter. The old term is "contra-terrene", "C-T", or "Seetee". At 100% of the matter-antimatter mass converted into energy, it would seem to be the ultimate power source. The operative word in this case is "seem".

What is not as well known is that unless the situation is non-standard, antimatter is not a fuel. It is an energy transport mechanism. Unless there exist "antimatter mines", antimatter is an energy transport mechanism, not a fuel. In Star Trek, I believe they found drifts of antimatter in deep space. An antimatter source was also featured in the Sten series. In real life, astronomers haven't seen many matter-antimatter explosions. Well, they've seen a few 511 keV gamma rays (the signature of electron-positron antimatter annihilation), but they've all been from thousands of light years away and most seem to be associated with large black holes. If they are antimatter mines, they are most inconveniently located. In Jack Williamson's novels Seetee Ship and Seetee Shock there exist commercially useful chunks of antimatter in the asteroid belt. However, if this was actually true, I think astronomers would have noticed all the antimatter explosions detonating in the belt by now.

And antimatter is a very inefficient energy transport mechanism. Current particle accelerators have an abysmal 0.000002% efficiency in converting electricity into antimatter (I don't care what you saw in the movie Angels and Demons). The late Dr. Robert Forward said this is because nuclear physicists are not engineers; an engineer might manage to increase the efficiency to something approaching 0.01% (one one-hundredth of one percent). Which is still pretty lousy: it means for every megawatt of electricity you pump into the antimatter-maker you would only obtain enough antimatter to create a mere 100 pathetic watts. The theoretical maximum is 50% due to the pesky Law of Baryon Number Conservation (which demands that when turning energy into matter, equal amounts of matter and antimatter must be created).
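To see how punishing that is, run the numbers on Forward's hoped-for 0.01%. A sketch; whether the efficiency is booked against the antimatter's rest mass or the full annihilation yield is a bookkeeping assumption noted in the comments:

    # Electricity-to-antimatter at an assumed 0.01% conversion efficiency.
    input_power = 1e6     # watts of electricity into the antimatter factory
    efficiency  = 1e-4    # Forward's optimistic 0.01%
    C    = 2.998e8        # m/s
    YEAR = 3.156e7        # seconds

    stored_power = input_power * efficiency   # 100 W of annihilation energy
    # Annihilation releases 2*m*c^2 (antimatter plus an equal mass of ordinary
    # matter), so the antimatter mass produced per year is:
    antimatter_kg = stored_power * YEAR / (2 * C**2)
    print(f"{antimatter_kg * 1e9:.0f} micrograms of antimatter per megawatt-year")   # ~18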

In Charles Pellegrino and George Zebrowski's novel The Killing Star they deal with this by having the Earth government plate the entire equatorial surface of the planet Mercury with solar power arrays, generating enough energy to produce a few kilograms of antimatter a year. They do this with von Neumann machines, of course.

Of course the other major draw-back is the difficulty of carrying the blasted stuff. If it comes into contact with the matter walls of the fuel tank the resulting explosion will make a nuclear detonation seem like a wet fire-cracker. Researchers are still working on a practical method of containment. In Michael McCollum's novel Thunder Strike! antimatter is transported in torus-shaped magnetic traps, it is used to alter the orbits of asteroids ("torus" is a fancy word for "donut").

Converting the energy from antimatter annihilation into electricity is also not very easy.

The electrons and positrons mutually annihilate into gamma rays. However, since an electron has 1/1836 the mass of a proton, and since matter usually contains about 2.5 protons or other nucleons for each electron, the energy contribution from electron-positron annihilation is negligible.

For every five proton-antiproton annihilations, two neutral pions are produced and three charged pions are produced (that is, 40% neutral pions and 60% charged pions). The neutral pions almost immediately decay into gamma rays. The charged pions (with about 94% the speed of light) will travel 21 meters before decaying into muons. The muons will then travel an additional two kilometers before decaying into electrons and positrons.
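Those path lengths are straight relativistic time dilation: a particle's lab-frame decay length is L = γvτ. A sketch using the charged pion's measured rest-frame lifetime:

    import math

    # Lab-frame decay length of a relativistic particle: L = gamma * v * tau.
    C = 2.998e8   # m/s

    def decay_length(beta, tau):
        gamma = 1.0 / math.sqrt(1.0 - beta**2)
        return gamma * beta * C * tau

    pion_tau = 2.6e-8   # s, charged pion mean lifetime at rest
    print(f"{decay_length(0.94, pion_tau):.0f} m")   # ~21 m, matching the figure above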

This means your power converter needs a component that will transform gamma rays into electricity, and a second component that has to attempt to extract the kinetic energy out of the charged pions and convert that into electricity. The bottom line is that there is no way you are going to get 100% of the annihilation energy converted into electricity. Exactly what percentage is likely achievable is a question above my pay grade.

The main virtue of antimatter power is that it is incredibly concentrated, which drastically reduces the mass of antimatter fuel required for a given application. And mass is always a problem in spacecraft design, so any way of reducing it is welcome.

The man known as magic9mushroom drew my attention to the fact that Dr. James Bickford has identified a sort of antimatter mine where antimatter can be collected by magnetic scoops (be sure to read the comment section), but the amounts are exceedingly small. He foresees using tiny amounts of antimatter for applications such as catalyzing sub-critical nuclear reactions, instead of just using raw antimatter for fuel. His report is here.

Dr. Bickford noted that high-energy galactic cosmic rays (GCR) create antimatter via "pair production" when they impact the upper atmospheres of planets or the interstellar medium. Planets with strong magnetic fields enhance antimatter production. One would think that Jupiter would be the best at producing antimatter, but alas its field is so strong that it prevents GCR from impacting the Jovian atmosphere at all. As it turns out, the planet with the most intense antimatter belt is Earth, while the planet with the most total antimatter in their belt is Saturn (mostly due to the rings). Saturn receives almost 250 micrograms of antimatter a year from the ring system. Please note that this is a renewable resource.

Dr. Bickford calculates that the plasma magnet scoop can collect antimatter about five orders of magnitude more cost-effectively than generating the stuff with particle accelerators.

Keep in mind that the quantities are very small. Around Earth the described system will collect about 25 nanograms per day, and can store up to 110 nanograms. That has about the same energy content as half a fluid ounce of gasoline, which ain't much. However, such tiny amounts of antimatter can catalyze tremendous amounts of energy from sub-critical fissionable fuel, which would give you the power of nuclear fission without requiring an entire wastefully massive nuclear reactor. Alternatively, one can harness the power of nuclear fusion with Antimatter-Catalyzed Micro-Fission/Fusion or Antimatter-Initiated Microfusion. Dr. Bickford describes a mission where an unmanned probe orbits Earth long enough to gather enough antimatter to travel to Saturn. There it can gather a larger amount of antimatter, and embark on a probe mission to the outer planets.

Vacuum energy

Vacuum energy or zero-point energy is one of those pie-in-the-sky concepts that sounds too good to be true, and is based on the weirdness of quantum mechanics. The zero-point energy is the lowest energy state of any quantum mechanical system, but because quantum systems are fond of being deliberately annoying, their actual energy level fluctuates above the zero-point. Vacuum energy is the zero-point energy of all the fields of space.

Naturally quite a few people wondered if there was a way to harvest all this free energy.

The only method suggested so far was proposed by the late Dr. Robert Forward, the science fiction writer's friend (hard-SF writers would do well to pick up a copy of Forward's Indistinguishable From Magic). His paper is Extracting Electrical Energy From the Vacuum by Cohesion of Charged Foliated Conductors, and can be read here.

How much energy are we talking about? Nobody knows. Estimates based on the upper limit of the cosmological constant put it at a pathetic 10⁻⁹ joules per cubic meter (about 1/10th the energy of a single cosmic-ray photon). On the other tentacle, estimates based on Lorentz covariance and the magnitude of the Planck constant put it at a jaw-dropping 10¹¹³ joules per cubic meter (about 3 quintillion-septillion times more energy than the Big Bang). A range between 10⁻⁹ and 10¹¹³ is another way of saying "nobody knows, especially if they tell you they know".

Vacuum energy was used in All the Colors of the Vacuum by Charles Sheffield, Encounter with Tiber by Buzz Aldrin and John Barnes, and The Songs of Distant Earth by Sir Arthur C. Clarke.

Arguably the Grand Unified Theory (GUT) drives and GUTships in Stephen Baxter's Xeelee novels are also a species of vacuum energy power sources.

CASIMIR BATTERIES AND ENGINES

Casimir batteries and engines

A common assumption is that the Casimir force is of little practical use; the argument is made that the only way to actually gain energy from the two plates is to allow them to come together (getting them apart again would then require more energy), and therefore it is a one-use-only tiny force in nature. In 1984 Robert Forward published work showing how a "vacuum-fluctuation battery" could be constructed. The battery can be recharged by making the electrical forces slightly stronger than the Casimir force, to re-expand the plates. (so it is more of an advanced capacitor or rechargeable battery than it is a power source)

In 1995 and 1998 Maclay et al. published the first models of a microelectromechanical system (MEMS) with Casimir forces. While not exploiting the Casimir force for useful work, the papers drew attention from the MEMS community due to the revelation that the Casimir effect needs to be considered as a vital factor in the future design of MEMS. In particular, the Casimir effect might be the critical factor in the stiction failure of MEMS.

In 1999 Pinto, a former scientist at NASA's Jet Propulsion Laboratory at Caltech in Pasadena, published in Physical Review his Gedankenexperiment for a "Casimir engine". The paper showed that continuous positive net exchange of energy from the Casimir effect was possible, even stating in the abstract "In the event of no other alternative explanations, one should conclude that major technological advances in the area of endless, by-product free-energy production could be achieved." In 2001 Capasso et al. showed how the force can be used to control the mechanical motion of a MEMS device. The researchers suspended a polysilicon plate from a torsional rod – a twisting horizontal bar just a few microns in diameter. When they brought a metallized sphere close to the plate, the attractive Casimir force between the two objects made the plate rotate. They also studied the dynamical behaviour of the MEMS device by making the plate oscillate. The Casimir force reduced the rate of oscillation and led to nonlinear phenomena, such as hysteresis and bistability in the frequency response of the oscillator. According to the team, the system's behaviour agreed well with theoretical calculations.

Despite this and several similar peer reviewed papers, there is not a consensus as to whether such devices can produce a continuous output of work. Garret Moddel at the University of Colorado has highlighted that he believes such devices hinge on the assumption that the Casimir force is a nonconservative force; he argues that there is sufficient evidence (e.g., analysis by Scandurra (2001)) to say that the Casimir effect is a conservative force, and therefore even though such an engine can exploit the Casimir force for useful work, it cannot produce more output energy than has been input into the system.

In 2008 DARPA solicited research proposals in the area of Casimir Effect Enhancement (CEE). The goal of the program is to develop new methods to control and manipulate attractive and repulsive forces at surfaces based on engineering of the Casimir Force.

A 2008 patent by Haisch and Moddel details a device that is able to extract power from zero-point fluctuations using a gas that circulates through a Casimir cavity. As gas atoms circulate around the system they enter the cavity. Upon entering, the electrons spin down to release energy via electromagnetic radiation. This radiation is then extracted by an absorber. On exiting the cavity the ambient vacuum fluctuations (i.e., the zero-point field) impart energy to the electrons to return the orbitals to previous energy levels, as predicted by Senitzky (1960). The gas then goes through a pump and flows through the system again. A published test of this concept by Moddel was performed in 2012 and seemed to give excess energy that could not be attributed to another source. However it has not been conclusively shown to be from zero-point energy and the theory requires further investigation.

(ed note: see original article for links to references)

From the Wikipedia entry for ZERO-POINT ENERGY
VACUUM ENERGY 1

8.19 The vacuum energy drive

     The most powerful theories in physics today are quantum theory and the theories of special and general relativity. Unfortunately, those theories are not totally consistent with each other. If we calculate the energy associated with an absence of matter—the "vacuum state"—we do not, as common sense would suggest, get zero. Instead, quantum theory assigns a specific energy value to a vacuum.

     In classical thinking, one could argue that the zero point of energy is arbitrary, so we could simply start measuring energies from the vacuum energy value. However, if we accept general relativity that option is denied to us. Energy, of any form, produces spacetime curvature, and we are therefore not allowed to redefine the origin of the energy scale. Once this is accepted, the energy of the vacuum cannot be talked out of existence. It is real, and when we calculate it we get a large positive value per unit volume.

     How large?

     Richard Feynman addressed the question of the vacuum energy value and computed an estimate for the equivalent mass per unit volume. The estimate came out as two billion tons per cubic centimeter. The energy in two billion tons of matter is more than enough to boil all Earth's oceans.

     Is there any possibility that the vacuum energy could be tapped for useful purposes? Robert Forward has proposed a mechanism, based upon a real physical phenomenon known as the Casimir Effect. I think it would work, but the energy produced is small. The well-publicized mechanisms of others, such as Harold Puthoff, for extracting vacuum energy leave me totally unpersuaded.

     Science fiction that admits it is science fiction is another matter. According to Arthur Clarke, I was the first person to employ the idea of the vacuum energy drive in fictional form, in the story "All the Colors of the Vacuum" (Sheffield, 1981). Clarke employed one in The Songs of Distant Earth (Clarke, 1986). Not surprisingly, there was a certain amount of hand-waving on both Clarke's part and mine as to how the vacuum energy drive was implemented. If the ship can obtain energy from the vacuum, and mass and energy are equivalent, why can't the ship get the reaction mass, too? How does the ship avoid being slowed when it takes on energy, which has an equivalent mass that is presumably at rest? If the vacuum energy is the energy of the ground state, to what new state does the vacuum go, after energy is extracted?

     Good questions. Look on them as an opportunity. There must be good science-fictional answers to go with them.

From BORDERLANDS OF SCIENCE by Charles Sheffield (1999)
VACUUM ENERGY 2

      McAndrew laughed, a humorless bark. "I'll tell you why, Jeanie. You flew the Merganser. Tell me how the drive worked."

     "Well, the mass plate at the front balanced the acceleration, so we didn't get any sensation of fifty gee." I shrugged. "I didn't work out the math for myself, but I'm sure I could have if I felt like it."

     I could have, too. I was a bit rusty, but you never lose the basics once you have them planted deep enough in your head.

     "I don't mean the balancing mechanism, that was just common sense." He shook his head. "I mean the drive. Didn't it occur to you that we were accelerating a mass of trillions of tons at fifty gee? If you work out the mass conversion rate you will need, you find that even with an ideal photon drive you'll consume the whole mass in a few days. The Merganser got its drive by accelerating charged particles up to within millimeters a second of light speed. That was the reaction mass. But how did it get the energy to do it?"

     I felt like telling him that when I had been on Merganser there had been other details—such as survival—on my mind. I thought for a few moments, then shook my head.

     "You can't get more energy out of matter than the rest mass energy, I know that. But you're telling me that the drives on Merganser and Hoatzin do it. That Einstein was wrong."

     "No!" McAndrew looked horrified at the thought that he might have been criticizing one of his senior idols. "All I've done is build on what Einstein did. Look, you've done a fair amount of quantum mechanics. You know that when you calculate the energy for the vacuum state of a system you don't get zero. You get a positive value."

     I had a hazy recollection of a formula swimming back across the years. What was it? hω/4π, said a distant voice.

     "But you can set that to zero!" I was proud at remembering so much. "The zero point of energy is arbitrary."

     "In quantum theory it is. But not in general relativity." McAndrew was beating back my mental defenses. As usual when I spoke with him on theoretical subjects, I began to feel I would know less at the end of the conversation than I did at the beginning.

     "In general relativity," he went on, "energy implies space-time curvature. If the zero-point energy is not zero, the vacuum self-energy is real. It can be tapped, if you know what you are doing. That's where Hoatzin draws its energy. The reaction mass it needs is very small. You can get that by scooping up matter as you go along, or if you prefer it you can use a fraction—a very small fraction—of the mass plate."

From ALL THE COLORS OF THE VACUUM by Charles Sheffield (1981)
VACUUM ENERGY 3

The first suggestion that vacuum energies might be used for propulsion appears to have been made by Shinichi Seike in 1969. (‘Quantum electric space vehicle’; 8th Symposium on Space Technology and Science, Tokyo.)

Ten years later, H. D. Froning of McDonnell Douglas Astronautics introduced the idea at the British Interplanetary Society’s Interstellar Studies Conference, London (September 1979) and followed it up with two papers: ‘Propulsion Requirements for a Quantum Interstellar Ramjet’ (JBIS, Vol. 33,1980) and ‘Investigation of a Quantum Ramjet for Interstellar Flight’ (AIAA Preprint 81-1534, 1981).

Ignoring the countless inventors of unspecified ‘space drives,’ the first person to use the idea in fiction appears to have been Dr Charles Sheffield, Chief Scientist of Earth Satellite Corporation; he discusses the theoretical basis of the ‘quantum drive’ (or, as he has named it, ‘vacuum energy drive’) in his novel The McAndrew Chronicles (Analog magazine 1981; Tor, 1983).

An admittedly naive calculation by Richard Feynman suggests that every cubic centimetre of vacuum contains enough energy to boil all the oceans of Earth. Another estimate by John Wheeler gives a value a mere seventy-nine orders of magnitude larger. When two of the world’s greatest physicists disagree by a little matter of seventy-nine zeros, the rest of us may be excused a certain scepticism; but it’s at least an interesting thought that the vacuum inside an ordinary light bulb contains enough energy to destroy the galaxy … and perhaps, with a little extra effort, the cosmos.

In what may hopefully be an historic paper (‘Extracting electrical energy from the vacuum by cohesion of charged foliated conductors,’ Physical Review, Vol. 30B, pp. 1700-1702, 15 August 1984) Dr Robert L. Forward of the Hughes Research Labs has shown that at least a minute fraction of this energy can be tapped. If it can be harnessed for propulsion by anyone besides science-fiction writers, the purely engineering problems of interstellar — or even intergalactic — flight would be solved.

From THE SONGS OF DISTANT EARTH by Sir Arthur C. Clarke (1985)

Primordial Black Holes

In 1974 Stephen Hawking discovered that black holes are not black.

ARTIFICIAL SINGULARITY POWER
Primordial black holes (Table 1). Columns: R = Schwarzschild radius (attometers); M = singularity mass (megatonnes); kT = Hawking radiation temperature (GeV); P = radiated power (petawatts); P/c² = mass evaporation rate (grams per second); L = lifetime (years).

R (am) | M (Mt) | kT (GeV) | P (PW) | P/c² (g/sec) | L (yrs)
0.16   | 0.108  | 98.1     | 5519   | 61400        | ≲0.04
0.3    | 0.202  | 52.3     | 1527   | 17000        | ≲0.12
0.6    | 0.404  | 26.2     | 367    | 4090         | 1
0.9    | 0.606  | 17.4     | 160    | 1780         | 3.5
1.0    | 0.673  | 15.7     | 129    | 1430         | 5
1.5    | 1.01   | 10.5     | 56.2   | 626          | 16—17
2.0    | 1.35   | 7.85     | 31.3   | 348          | 39—41
2.5    | 1.68   | 6.28     | 19.8   | 221          | 75—80
2.6    | 1.75   | 6.04     | 18.3   | 204          | 85—91
2.7    | 1.82   | 5.82     | 16.9   | 189          | 95—102
2.8    | 1.89   | 5.61     | 15.7   | 175          | 106—114
2.9    | 1.95   | 5.41     | 14.6   | 163          | 118—127
3.0    | 2.02   | 5.23     | 13.7   | 152          | 130—140
5.8    | 3.91   | 2.71     | 3.50   | 38.9         | 941—1060
5.9    | 3.97   | 2.66     | 3.37   | 37.5         | 991—1117
6.0    | 4.04   | 2.62     | 3.26   | 36.2         | 1042—1177
6.9    | 4.65   | 2.28     | 2.43   | 27.1         | 1585—1814
7.0    | 4.71   | 2.24     | 2.36   | 26.2         | 1655—1897
10.0   | 6.73   | 1.57     | 1.11   | 12.3         | 4824—5763

Abstract

Artificial Singularity Power (ASP) engines generate energy through the evaporation of modest-sized (10⁸–10¹¹ kg) black holes created through artificial means. This paper discusses the design and potential advantages of such systems for powering large space colonies, terraforming planets, and propelling starships. The possibility of detecting advanced extraterrestrial civilizations via the optical signature of ASP systems is examined. Speculation as to possible cosmological consequences of widespread employment of ASP engines is considered.

Introduction

According to a theory advanced by Stephen Hawking [1] in 1974, black holes evaporate in a time given by:

tev = 5120π tP (m/mP)³ (1)

where tev is the time it takes for the black hole to evaporate, tP is the Planck time (5.39e-44 s), m is the mass of the black hole in kilograms, and mP is the Planck mass (2.18e-8 kg) [2].

Hawking considered the case of black holes formed by the collapse of stars, which need to be at least ~3 solar masses to occur naturally. For such a black hole, equation 1 yields an evaporation time of 5e68 years, far longer than the expected life of the universe. In fact, evaporation would never happen, because the black hole would gain energy, and thus mass, by drawing in cosmic background radiation at a rate faster than its own insignificant rate of radiated power.

However it can be seen from examining equation (1) that the evaporation time goes as the cube of the singularity's mass, which means that the emitted power (= mc²/tev) goes inversely with the square of the mass. Thus if the singularity could be made small enough, very large amounts of power could theoretically be produced.

This possibility was quickly grasped by science fiction writers, and such propulsion systems were included by Arthur C. Clarke in his 1976 novel Imperial Earth [3] and Charles Sheffield in his 1978 short story “Killing Vector.” [4]

Such systems did not receive serious technical analysis, however, until 2009, when the idea was examined by Louis Crane and Shawn Westmoreland, both then of Kansas State University, in their seminal paper "Are Black Hole Starships Possible?" [5]

In their paper, Crane and Westmoreland focused on the idea of using small artificial black holes powerful enough to drive a starship to interstellar-class velocities yet long-lived enough to last the voyage. They identified a "sweet spot" for such "Black Hole Starships" (BHS) with masses on the order of 2×10⁹ kg, which they said would have lifetimes on the order of 130 years, yet yield a power of about 13,700 TW. They proposed to use some kind of parabolic reflector to reflect this radiation, resulting in a photon rocket. The ideal thrust T of a rocket with jet power P and exhaust velocity v is given by:

T = 2P/v (2)

So with P = 13,700 TW and v = c = 3e8 m/s, the thrust would be 8.6e7 N. Assuming that the payload spacecraft had a mass of 1e9 kg, this would accelerate the ship at a rate of a = 8.6e7/3e9 = 2.8e-2 m/s² (the 3e9 kg denominator being payload plus singularity). Accelerating at this rate, such a ship would reach about 30% of the speed of light in 100 years.

There are a number of problems with this scheme. In the first place, the claimed acceleration is on the low side. Furthermore, their math appears to be incorrect. A 2e9 kg singularity would only generate about 270 TW, or 1/50th as much as their estimate, reducing thrust by a factor of 50 (although it would last about 20,000 years). These problems could be readily remedied, however, by using a smaller singularity and a smaller ship. For example a singularity with a mass of 2e8 kg would produce a power of 26,900 TW. Assuming a ship with a mass of 1e8 kg, an acceleration of 0.6 m/s² could be achieved, allowing the ship to reach 60% of the speed of light in 10 years. The singularity would only have a lifetime of 21 years. However it could be maintained by being constantly fed mass at a rate of about 0.33 kg/s.
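(ed note: a minimal sketch of the corrected arithmetic above, using the paper's lifetime and power relations, which appear below as equations (3) and (4); the constant-acceleration, non-relativistic approximation is mine)

    C = 3e8  # speed of light in m/s, as used in the paper

    def power_watts(m_kg):
        # equation (4): P = 1.08e33 / m^2
        return 1.08e33 / m_kg ** 2

    def lifetime_seconds(m_kg):
        # equation (3): tev = 8.37e-17 m^3
        return 8.37e-17 * m_kg ** 3

    m_singularity = 2e8   # kg
    m_payload = 1e8       # kg
    YEAR = 3.15e7         # seconds

    P = power_watts(m_singularity)                 # ~2.7e16 W = 26,900 TW
    thrust = 2 * P / C                             # equation (2) with v = c
    accel = thrust / (m_singularity + m_payload)   # ~0.6 m/s^2
    print(P / 1e12, accel)
    print(0.6 * C / accel / YEAR)                  # ~10 years to 0.6 c
    print(lifetime_seconds(m_singularity) / YEAR)  # ~21 year lifetime if unfed
    print(P / C ** 2)                              # ~0.3 kg/s feed rate to maintain it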

A bigger problem is that a 1e9 kg singularity would produce radiation with a characteristic temperature of 9 GeV, increasing in inverse proportion to the singularity mass. So for example a 1e8 kg singularity would produce gamma rays with energies of 90 GeV (i.e., for temperature T in electron volts, T = 9e18/m, with the singularity mass m in kilograms). There is no known way to reflect such high energy photons. So at this point the parabolic reflector required for the black hole starship photon engine is science fiction.

Yet another problem is the manufacture of the black hole. Crane and Westmoreland suggest that it could be done using converging gamma ray lasers. To make a 1e9 kg unit, they suggested a "high-efficiency square solar panel a few hundred km on each side, in a circular orbit about the sun at a distance of 1,000,000 km" to provide the necessary energy. A rough calculation indicates the implied power of this system from this specification is on the order of 10⁶ TW, or about 100,000 times the current rate used by human civilization. As an alternative construction technique, they also suggest accelerating large masses to relativistic velocities and then colliding them. The density of these masses would be multiplied both by relativistic mass increase and length contraction. However the energy required to do this would still equal the combined masses times the speed of light squared. While this technique would eliminate the need for giant gamma ray lasers, the same huge power requirement would still present itself.
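(ed note: the "rough calculation" is easy to reproduce. A sketch that assumes "a few hundred km" means a 300 km square, and takes the Sun's luminosity as 3.83e26 W:)

    import math

    L_SUN = 3.83e26   # W, solar luminosity
    r = 1e9           # m, 1,000,000 km from the Sun
    side = 3e5        # m, a 300 km square panel (assumed size)

    flux = L_SUN / (4 * math.pi * r ** 2)  # ~3e7 W/m^2 at that distance
    print(flux * side ** 2 / 1e12)         # ~2.7e6 TW, on the order of 1e6 TW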

In what follows, we will examine possible solutions for the above identified problems.

Advanced Singularity Engines

In MKS units, equation (1) can be rewritten as:

tev = 8.37e-17 m³ (3)

(with m in kilograms and tev in seconds)

This implies that the power, P, in Watts, emitted by the singularity is given by:

P = 1.08e33/m² (4)

The results of these two equations are shown in Fig. 1.

No credible concept is available for a lightweight parabolic reflector of the sort needed to enable the Black Hole Starship. But we can propose a powerful and potentially very useful system by dropping the requirement for starship-relevant thrust-to-weight ratios. Instead let us consider the use of ASP engines to create an artificial sun.

Consider a 1e8 kg ASP engine. As shown in Fig 1, it would produce a power of 1.08e8 Gigawatts. Such an engine, if left alone, would only have a lifetime of 2.65 years, but it could be maintained by a constant feed of about 3 kg/s of mass. We can't reflect its radiation, but we can absorb it with a sufficiently thick material screen. So let's surround it with a spherical shell of graphite with a radius of 40 km and a thickness of 1.5 m. At a distance of 40 km, the intensity of the radiation will be about 5 MW/m², which the graphite sphere can radiate into space with a black body temperature of 3000 K. This is about the same temperature as the surface of a type M red dwarf star. We estimate that graphite has an attenuation length for high energy gamma rays of about 15 cm, so that 1.5 m of graphite (equivalent shielding to 5 m of water or half the Earth's atmosphere) will attenuate the gamma radiation by ten factors of e, or 20,000. The light will then radiate out further, dropping in intensity with the square of the distance, reaching typical Earth sunlight intensities of 1 kW/m² at a distance of about 3000 km from the center.
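(ed note: a quick check of the shell numbers, using equation (4) above and the Stefan-Boltzmann law:)

    import math

    SIGMA = 5.67e-8               # Stefan-Boltzmann constant, W/m^2/K^4
    P = 1.08e33 / (1e8) ** 2      # equation (4): ~1.08e17 W from 1e8 kg

    r_shell = 40e3                # m, graphite shell radius
    flux = P / (4 * math.pi * r_shell ** 2)
    print(flux / 1e6)             # ~5.4 MW/m^2 on the shell

    print((flux / SIGMA) ** 0.25) # ~3100 K black-body temperature, a red dwarf

    # distance at which the glow dims to Earth-normal sunlight (~1 kW/m^2)
    print(math.sqrt(P / (4 * math.pi * 1000.0)) / 1e3)  # ~2900 km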

The mass of the artificial star will be about 10¹⁴ kg (that's the mass of the graphite shell, compared to which the singularity is insignificant). As large as this is, however, it is still tiny compared to that of a planet, or even the Earth's Moon (which is 7.35e22 kg). So, no planet would orbit such a little star. Instead, if we wanted to terraform a cold world, we would put the mini-star in orbit around it.

The preferred 3,000-km orbital altitude of the ASP mini-star in the above example was dictated by the power level of the singularity. Such a unit would be sufficient to provide all the light and heat necessary to terraform an otherwise sunless planet the size of Mars. Lower-power units incorporating larger singularities but much smaller graphite shells are also feasible. (Shell mass is proportional to system power.) These are illustrated in Table 1.

The high-powered units listed in Table 1 with singularity masses in the 1e8 to 1e9 kg range are suitable to serve as mini-suns orbiting planets, moons or asteroids, with the characteristic radius of such terraforming candidates being about the same as the indicated orbital altitude. The larger units, with lower power and singularity masses above 1e10 kg are more appropriate for space colonies.

Consider an ASP mini-sun with a singularity mass of 3.16e10 kg positioned in the center of a cylinder with a radius of 10 km and a length of 20 km. The cylinder is rotating at a rate of 0.0316 radians per second, which provides it with 1 g of artificial gravity. Let's say the cylinder is made of material with an areal density of 1000 kg per square meter. In this case it will experience an outward pressure of 10⁴ pascals, or about 1.47 psi, due to outward acceleration. If the cylinder were made of solid Kevlar (density = 1000 kg/m³) it would be about 1 m thick. So the hoop stress on it would be 1.47*(10,000)/1 = 14,700 psi, which is less than a tenth the yield stress of Kevlar. Or put another way, 10 cm of Kevlar would do the job of carrying the hoop stress, and the rest of the mass load could be anything, including habitations. If the whole interior of the cylinder were covered with photovoltaic panels with an efficiency of 10 percent, 100 GWe of power would be available for use of the inhabitants of the space colony, which would have an area of 1,256 square kilometers. The mini-sun powering it would have a lifetime of 84 million years, without refueling. Much larger space colonies (i.e., with radii over ~100 km) would not be possible however, unless stronger materials become available, as the hoop stress would become too great.
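(ed note: the colony arithmetic checks out. A sketch using simple spin-gravity and hoop-stress formulas plus equations (3) and (4):)

    import math

    g = 9.81          # m/s^2
    r = 10e3          # m, cylinder radius
    length = 20e3     # m, cylinder length

    print(math.sqrt(g / r))            # ~0.031 rad/s spin for 1 g

    areal_density = 1000.0             # kg/m^2 of hull plus habitations
    pressure = areal_density * g       # ~1e4 Pa (~1.45 psi) outward
    print(pressure, pressure * r)      # hoop stress ~1e8 Pa across a 1 m wall

    m_singularity = 3.16e10            # kg
    P = 1.08e33 / m_singularity ** 2   # equation (4): ~1.1e12 W radiated
    area_km2 = 2 * math.pi * r * length / 1e6
    print(area_km2, 0.10 * P / 1e9)    # ~1,256 km^2 interior, ~100 GWe at 10%
    print(8.37e-17 * m_singularity ** 3 / 3.15e7)  # lifetime ~8.4e7 years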

Both of these approaches seem potentially viable in principle. However we note that the space colony approach cited requires a singularity some 300 times more massive than the approach of putting a 1e8 kg mini-sun in orbit around a planet, which yields 4π(3000)² ≈ 100 million square kilometers of habitable area, or about 80,000 times as much land. Furthermore, the planet comes with vast supplies of matter of every type, whereas the space colony needs to import everything.

Building Singularities

Reducing the size of the required singularity by a factor of 10 from 1e9 to 1e8 kg improves feasibility of the ASP concept somewhat, but we need to do much better. Fortunately there is a way to do so.

If we examine equation (3), we can see that the expected lifetime of a 1000 kg singularity would be about 8.37×10⁻⁸ s. In this amount of time, light can travel about 25 m, and an object traveling at half the speed of light about 12.5 m. If a sphere with a radius of 12.5 m were filled with steel it would contain about 6×10⁷ kg, roughly the mass we need for our 1e8 kg ASP singularity. In fact, it turns out that if the initial singularity is as small as about 200 kg, and fired into a mass of steel, it will gain mass much faster than it loses it, and eventually grow into a singularity as massive as the steel provided.

By using this technique we can reduce the amount of energy required to form the required singularity by about 7 orders of magnitude compared to Crane and Westmoreland’s estimate. So instead of needing a 106 TW system, a 100 GW gamma ray laser array might do the trick. Alternatively, accelerating two 200 kg masses to near light speed would require 3.6e7 TJ, or 10,000 TW-hours of energy. This is about the energy humanity currently uses in 20 days. We still don’t know how to do it, but reducing the scale of the required operation by a factor of 10 million certainly helps.
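(ed note: checking the collision-formation energy budget:)

    C = 3e8                      # m/s
    seed_kg = 2 * 200.0          # two 200 kg masses
    energy_j = seed_kg * C ** 2  # kinetic energy needed approaches rest-mass energy
    print(energy_j / 1e12)       # ~3.6e7 TJ
    print(energy_j / 3.6e15)     # ~1e4 TW-hours (1 TWh = 3.6e15 J)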

ASP Starships

We now return to the subject of ASP starships. In the absence of a gamma ray reflector, we are left with using solid material to absorb the gamma rays and other energetic particles and re-radiate their energy as heat. (Using magnetic fields to try to contain and reflect GeV-class charged particles that form a portion of the Hawking radiation won’t work because the required fields would be too strong and too extensive, and the magnets to generate them would be exposed to massive heating by gamma radiation.)

Fortunately, we don’t need to absorb all the radiation in the absorber/reflector, we only need to absorb enough to get it hot. So let’s say that we position a graphite hemispherical screen to one side of a 1e8 kg ASP singularity, but instead of making it 1.5 m thick, we make it 0.75 mm thick. At that thickness it will only absorb about 5 percent of the radiation that hits it, the rest will pass right through. So we have 5e6 GW of useful energy, which we want to reduce to 5 MW/m2 in order for the graphite to be kept at ~3000 K where it can survive. The radius will be about 9 km, and the mass of the graphite hemisphere will be about 6e8 kg. A thin solar sail like parabolic reflector with an area 50 times as great and the carbon hemisphere but a thickness 1/500th (i.e. 1.5 microns) as great would be positioned in front of the hemisphere, adding another 0.6 e8 kg to the system, which then plus the singularity and the 1e8 kg ship might be 7.6e8 kg in all. Thrust will be 0.67e8 N, so the ship would accelerate at a speed of 0.67/7.6 = 0.09 m/s2, allowing it to reach 10 percent the speed of light in about 11 years.

Going much faster would become increasingly difficult, because using only 5% of the energy of the singularity mass would give the system an effective exhaust velocity of about 0.22 c. Higher efficiencies might be possible if a significant fraction of the Hawking radiation came off as charged particles, allowing a thin thermal screen to capture a larger fraction of the total available energy. In this case, effective exhaust velocity would go as c times the square root of the achieved energy efficiency. But sticking with our 5% efficiency, if we wanted to reach 0.22 c we could, but we would require a mass ratio of 2.7, meaning we would need about 1.5e9 kg of propellant to feed into the ASP engine, whose mass would decrease our average acceleration by about a factor of two over the burn, meaning we would take about 40 years to reach 20 percent of the speed of light.

Detecting ET

The above analysis suggests that if ASP technology is possible, using it to terraform cold planets with orbital mini-suns will be the preferred approach. Since they would orbit (possibly isolated) cold worlds at distances of thousands of kilometers, and possess the 3000 K spectrum of a type M red dwarf star, potentially with gamma radiation in excess of normal stellar expectations, such objects could well be detectable.

Indeed, one of the primary reasons to speculate on the design of ASP engines right now is to try to identify their likely signature. We are far away from being able to build such things. But the human race is only a few hundred thousand years old, and human civilization is just a few thousand years old. In 1906 the revolutionary HMS Dreadnought was launched, displacing 18,000 tons. Today ships 5 times that size are common. So it is hardly unthinkable that in a century or two we will have spacecraft in the million ton (10⁹ kg) class. Advanced extraterrestrial civilizations may have reached our current technological level millions or even billions of years ago. So they have had plenty of time to develop every conceivable technology. If we can think it, they can build it, and if doing so would offer them major advantages, they probably have. Thus, looking for large energetic artifacts such as Dyson Spheres [6], starships [7,8], or terraformed planets [9] is potentially a promising way to carry out the SETI search, as unlike radio SETI, it requires no mutual understanding of communication conventions. Given the capabilities the ASP technology would offer any species seeking to expand its prospects by illuminating and terraforming numerous new worlds, such systems may actually be quite common.

ASP starships are also feasible and might be detectable as well. However the durations of starship flights would be measured in decades or centuries, while terraformed worlds could be perpetual. Furthermore, once settled, trade between solar systems could much more readily be accomplished by the exchange of intellectual property via radio than by physical transport. As a result, the amount of flight traffic will be limited. In addition, there could be opportunities for employment of many ASP terraforming engines within a single solar system. For example, within our own solar system there are seven worlds of planetary size (Mars, Ceres, Ganymede, Calisto, Titan, Triton, and Pluto) whose terraforming could be enhanced or enabled by ASP systems, not to mention hundreds of smaller but still considerable moons and asteroids, and potentially thousands of artificial space colonies as well. Therefore the number of ASP terraforming engines in operation in the universe at any one time most likely far exceeds those being used for starship propulsion. It would therefore appear advantageous to focus the ASP SETI search effort on such systems.

Proxima Centauri is a type M red dwarf with a surface temperature of 3000 K. It therefore has a black body spectrum similar to that of the 3000 K graphite shell of our proposed ASP mini-sun discussed above. The difference however is that it has about 1 million times the power, so an ASP engine placed 4.2 light years (Proxima Centauri's distance) from Earth would have the same visual brightness as a star like Proxima Centauri positioned 4,200 light years away. Put another way, Proxima Centauri has a visual magnitude of 11. It takes 5 magnitudes to equal a 100-fold drop in brightness, so our ASP engine would have a visual magnitude of 26 at 4.2 light years, and magnitude 31 at 42 light years. The limit of optical detection of the Hubble Space Telescope is magnitude 31. So HST would be able to see our proposed ASP engine out to a distance of about 50 light years, within which there are some 1,500 stellar systems.
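(ed note: the magnitude arithmetic follows from the standard brightness-to-magnitude relation, 5 magnitudes per factor of 100. A minimal sketch:)

    import math

    def magnitude_drop(flux_ratio):
        # 2.5 magnitudes per factor of 10 in brightness
        return 2.5 * math.log10(flux_ratio)

    m_proxima = 11.0    # apparent visual magnitude of Proxima Centauri
    power_ratio = 1e6   # Proxima outshines the ASP mini-sun a million to one

    print(m_proxima + magnitude_drop(power_ratio))        # mag 26 at 4.2 light years
    print(m_proxima + magnitude_drop(power_ratio * 100))  # mag 31 at 42 light years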

Consequently ASP engines may already have been imaged by Hubble, appearing on photographs as unremarkable dim objects assumed to be far away. These should be subjected to study to see if any of them exhibit parallax. If they do, this would show that they are actually nearby objects of much lower power than stars. Further evidence of artificial origin could be provided if they were found to exhibit a periodic Doppler shift, as would occur if they were in orbit around a planetary body. An anomalous gamma ray signature could be present as well.

I suggest we have a look.

Cosmological Implications

One of the great mysteries of science is why the laws of the universe are so friendly to life. Indeed, it can be readily shown that if almost any one of the twenty or so apparently arbitrary fundamental constants of nature differed from their actual value by even a small amount, life would be impossible [10]. Some have attempted to answer this conundrum by claiming that there is nothing to be explained because there are an infinite number of universes; we just happen to live in the odd one where life is possible. This multiverse theory answer is absurd, as it could just as well be used to avoid explaining anything. For example, take the questions: why did the Titanic sink/it snow heavily last winter/the sun rise this morning/the moon form/the chicken cross the road? These can all also be answered by saying "no reason, in other universes they didn't." The Anthropic Principle reply, to the effect of "clearly they had to, or you wouldn't be asking the question" is equally useless.

Clearly a better explanation is required. One attempt at such an actual causal theory was put forth circa 1992 by physicist Lee Smolin [11], who says that daughter universes are formed by black holes created within mother universes. This has a ring of truth to it, because a universe, like a black hole, is something that you can't leave. Well, says Smolin, in that case, since black holes are formed from collapsed stars, the universes that have the most stars will have the most progeny. So to have progeny a universe must have physical laws that allow for the creation of stars. This would narrow the permissible range of the fundamental constants by quite a bit. Furthermore, let's say that daughter universes have physical laws that are close to, but slightly varied from that of their mother universes. In that case, a kind of statistical natural selection would occur, overwhelmingly favoring the prevalence of star-friendly physical laws as one generation of universes follows another.

But the laws of the universe don’t merely favor stars, they favor life, which certainly requires stars, but also planets, water, organic and redox chemistry, and a whole lot more. Smolin’s theory gets us physical laws friendly to stars. How do we get to life?

Reviewing an early draft of Smolin's book in 1994, Crane offered the suggestion [12] that if advanced civilizations make black holes, they also make universes, and therefore universes that create advanced civilizations would have much more progeny than those that merely make stars. Thus the black hole origin theory would explain why the laws of the universe are not only friendly to life, but to the development of intelligence and advanced technology as well. Universes create life because life creates universes. This result is consistent with complexity theory, which holds that if A is necessary to B, then B has a role in causing A.

These are very interesting speculations. So let us ask, what would we see if our universe was created as a Smolin black hole, and how might we differentiate between a natural star collapse and an ASP engine origin? From the above discussion, it should be clear that if someone created an ASP engine, it would be advantageous for them to initially create a small singularity, then grow it to its design size by adding mass at a faster rate than it evaporates, and then, once it reaches its design size, maintain it by continuing to add mass at a constant rate equal to the evaporation rate. In contrast, if it were formed via the natural collapse of a star it would start out with a given amount of mass that would remain fixed thereafter.

So let’s say our universe is, as Smolin says, a black hole. Available astronomical observations show that it is expanding, at a velocity that appears to be close to the speed of light. Certainly the observable universe is expanding at the speed of light.

Now a black hole has an escape velocity equal to the speed of light. So for such a universe

c²/2 = GM/R (5)

Where G is the universal gravitational constant, c is the speed of light in vacuum, M is the mass of the universe, and R is the radius of the universe.

If we assume that G and c are constant, R is expanding at the speed of light, and τ is the age of the universe, then:

R = cτ (6)

Combining (5) and (6), we have:

M/τ = (Rc²/2G)(c/R) = c³/2G (7)

This implies that the mass of such a universe would be growing at a constant rate. Contrary to the classic Hoyle continuous creation theory, however, which postulated that mass creation would lead to a steady state universe featuring constant density for all eternity, this universe would have a big bang event with density decreasing afterwards inversely with the square of time.

Now the Planck mass, mp, is given by:

mp = (hc/2πG)^½ (8)

And the Planck time, tp, is given by:

tp = (hG/2πc⁵)^½ (9)

If we divide equation (8) by equation (9) we find:

mp/tp = c³/G (10)

If we compare equation (10) to equation (7) we see that:

M/τ = ½(mp/tp) (11)

So the rate at which the mass of such a universe would increase equals exactly ½ Planck mass per Planck time.

Comparison with Observational Astronomy

In MKS units, G = 6.674e-11 and c = 3e+8, so:

M/τ = c³/2G = 2.02277e+35 kg/s (12)

For comparison, the mass of the Sun is 1.989e+30 kg. So this is saying that the mass of the universe would be increasing at a rate of about 100,000 Suns per second.

Our universe is believed to be about 13 billion years, or 4e+17 seconds old. The Milky Way galaxy has a mass of about 1 trillion Suns. So this is saying that the mass of the universe should be about 40 billion Milky Way galaxies. Astronomers estimate that there are 100 to 200 billion galaxies, but most are smaller than the Milky Way. So this number is in general agreement with what we see.

According to this estimate, the total mass of the universe, M, is given by:

M = (2e+35)(4e+17) = 8e+52 kg. (13)

This number is well known. It is the critical mass required to make our universe "flat." It should be clear, however, that when the universe was half as old, with half its current diameter, this number would have needed to be half as great. Therefore, if the criterion is that such a universe's mass always be critical for flatness, and not just critical right now, then its mass must be increasing linearly with time.
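(ed note: equations (12) and (13) are easy to reproduce:)

    G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    C = 3e8             # speed of light, m/s
    M_SUN = 1.989e30    # kg
    AGE = 4e17          # s, roughly 13 billion years

    rate = C ** 3 / (2 * G)   # equation (12)
    print(rate)               # ~2.02e35 kg/s
    print(rate / M_SUN)       # ~100,000 solar masses per second
    print(rate * AGE)         # ~8e52 kg, equation (13): the critical "flat" mass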

These are very curious results. Black holes, the expanding universe, and the constancy of the speed of light are results of relativity theory. Planck masses and Planck times relate to quantum mechanics. Observational astronomy provides data from telescopes. It is striking that these three separate approaches to knowledge should provide convergent results.

This analysis does require that mass be continually added to the universe at a constant rate, exactly as would occur in the case of an ASP engine during steady-state operation. It differs however in that in an ASP engine, the total mass only increases during the singularity’s buildup period. During steady state operation mass addition would be balanced by mass evaporation. How these processes would appear to the inhabitants of an ASP universe is unclear. Also unclear is how the inhabitants of any Smolinian black hole universe could perceive it as rapidly expanding. Perhaps the distance, mass, time, and other metrics inside a black hole universe could be very different from those of its parent universe, allowing it to appear vast and expanding to its inhabitants while looking small and finite to outside observers. One possibility is that space inside a black hole is transformed, in a three dimensional manner analogous to a ω = 1/z transformation in the complex plane, so that the point at the center becomes a sphere at infinity. In this case mass coming into the singularity universe from its perimeter would appear to the singularity’s inhabitants as matter/energy radiating outward from its center.

Is there a model that can reconcile all the observations of modern astronomy with those that would be obtained by observers inside either a natural black hole or ASP universe? Speculation on this matter by scientists and science fiction writers with the required physics background would be welcome [13].

Conclusions

We find that ASP engines appear to be theoretically possible, and could offer great benefits to advanced spacefaring civilizations. Particularly interesting is their potential use as artificial suns to enable terraforming of unlimited numbers of cold worlds. ASP engines could also be used to enable interstellar colonization missions. However the number of ASP terraforming engines in operation in the universe at any one time most likely far exceeds those being used for starship propulsion. Such engines would have optical signatures similar to M-dwarfs, but would differ in that they would be much lower in power than any natural M star, and hence have to be much closer to exhibit the same apparent luminosity. In addition they would move in orbit around a planetary body, thereby displaying a periodic Doppler shift, and could have an anomalous additional gamma ray component to their spectra. An ASP engine of the type discussed would be detectable by the Hubble Space Telescope at distances of as much as 50 light years, within which there are approximately 1,500 stellar systems. Their images may therefore already be present in libraries of telescopic images as unremarkable dim objects, whose artificial nature would be indicated if they were found to display parallax. It is therefore recommended that such a study be implemented.

As for cosmological implications, the combination of the attractiveness of ASP engines with Smolinian natural selection theory does provide a potential causal mechanism that could explain the fine tuning of the universe for life. Whether our own universe could have been created in such a manner remains a subject for further investigation.

References

1. Hawking, S. W. (1974). “Black hole explosions?” Nature 248(5443): 30–31. https://ui.adsabs.harvard.edu/abs/1974Natur.248…30H/abstract

2. Hawking Radiation, Wikipedia https://en.wikipedia.org/wiki/Hawking_radiation accessed September 22, 2019.

3. Arthur C. Clarke, Imperial Earth, Harcourt Brace and Jovanovich, New York, 1976.

4. Charles Sheffield, “Killing Vector,” in Galaxy, March 1978.

5. Louis Crane and Shawn Westmoreland, “Are Black Hole Starships Possible?” 2009, 2019. https://arxiv.org/pdf/0908.1803.pdf accessed September 24.

6. Freeman Dyson, “The Search for Extraterrestrial Technology,” in Selected Papers of Freeman Dyson with Commentary, Providence, American Mathematical Society. Pp. 557-571, 1996.

7. Robert Zubrin, “Detection of Extraterrestrial Civilizations via the Spectral Signature of Advanced Interstellar Spacecraft,” in Progress in the Search for Extraterrestrial Life: Proceedings of the 1993 Bioastronomy Symposium, Santa Cruz, CA, August 16-20 1993.

8. Louis Crane, “Searching for Extraterrestrial Civilizations Using Gamma Ray Telescopes,” available at https://arxiv.org/abs/1902.09985.

9. Robert Zubrin, The Case for Space: How the Revolution in Spaceflight Opens Up a Future of Limitless Possibility, Prometheus Books, Amherst, NY, 2019.

10. Paul Davies, The Accidental Universe, Cambridge University Press, Cambridge, 1982

11. Lee Smolin, The Life of the Cosmos, Oxford University Press, NY, 1997.

12. Louis Crane, “Possible Implications of the Quantum Theory of Gravity: An Introduction to the Meduso-Anthropic principle,” 1994. https://arxiv.org/PS_cache/hep-th/pdf/9402/9402104v1.pdf

13. I provided a light hearted explanation in my science fiction satire The Holy Land (Polaris Books, 2003) where the advanced extraterrestrial priestess (3rd Class) Aurora mocks the theory of the expanding universe held by the Earthling Hamilton. “Don’t be ridiculous. The universe isn’t expanding. That’s obviously physically impossible. It only appears to be expanding because everything in it is shrinking. What silly ideas you Earthlings have.” In a more serious vein, the late physicist Robert Forward worked out what life might be like on a neutron star in his extraordinary novel Dragon’s Egg (Ballantine Books, 1980.) A similar effort to describe life on the inside of a black hole universe could be well worthwhile. Any takers?

Ladderdown Reactors

Ladderdown transmutation reactors are fringe science invented by Wil McCarthy for his science fiction novel Bloom. It is certainly nothing we will be capable of making anytime soon, but it will take somebody more knowledgeable than me to prove it impossible. Offhand I do not see anything that outright violates the laws of physics. Ladderdown is unobtainium, not handwavium.

Basically ladderdown reactors obtain their energy the same way nuclear fission does: by splitting atomic nuclei and releasing the binding energy. It is just that the ladderdown reactor can work with any element heavier than Iron-56, and the splitting does not release any neutrons or gamma radiation. Nuclear fission only works with fission fuel, and any anti-nuclear activist can tell you horror stories about the dire radiation produced.

Apparently ladderdown reactors remove protons and neutrons from the fuel material one at a time, by quantum tunneling, quietly. This is unlike fission, which shoots neutrons like bullets at nuclei, shattering the nucleus into sprays of radiation and exploding fission products.

As with fission, the laddered-down nucleus releases the difference in binding energy and moves down the periodic table. The process comes to a screeching halt when the fuel transmutes into Iron-56, since it is at the basin of the binding energy curve (i.e., Iron-56 has the highest binding energy per nucleon). In the novel iron is the most worthless element for this reason, and so is used for cheap building material.

Ladderdown reactors can also take fuel elements that are lighter than Iron-56, and add protons and neutrons one at a time, to make heavier elements (called "ladderup"). This is the ladderdown version of fusion, except it will work with any element lighter than Iron-56 and there is no nasty radiation produced. This is handy because laddering down heavy elements produces lots of protons as a by-product, which can be laddered up into Iron-56.

Late breaking news: as it turns out, Nickel-62 has microscopically more binding energy per nucleon than Iron-56. Actually not so much "late-breaking" as "totally ignored". This has been known since the 1960s.
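For the record, the relevant binding energies per nucleon (rounded values from standard nuclear data tables; the differences are indeed microscopic):

    # Binding energy per nucleon, MeV
    BINDING_MEV = {
        "Fe-56": 8.790,
        "Fe-58": 8.792,
        "Ni-62": 8.794,   # the actual champion, by a whisker
    }

    for isotope, be in sorted(BINDING_MEV.items(), key=lambda kv: kv[1]):
        print(isotope, be)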

BLOOM

"Now that we are dependent on heavy metals rather than fossil organics and sunlight, economics have simply gone away. You want a lesson in economics from a biophysicist's point of view? It works like ecology—it breeds and selects. Not that we actually carry them in our pockets, but the gram of uranium has become our most basic unit of currency. Thanks to chronic short-staffing, we consider it equivalent to half an hour of human labor, though its energy potential is some twenty-six million times greater. Aside from ourselves, it is the first driver of our economy, the reasons for which are not at all arbitrary."

"For energy reasons," I said.

He winced slightly, shifted position in his chair. "Energy? Well, yes and no. Energy is less important than transmutation potential. In rough terms, a fusion reactor cascading a gram of deuterium/tritium up into a gram of iron—the basin of the binding energy curve—will liberate enough energy to boil about twenty thousand tons of water. A gram of uranium in a ladderdown reactor produces approximately the same. And yet, the uranium is worth ten thousand times more, because in laddering it down, we don't have to sink all the way to iron. We can stop anywhere along the way, and our waste products are isotopes of hydrogen which we can cascade back up, again stopping wherever we like below that magic number, iron fifty-six. A ladderdown economy sees value not only in what a substance is, but also in what it can become, and uranium, alone among the stable elements, can become anything." (all the elements with an atomic number over 82 {lead} only have isotopes that are known to decompose through radioactive decay.)


Many people are surprised to learn that lead's energy potential is only twenty-five percent less than uranium's, but the thing to remember is that lead has ten fewer transmutation targets—eighty-one versus ninety-one—which translates into a factor of a thousand reduction in its value (Lead has 82 protons and uranium has 92 protons, so lead has 10 fewer transmutation targets). Gold, three rungs lower still, is worth about a five-thousandth as much as uranium (Gold has 79 protons, 13 fewer transmutation targets than uranium, 3 fewer than lead). It has beautiful mechanical and electrical properties, but really, the major cost of paving the streets with it is the labor.


The energy density of antihydrogen is about 250 times what we can achieve with ladderdown, and the production and storage are difficult. Wonderful fuel, the best, but the last time I checked, a gram of it cost over eighty thousand g.u.


Turns out we'll be paying for our food and clothing purchases after all, using, of all things, our shoes. No kidding! Our guide pointed out some bracelets, and though they were fashioned of plain gold he assured us they were very expensive. From the labor that went into them, I assumed, for they were handmade, but no, it turns out the fingernail-sized "dollars" that have been spent on our behalf are also made of gold, and derive their value from their own intrinsic worth as metal. As if we Munies walked around trading actual grams of uranium back and forth. This is what comes of not using ladderdown!

The Gladholders think our "duck shoes" are frightfully amusing anyway, and when they found out what the sole weights were made of, I thought they'd never stop laughing. And when they offered to replace those same blocks with equivalent masses of lead, which of course is five times more valuable back home, I thought we'd never stop laughing.


The biophysicist's voice came back careful, almost embarrassed. "It's never been a secret that nuclear energy presents … certain dangers. We think of ladderdown as a 'clean' technology, which in a radiation sense it certainly is."

"But?"

"But. The quantum spatial distortion is normally induced and focused within a shielded reactor, where its effects can be controlled to within a few Planck radii. How else to tunnel out only the desired nucleons, yes? But if we invert the distortion function along the B-axis, essentially turning it inside out in three-dimensional space, the same ladderdown tunneling can be induced stochastically in a much larger spherical shell, centered about the inductor. Shielding irrelevant, because it's inside the affected region, you see? Considered too hazardous for use in bloom cauterization, the phenomenon has no industrial applications. Look it up under Things Not to Try."

Most of that went right over my head, but the gist seemed clear enough: he was talking about releasing energy, lots of it, in an uncontrolled manner. He was talking about a bomb.

From BLOOM by Wil McCarthy (1998)

Mass Converters

Mass Converters are fringe science. You see them in novels like Heinlein's Farmer in the Sky, James P. Hogan's Voyage from Yesteryear, and Vonda McIntyre's Star Trek II: The Wrath of Khan. You load the hopper with anything made of matter (rocks, raw sewage, dead bodies, toxic waste, old AOL CD-ROMS, belly-button lint, etc.) and electricity comes out the other end. In the appendix to the current edition of Farmer in the Sky Dr. Jim Woosley is of the opinion that the closest scientific theory that would allow such a thing is Preon theory.

Preon theory was all the rage back in the 1980's, but it seems to have fallen into disfavor nowadays (due to the unfortunate fact that the Standard Model gives better predictions, and absolutely no evidence of preons has ever been observed). Current nuclear physics holds that all subatomic particles are either leptons or composed of groups of quarks. The developers of Preon theory thought that two classes of elementary particles did not sound very elementary at all. So they theorized that both leptons and quarks are themselves composed of smaller particles, pre-quarks or "preons". This would have many advantages.

One of the most complete preon theories was Dr. Haim Harari's Rishon model (1979). The point of interest for our purposes is that the sub-components of electrons, neutrons, protons, and electron anti-neutrinos contain precisely enough rishon-antirishon pairs to completely annihilate. All matter is composed of electrons, neutrons, and protons. Thus it is theoretically possible in some yet undiscovered way to cause these rishons and antirishons to mutually annihilate and thus convert matter into energy.

Both James P. Hogan and Vonda McIntyre knew a good thing when they saw it, and quickly incorporated it into their novels.


Back about the same time, when I was a young man, I thought I had come up with a theoretical way to make a mass converter. Unsurprisingly it wouldn't work. My idea was to use a portion of antimatter as a catalyst. You load in the matter, and from the antimatter reserve you inject enough antimatter to convert all the matter into energy. Then feed half (or a bit more than half depending upon efficiency) into your patented Antimatter-Maker™ and replenish the antimatter reserve. The end result was you fed in matter, the energy of said matter came out, and the antimatter enabled the reaction but came out unchanged (i.e., the definition of a "catalyst").

Problem #1 was that pesky Law of Baryon Number Conservation, which would force the Antimatter-Maker to produce equal amounts of matter and antimatter. Which would mean that either your antimatter reserve would gradually be consumed or there would be no remaining energy to be output, thus ruining the entire idea. Drat!

Problem #2 is that while electron-positron annihilation produces 100% of the energy in the form of gamma-rays, proton-antiproton annihilation produces 70% as energy and 30% as worthless muons and neutrinos.

Pity, it was such a nice idea too. If you were hard up for input matter, you could divert energy away from the Antimatter-maker and towards the output. Your antimatter reserve would diminish, but if you found more matter later you could run the mass converter and divert more energy into the Antimatter-maker. This would replenish your reserve. And if you somehow totally ran out of antimatter, if another friendly ship came by it could "jump-start" you by connecting its mass converter energy output directly to your Antimatter-maker and run it until you had a good reserve.

Shorepower

Basically this is when a ship lands at the spaceport, hooks up to the port's electrical umbilical cable, pays the service charge, and powers down the ship's internal nuclear reactor. This reduces the ship's consumption of reactor fuel. There might be port anti-idling laws requiring the use of shorepower if the ship's internal power source gives off air pollution, radiation, or whatever.

If the ship insists on using its internal nuclear reactor, it may require a coolant connection from the spaceport. The ship's reactor radiators may not work very well when landed, or the spaceport may not want megawatts of thermal plumes blowing around the landing pads.

SHOREPOWER 1

Shore power or shore supply is the provision of shoreside electrical power to a ship at berth while its main and auxiliary engines are shut down. While the term denotes shore as opposed to off-shore, it is sometimes applied to aircraft or land-based vehicles (such as campers, heavy trucks with sleeping compartments and tour buses), which may plug into grid power when parked for idle reduction.

The source for land-based power may be grid power from an electric utility company, but also possibly an external remote generator. These generators may be powered by diesel or renewable energy sources such as wind or solar.

Shore power saves consumption of fuel that would otherwise be used to power vessels while in port, and eliminates the air pollution associated with consumption of that fuel. A port city may have anti-idling laws that require ships to use shore power. Use of shore power may facilitate maintenance of the ship's engines and generators, and reduces noise.

Oceangoing ships

"Cold ironing" is specifically a shipping industry term that came into use when all ships had coal-fired engines. When a ship tied up at port, there was no need to continue to feed the fire and the iron engines would literally cool down, eventually going completely cold – hence the term "cold ironing". If commercial ships can use shore-supplied power for services such as cargo handling, pumping, ventilation and lighting while in port, they need not run their own diesel engines, reducing air pollution emissions.

(ed note: maybe a rocketpunk ship will do "cold uraniuming" or "cold u-ing" ?)

From the Wikipedia entry for SHOREPOWER
SHOREPOWER 2

      Mitsuko Tamura welcomed the bulk of the machinery around her, and the illusion of privacy it afforded. Sweat beaded on her forehead beneath the heavy face shield and trickled down her temples as she slipped the tip of the soldering gun toward the broken feeder. She took comfort in the concentration such precise work demanded, directing the long tool with slender, supple fingers; it helped to push the mutterings further back in her awareness, to mute, briefly, the thousand tiny invasions of her every waking moment (Mitsuko is a telepath, and cannot turn the volume down).

     Otherwise she might have yielded to her constant urge simply to draw her legs up, slam the maintenance hatch shut behind her, and hide there until they all went away…

     She had a sudden flashing vision of the Wild Goose resting gracelessly in her concrete and girder berth, an unnatural nest for a mechanical evocation of bird-soul, coupled with a rush of affection she certainly didn’t feel for the balky, aging hardware she did battle with daily. Moses was returning to his ship.

     He was still halfway across the field. That wasn’t unusual. This was his ship, imbued with enough of his presence to sensitize her to him to begin with. Beyond that, there was nothing subtle about Moses Callahan—he thought as clearly and loudly as some people spoke, announcing his passions and preoccupations of the moment with innocent vigor, and his sleep was a bright procession of vivid dreams that seldom lingered into wakefulness.

     He was happy now, or at least happier than she'd felt him to be in the two weeks they’d spent on Hybreasil. Mitsuko wondered how long that would last. He hadn’t noticed the missing cable…

     The Wild Goose was a blunt, gunmetal-gray wedge that seemed to crouch within her berth, as though waiting for a chance to spring clear of the tracery of catwalks and loading cranes surrounding her and leap back into the skies. But the tall intake vents for her atmospheric fans were slatted shut on her topsides, while the narrow ports that looked in on her underslung flight deck were empty and dark. She needed him, Moses thought, to put the breath back in her and restore the light of life and purpose to her eyes, and the prospect of a ship alive under his feet again was a glad thing.

     The gladness broke against a quick spark of irritation when the passenger hatch ignored his key, remaining obstinately closed. Moses cursed and slid back the cover plate to the manual override.

     “Spooky!” (the Captain's nickname for Mitsuko is "Spooky", since she is always skittish. Captain doesn't know this is because Mitsuko is a telepath, since psionic people are illegal)

     The narrow passenger deck corridor was empty and dark, lit only by the sunlight admitted by the open airlock and the scattered glows of the emergency lanterns.

     “Spooky!” Moses called again. “What the hell have you got the power off for?” Cursing, he turned and levered the airlock through its cycle again, cutting off the daylight. He stood blinking in the scarlet glow for a moment, then turned and started aft.

     Mitsuko nearly landed on top of him as she dropped down through the drive-room ladderway. She didn’t pause, but started forward toward the flight deck, with Callahan lumbering after her like a bear trying to follow a marten down its burrow.

     “Dammit, Spooky, what the hell have you got the power down for?”

     “I haven’t got the power down, the port’s got it down, Moses.” She stopped at the bulkhead before them, to throw her full forty-five kilos’ weight on the manual hatch lever. Callahan leaned past her and shoved it into place. The hatch slid open and she ducked in. “The port pulled our umbilical for nonpayment (the captain had fallen on hard times and was trying to economize). When I went to cut in the on-board power, the converter blew out. I told you that was going to happen.”

From THE SHATTERED STARS by Richard S. McEnroe (1984)

Ship to Shore

Electricity can flow both ways.

On December 17, 1929, the city of Tacoma, Washington was suffering from a drought. The city's hydroelectric dams did not have enough water to generate electricity. Tacoma was about to go dark. They begged President Herbert Hoover to help.

It just so happened that the aircraft carrier USS Lexington (CV 2) was being refurbished at the Puget Sound Navy Yard, right near Tacoma. The Lexington was dispatched to Tacoma's Baker Dock, was hooked up to the city's power grid, and used its steam turbines to generate power. The ship stayed at Baker Dock from December 17, 1929 to January 16, 1930, feeding the city 4 million kilowatt-hours of electricity. By mid-January enough snow had melted to power the hydroelectric dams and the Lexington could disconnect. The city was saved.

This is called "Ship to Shore Power for Humanitarian Purposes".

Dave Hinerman noted that in the 1950s and 60s the US Navy had at least one destroyer outfitted with additional generators to provide emergency power to shore installations and cities.

Note that if the ship supplying the power is using a nuclear reactor, it will suck cold water from the sea as reactor coolant. If the ship plant is using coal, oil, or other petrochemical, it will just need a smokestack. Meaning that a hypothetical nuclear-powered aircraft will have a hard time supplying a land-locked city with power unless there is a nearby lake to supply coolant.

In a RocketPunk future, any spacecraft with an onboard power source that does not depend upon the engines thrusting can do the same trick. This can come in handy if the planetary spaceport gets hit by a hurricane, a Belter asteroid colony suffers a failure of its nuclear reactor, or a settlement suffering from the Long Night does not have the spare parts or the repair skills to fix its power plant. A visiting spacecraft can save the day. This would be a useful capability to build into a Long Night Insurance Ship.

This can be tricky if the spacecraft's power reactor relies upon a heat radiator for cooling. Liquid droplet radiators do not like being used on a planet with significant gravity, and they are problematic on a planet with a windy atmosphere. The radiant heat can also damage anything that gets too close: other spacecraft, space stations, careless astronauts, industrial installations, etc. And when the ship is landed on a planet with an atmosphere, plumes of very hot air blowing around can be a problem.

A spacecraft with a Bimodal Nuclear Engine is especially suited to do Ship to Shore, since it is already set up to produce electricity.

I was thinking that a mobile emergency power plant would be a nice addition to the Thunderbirds, perhaps as Thunderbird 7. Or at least as a new detachable pod for Thunderbird 2. But I digress.


Dr. Rachel Pawling of the faculty of UCL Engineering Sciences notes that when conventional nuclear submarines do ship-to-shore with a coastal city in distress, there is sometimes a problem.

The generating capacity is usually there, but the problem is infrastructure and interconnectors.

Essentially the customer (harbour) is now acting as a supplier.

from a tweet by Dr. Rachel Pawling (2021)

Associate professor of naval architecture Dr. Nick Bradbeer notes that when conventional nuclear submarines do ship-to-shore with a coastal city in distress, there is sometimes another problem.

The problem is that most of a (typical) nuclear sub's reactor steam goes into direct drive turbines to run the shaft. The turbogenerators to produce electrical power are much smaller; far below the capacity of the reactor.

Turboelectric designs have existed.

from a tweet by Dr. Nick Bradbeer (2021)

Stanley Borowski's Bimodal NTR spacecraft has much the same problem. Under thrust the nuclear reactor is cranking out a whopping 335 megawatts. But when used bimodally as a power generator it is throttled down to only 110 kilowatts. This is mostly a matter of dealing with the waste heat.

Under thrust, the waste heat from the reactor at 335 megawatts is gotten rid of by the magic of open-cycle cooling: the propellant soaks up the heat and carries it out the exhaust nozzle. This adds zero penalty-mass to the ship's structure.

Lamentably, when the reactor is used as a power generator, open-cycle cooling cannot be used (the propellant would be expended in short order). Instead, a physical heat radiator is employed, which does add penalty-mass. Borowski cut the power budget to the bone with a measly 110 kilowatts, but even that needed 71 square meters of radiator.

The equivalent of Dr. Bradbeer's turboelectric design would be Borowski's Bimodal Hybrid NTR NEP. This has NTR rocket engines cooled by open-cycle cooling, but they are used only for the thrust-critical parts of the mission. For the rest it has lots and lots of heat radiators and a large electrical power plant used to feed an ion drive.

SHIP TO SHORE 1

This paper was originally intended to be a follow-on to my experiences as an engineer aboard commercial tankers. The original intent was to provide a description of World War II-built turboelectric Destroyer Escorts and to illustrate the commonality they shared with commercial T-2 Tanker power plants. In the process of preparing this post, it became apparent that it would be desirable to expand its scope to include a discussion of the experiences that the U.S. Navy had in the delivery of ship to shore electrical power for humanitarian assistance.

In general, the amount of ship to shore power that can be delivered by US naval vessels is limited by a number of factors, including installed generating plant capacity and the availability of topside shore power connections. By far, the majority of current naval vessel electrical distribution systems are three phase, 450 VAC, 60 Hz. Exceptions are nuclear carriers, the newest LHA and LHD types, and the DDG 1000 Class ships which have (or will have) 4160 VAC distribution systems. The new T-AKE 1 Class ships operated by MSC have 6600 VAC integrated diesel-electric power plants. As a general rule of thumb, it becomes necessary to go to higher voltages aboard ships with generating plants with a capacity of 10,000 kW or greater because of circuit breaker interrupting capacity and cable limitations.

The DDG 51 Flight IIA Class will be used as an example to illustrate existing limitations in generating plant capacity. Each of these ships is fitted with three gas turbine driven ship service generators (SSGTG), rated at 2500 kW, 450 VAC, 60 Hz (3000 kW on DDG 91 and follow-on ships). Using these ships as an example, this would appear to provide a total generating plant capacity of at least 7500 kW. However, there are several additional limitations that must be taken into account.

  1. Generators must never be intentionally loaded to more than 90% capacity.
  2. Due to circuit breaker limitations, only two sets may be operated continuously in parallel. The third set serves as a standby unit.

Given these limitations, the usable generating plant capacity aboard these ships is approximately 4500 kW (5400 kW on the later ships). In addition, ships must supply their own in-port services, which may be as much as 2500 kW or more. This only leaves a margin of about 2000 to 3000 kW of available excess power. Even that is a bit misleading, because the ships only had two topside shore power connections, each consisting of four cables, each rated at 400 amperes, giving a total of 3200 amperes through eight cables. Assuming a power factor of 0.8 and no more than 90% loading, this results in a total delivery capability of about 1800 kW. Coupled with the fact that many of the countries that could require humanitarian relief have 50 Hz distribution systems, these factors impose severe limitations on the ability of modern surface combatant ships to deliver shore power. Destroyer tenders (AD) and submarine tenders (AS) could deliver as much as 7000 kW at 450 VAC to ships alongside them. Only two submarine tenders remain in service as of 2014, USS Emory S. Land (AS 39), based in Diego Garcia, and USS Frank Cable (AS 40), based in Guam. Modern ships with integrated electric drive plants have high generating plant capacities. However, significant alterations would be required to make them capable of delivering large amounts of shore power.
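
(ed note: the "about 1800 kW" figure is easy to check with a minimal Python sketch, using only the ratings quoted above:

import math

# Deliverable shore power through a DDG 51 Flight IIA's topside connections.
cables         = 8        # two connections of four cables each
amps_per_cable = 400.0    # amperes
volts          = 450.0    # VAC, three-phase
power_factor   = 0.8
max_loading    = 0.9      # never load generators past 90%

total_amps  = cables * amps_per_cable                   # 3200 A
apparent_va = math.sqrt(3) * volts * total_amps         # three-phase volt-amperes
deliverable = apparent_va * power_factor * max_loading  # watts

print(round(deliverable / 1000), "kW")   # ~1796 kW, i.e. "about 1800 kW"

)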

The above limitations did not exist aboard older ships with turboelectric and diesel-electric plants because these ships had separate propulsion and ship service generating plants. It was possible to divert the bulk of the power from the main propulsion generators to shore at either 50 or 60 Hz provided that adequate cable reels were available. Some examples are discussed in the following paragraphs.

Approximately 440 Destroyer Escorts (DE) were built between 1943 and 1944. Ninety-five of them were converted to high-speed transports, and another seventy-eight were delivered to the United Kingdom under the Lend-Lease agreement, where they served as Captain Class frigates. The ships were divided into six classes and had four different propulsion plants including geared steam turbine, turboelectric, geared diesel, and diesel-electric systems.

One hundred and two ships of the Buckley (DE 51) Class and an additional twenty-two ships of the Rudderow (DE 224) Class had twin-screw turboelectric (TE) propulsion plants rated at 12,000 SHP. Maximum sustained speed was approximately twenty-four knots. A major reason for the use of turboelectric propulsion systems was limitations in reduction gear manufacturing capabilities during the war. Priority had to be given to manufacturing the double reduction gears required on destroyers, which had propulsion plants rated at 60,000 SHP. General Electric and Westinghouse manufactured the systems. They had many commonalities with the propulsion plants aboard the T-2 tankers described in a previous post. The machinery arrangement was similar to that aboard navy destroyers with alternating fire rooms and engine rooms. Each fire room contained a single D type boiler which produced superheated steam at a pressure of 450 PSI and a temperature of 750° F. Each engine room contained one main propulsion generator rated at 4600 kW, 2700 VAC, 93.3 Hz, 5400 RPM, one ship service turbo generator rated at 300 kW at 450 VAC/40 kW DC, and a 6000 SHP, 400 RPM main propulsion motor. The main propulsion control consoles were very similar in appearance to those on T-2 tankers. The ships had the capability of operating both main motors on a single main generator.

During World War II, a total of five ships of the Buckley Class and two British Captain Class frigates were converted into floating power stations for the purpose of supplying electrical power to shore in the event of a power outage. It is understood that a number of other ships of the class were recycled as floating power stations for coastal cities in Latin America under a program sponsored by the World Bank. However, no additional information is readily available concerning this program. A discussion of the services provided by the five Buckley Class ships is contained in the following paragraphs.

A major part of the conversion process consisted of the removal of torpedo tubes and installation of large cable reels located on the O1 Deck.

The floating power plants had a total generating plant capacity of approximately 8000 kW (estimated), 2300 VAC, 60 Hz, 0.8 Power Factor. This equates to a usable generating plant capacity of approximately 7200 kW taking into account the 90% load factor. 50 Hz power could be easily provided to locations where necessary. The only action required was to slow the main generators down from 3600 to 3000 RPM by the use of the governor control levers. This capability does not exist in any vessels currently in service.

USS Donnell (DE 56) was converted into a power barge in England in 1944 after a torpedo struck it during convoy duty. Damage was fairly extensive and propulsion power could not be readily restored. The ship was then towed to Cherbourg, France, where it supplied power for a period of time. This experiment was considered to be very successful. It resulted in the decision to convert the other vessels on this list into floating power plants.

USS Foss (DE 59) provided power to the city of Portland, Maine, in 1947-1948 during a severe drought and a number of forest fires. At the time, it was assigned to operational development duties along with its sister ship, the USS Maloy (DE 791). There is no record of Maloy ever being converted into a floating power plant. Foss later supplied shore power to various ports in Korea in 1950-1951.

USS Whitehurst (DE 634) and USS Wiseman (DE 667) supplied power to the city of Manila for several months in 1945. During that period, Wiseman also provided drinking water to Army facilities in the harbor area. Wiseman later supplied power to the city of Masan, South Korea in 1950. USS Marsh (DE 699) supplied power to the island of Kwajalein from May until September in 1946. It later supplied power to the cities of Masan and Pusan in 1950 during the Korean War.

USS Lexington (CV-2) and USS Saratoga (CV-3) entered service in 1928. Both ships were ahead of their time. They were fitted with turbo-electric propulsion systems rated at 180,000 SHP. The ships had four steam turbine driven main propulsion generators each rated at 35,200 kW, 5000 VAC. Unlike more modern installations, the plants were not integrated and ship service power was DC supplied by 6 separate generators, each rated at 750 kW, 240 Volts DC. Up until the early 1930s, the only use the U.S. Navy had made of AC was in the propulsion systems aboard the USS Langley (CV-1) and six battleships that entered service in the 1920s.

In 1929, Washington State suffered a drought that resulted in a loss of hydroelectric power to the city of Tacoma. The U.S. Navy sent Lexington, which had been in the shipyard at Bremerton, to Tacoma to provide power to the city. A considerable amount of coordination was required between the city and the ship in order to allow Lexington to provide power. The hookup consisted of twelve cables connected to circuit breakers and a bank of transformers located on the dock with a total rating of 20,000 kVA. The ship then provided a total of 4,520,960 kilowatt hours from one main propulsion generator from 17 December 1929 until 16 January 1930, at an average rating of 13,000 kW, until melting snow and rain brought the local reservoirs up to a level where normal power could be restored.

The US Army also had a Nuclear Power Program in the 1960s. As part of this program they converted an existing Liberty ship into the Sturgis (MH-1A), a floating nuclear power station. This involved the removal of the existing propulsion plant and installation of a pressurized water reactor in a 350-ton containment vessel. After several months of testing Sturgis was towed to the Panama Canal Zone, where it supplied 10,000 kW of power to operate the locks from 1968 through 1976 because a water shortage had curtailed hydroelectric power. Unfortunately, the cost of operation proved to be very high and Sturgis was retired in 1976 after the Army Reactor Program was discontinued. Sturgis was then defueled and placed into the James River Reserve Fleet.

References:

  1. NAVPERS 10864-C – Shipboard Electrical Systems, 1969
  2. Paper – Ship to Shore Power, US Navy Humanitarian Relief, Scott, 2006
  3. Transactions, SNAME, 1929
  4. NAVSEA Ship Information Book, AS39/40
  5. DDG 51 Flight IIA Electrical Plant Load Analysis
  6. NAVSOURCE
  7. USS Lexington (CV-2) report following supplying power to the City of Tacoma for a month, 1930.
SHIP TO SHORE 2

Background

While the author was training to become a US Navy Enlisted Reactor Operator, qualified operators repeatedly stated, “This sub could power a small city.” In a similar vein, it was proposed that US Navy ships should provide electrical power during the response to Hurricane Katrina in New Orleans. These off-the-cuff assessments prompted a more realistic question: is it feasible to power facilities ashore from a ship?

History

During World War II, there were seven destroyer escorts converted into Turbo-Electric Generators (TEG) specifically for the purpose of providing electrical power to shore facilities. They were the Donnell (DE-56), Foss (DE-59), Whitehurst (DE-634), Wiseman (DE-667), Marsh (DE-699), and two British lend-lease ships; Spragge (K-572, ex-DE-563) and Hotham (K-583 ex-DE-574). Data for these ships are sparse in general.

Consider the Wiseman, for which more data is available. This ship had oil fired boilers producing steam to turn turbine generators which in turn powered electric propulsion motors. This electric ship configuration is optimal for providing electric power ashore since all the power in the ship is already being converted to electric. The Wiseman had transformers and cable reels topside to deliver power at high voltages over relatively long distances. Wiseman powered the city of Manila during WWII and the port of Masan during the Korean War. Wiseman delivered 5,806,000 kWh to Manila over five and a half months, giving an average generation capability greater than 1.4 MW.

The US Army also used ship to shore power to power remote stations. One notable case is that of the Sturgis/MH-1A, a WWII-era Liberty ship equipped with a nuclear power plant used to provide power to the Panama Canal Zone from 1968 to 1975. The MH-1A power plant on the Sturgis generated 10 MW of electrical power, which allowed the canal locks to be operated more frequently.

Thus history shows that ships can provide power to the shore, if only in limited amounts, and using specialized ships.

Present

There are currently no US Navy ships designed specifically to provide power to the shore. They are however designed to be powered from the shore, and this capability could be used to act as a power source. For example, the author's ship, USS Key West (SSN-722), a Los Angeles class nuclear powered fast attack submarine, once received 'shore power' from a destroyer while moored alongside the destroyer anchored off Monaco. This allowed the labor-intensive nuclear reactor plant on the submarine to be shut down. The gas turbine generators on the destroyer required fewer watchstanders and had to run anyway to power the destroyer's own loads. This anecdotal evidence shows that power can be made to flow from at least one US Navy ship and conceivably could flow from most.

The capability to provide power can be evaluated by considering the ship as a load and assuming that whatever power it can draw, it can deliver. For USN ships smaller than carriers and amphibious ships, the unit of measure is the single shore power cable. These cables are rated to 400A at 450V 3 phase or 0.312MW assuming a unity power factor. Submarines and surface combatant ships typically can connect up to eight cables, yielding a total of 2.5MW. For a carrier, the shore power supply must deliver 21MVA at 4160V. Amphibious ships are presumably between these values. Without significant changes, current Navy ships could theoretically supply 2.5 to 21MW of electrical power to the shore. This again assumes generation capacity to match the ship as a load and also assumes this capacity is above that required to power the ship and its power plant.

What if more power is needed? More ships could be used, but there is also more power onboard each ship. This other power is the power for propulsion. Remembering the Wiseman, it was an ideal ship for supplying power because all the power of the boilers was first converted to electricity by turbine generators. Today's Navy ships are not 'all electric' and so a significant portion of the power onboard is dedicated to propulsion and is often coupled directly to the propeller shafts. Steam plants fired by oil or nuclear reactors offer a sort of middle ground. While the propulsion turbines are coupled to the shafts, the steam can be diverted upstream. In this scheme, high pressure steam would be piped out of the ship and used to drive a larger turbine generator. The spent low-energy steam and condensate would then be piped back into the ship and into the condensate system, closing the loop. Piping is not as forgiving or flexible as cabling; this would not be a trivial setup and is probably impossible for a submarine.

Considering the publicly available shaft horsepower ratings for the ship as the electrical power available, it is clear that much more power is in the hulls than is available through the shore power connections.

Table 1
Power Available From Steam Plant Ships
Ship type | Total Shaft Power (HP) | Electrical Power (MW)
Fast Attack Submarine | 35,000 | 26
Large Deck Amphib | 70,000 | 52
Carrier | 260,000 | 194

Note that most surface combatants are driven by gas turbine or diesel engines and their propulsion power cannot feasibly be extracted from the ship.

Future

The Navy is driving toward all electric ships in a case of history repeating. This is driven by the desire to access propulsion power to supply combat systems. As stated previously, all electric ships are ideal for providing power to shore since all their power is first converted to electricity. The future destroyer DD(X) is being designed as an all electric ship with two Rolls-Royce MT-30 gas turbine generators producing a total of 78MW of electrical power.

The future carrier CVN(X) will be nuclear powered and have a steam plant but will also have increased electrical generation (104MVA) to support launching planes using electrical power.

Neither DD(X) nor CVN(X) is designed to deliver power outside the hull, but it would be easier to export it as electrical current than as steam.

Loads

      To investigate the claim of powering a small city, a ‘rough order of magnitude’ (ROM) calculation was performed. The author’s most recent electrical utility bill was used to determine the average power of a house, and then this number was used to determine how many houses could be powered. The bill was for 1203 kWh over a 29-day period, giving an average load of 1.7 kW. Again, this is a ROM calculation and does not incorporate seasonal variations in power use nor the likelihood of reduced use in an emergency situation.

Using the existing shore to ship power capability, the submarine can power 1,500 nominal homes: more of a town than a small city. The carrier can power 12,000 houses and that is a small city.
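
(ed note: a minimal Python sketch of the same rough-order-of-magnitude arithmetic; the tables below round the results to 1,500 and 12,000:

# Rough-order-of-magnitude house counts from the utility-bill figure above.
kwh_billed, days = 1203, 29
house_kw = kwh_billed / (days * 24)            # ~1.7 kW average per house

for ship, shore_mw in (("Submarine", 2.5), ("Carrier", 21)):
    houses = shore_mw * 1000 / house_kw
    print(f"{ship}: ~{houses:,.0f} houses")    # ~1,446 and ~12,150

)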

Table 2
Powering Houses with Existing Ships Equipment
Ship type | Shore Power (MW) | Houses
Submarine | 2.5 | 1,500
Carrier | 21 | 12,000

If the steam plant of an amphib or a carrier were modified to increase electrical generation to match propulsion power, many more houses could be powered, equivalent to a medium city based on population only.

Table 3
Powering Houses with Steam Plants
Ship type | Steam Power (MW) | Houses
Amphib | 52 | 31,000
Carrier | 190 | 110,000

Lastly, if all the generation capability of future ship classes could be made available external to the hull, a large population could be supplied.

Table 4
Powering Houses with Future Ships
Ship type | Generation Power (MW) | Houses
DD(X) | 78 | 46,000
CVN(X) | 104 | 61,000

The US Navy is not chartered to act as a power utility, so it is not likely to power the shore except at forward military or disaster locations. In these cases, residential housing is not likely to be the first load supplied. Instead, hospitals and other vital infrastructure are likely to receive priority. This prioritization is important since a single hospital can be a significant load. Based on one report discussing emergency generation installation, a value of 2 MW per hospital was determined.

Using shore power, a submarine or surface combatant can power one hospital with a small surplus. This undermines the claim for a small city since few loads will be powered after the hospital. A carrier can power ten and a half hospitals, likely allowing some residential power after the vital infrastructure is supplied.

MH-1A

MH-1A was the first floating nuclear power station. Named Sturgis after General Samuel D. Sturgis, Jr., this pressurized water reactor built in a converted Liberty ship was part of a series of reactors in the US Army Nuclear Power Program, which aimed to develop small nuclear reactors to generate electrical and space-heating energy primarily at remote, relatively inaccessible sites. Its designation stood for mobile, high power. After its first criticality in 1967, MH-1A was towed to the Panama Canal Zone, which it supplied with 10 MW of electricity from October 1968 to 1975. Its dismantling began in 2014 and was completed in March 2019.

Design

The MH-1A was designed as a towed craft because it was expected to stay anchored for most of its life, making it uneconomical to keep the ship's own propulsion system.

It contained a single-loop pressurized water reactor, in a 350-ton containment vessel, using low enriched uranium (4% to 7% uranium-235) as fuel.

The MH-1A had an elaborate analog-computer-powered simulator installed at Fort Belvoir. The MH-1A simulator was obtained by Memphis State University Center for Nuclear Studies in the early 1980s, but was never restored or returned to operational service. Its fate is unknown after the Center for Nuclear Studies closed.

Panama Canal Zone, 1968–1976

After testing at Fort Belvoir for five months starting in January 1967, Sturgis was towed to the Panama Canal Zone. The reactor supplied 10 MW (13,000 hp) of electricity to the Panama Canal Zone from October 1968 to 1975.

A water shortage in early 1968 jeopardized both the efficient operation of the Panama Canal locks and the production of hydroelectric power for the Canal Zone. Vast amounts of water were required to operate the locks and the water level on Gatun Lake fell drastically during the December-to-May dry season, which necessitated curtailment of operations at Gatun Hydroelectric Station.

The ship was moored in Gatun Lake, between the Gatun Locks and the Chagres dam spillway. Beginning in October 1968 the 10 MW electrical power produced by the MH-1A plant aboard the Sturgis allowed it to replace the power from the Gatun Hydroelectric Station, which freed the lake water for navigation use. To help out further, the Andrew J. Weber, a diesel-fueled power barge of 20 MW capacity, was deployed to the Canal Zone in November 1968. These two barges not only contributed to meeting the Canal Zone’s power requirements, but also made possible the saving of vast quantities of water that otherwise would have been needed to operate the hydroelectric power station. The Corps of Engineers estimates that over one trillion gallons were saved (or, rather, freed up) between October 1968 and October 1972 – enough to permit fifteen additional ships to pass through the locks of the canal each day.

After one year of operations in the Canal Zone, the MH-1A reactor had to be refueled, a process which took one week (17–25 October 1969), according to a 1969 Corps of Engineers report. According to a 2001 report by the Federation of American Scientists, the MH-1A reactor had a total of five cores during its operational life. It used low-enriched uranium (LEU) in the range of 4 to 7 %, with a total amount of uranium-235 supplied being 541.4 kilograms (for the five cores).

The Sturgis was eventually replaced by two 21 MW Hitachi turbines, one on the Pacific side of the isthmus and one on the Atlantic side.

From the Wikipedia entry for MH-1A

Power Storage

Often the power plant generates more power than is currently needed. A spacecraft cannot afford to throw the excess power away; it has to be stored for later use. This is analogous to Terran solar power plants: they don't work at night, so you have to store some power by day.

Energy Transport Mechanism

There are a couple of instances where people make the mistake of labeling something a "power source" when actually it is an "energy transport mechanism." The most common example is hydrogen. Let me explain.

In the so-called "hydrogen economy", proponents point out how hydrogen is a "green" fuel, unlike nasty petroleum or gasoline. Burn gasoline and in addition to energy you also produce toxic air pollution. Burn hydrogen and the only additional product is pure water.

The problem is they are calling the hydrogen a fuel, which it isn't.

While there do exist petroleum wells, there ain't no such thing as a hydrogen well. You can't find hydrogen just lying around somewhere, the stuff is far too reactive. Hydrogen has to be generated by some other process, which consumes energy (such as electrolysing water using electricity generated by a coal-fired power plant). Not to mention the energy cost of compressing the hydrogen into liquid, transporting the liquid hydrogen in a power-hungry cryogenically cooled tank, and the power required to burn it and harvest electricity.

This is why hydrogen is not a fuel, it is an energy transport mechanism. It is basically being used to transport the energy from the coal-fired power plant into the hydrogen burning automobile. Or part of the energy, since these things are never 100% efficient.

In essence, the hydrogen is filling much the same role as the copper power lines leading from a power plant to a residential home. It is transporting the energy from the plant to the home. Or you can look at the hydrogen as sort of a rechargeable battery, for example as used in a regenerative fuel cell. But one with rather poor efficiency.
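
To see just how much gets lost in transport, multiply the stage efficiencies together. A toy Python sketch; every stage efficiency below is an illustrative guess, not a measured value:

# Illustrative round-trip efficiency of hydrogen as an energy carrier.
# Every stage efficiency here is a guessed, representative number.
stages = {
    "electrolysis":          0.70,
    "liquefaction":          0.70,
    "storage and transport": 0.95,
    "fuel cell":             0.55,
}

round_trip = 1.0
for stage, efficiency in stages.items():
    round_trip *= efficiency

print(f"{round_trip:.0%} of the power plant's energy survives the trip")  # ~26%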

The main example from science fiction is antimatter "fuel." Unless the science fiction universe contains antimatter mines, it is an energy transport mechanism with a truly ugly efficiency.

THE ULTIMATE WEAPON

Buck Kendall has invented a sort of super-battery that will store huge amounts of electricity with incredible efficiency. It stores the power in pools of mercury.

"That's it, Tom. I wanted to show you first what we have, and why I wanted all that mercury. Within three weeks, every man, woman and child in the system will be clamoring for mercury metal. That's the perfect accumulator." Quickly he demonstrated the machine, charging it, and then discharging it. It was better than 99.95% efficient on the charge, and was 100% efficient on the discharge.

"Physically, any metal will do. Technically, mercury is best for a number of reasons. It's a liquid. I can, and do it in this, charge a certain quantity, and then move it up to the storage tank. Charge another pool, and move it up. In discharge, I can let a stream flow in continuously if I required a steady, terrific drain of power without interruption. If I wanted it for more normal service, I'd discharge a pool, drain it, refill the receiver, and discharge a second pool. Thus, mercury is the metal to use.

"Do you see why I wanted all that metal?"

"I do, Buck — Lord, I do," gasped Faragaut. "That is the perfect power supply."

"No, confound it, it isn't. It's a secondary source. It isn't primary. We're just as limited in the supply of power as ever — only we have increased our distribution of power."

From THE ULTIMATE WEAPON by John W. Campbell, jr. (1966)

Batteries

What is needed are so-called "secondary" batteries, commonly known as "rechargeable" batteries. If the batteries are not rechargeable then they are worthless for power storage. As you probably already figured out, "primary" batteries are the non-rechargeable kind; like the ones you use in your flashlight until they go dead, then throw into the garbage.

Current rechargeable batteries are heavy, bulky, vulnerable to the space environment, and have a risk of bursting into flame. Just ask anybody who had their laptop computer unexpectedly do an impression of an incendiary grenade.

Nickel-Cadmium and Nickel-Hydrogen rechargeables have a specific energy of 24 to 35 Wh/kg (0.086 to 0.13 MJ/kg), an energy density of 0.01 to 0.08 Wh/m3, and an operating temperature range of -5 to 30°C. They have a service life of more than 50,000 recharge cycles, and a mission life of more than 10 years. Their drawbacks are being heavy, bulky, and a limited operating temperature range.

Lithium-Ion rechargeables have a specific energy of 100 Wh/kg (0.36 MJ/kg), an energy density of 0.25 Wh/m3, and an operating temperature range of -20 to 30°C. They have a service life of about 400 recharge cycles, and a mission life of about 2 years. Their drawbacks are the pathetic service and mission life.
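
Sizing a battery bank from these figures is a single division. A minimal sketch in Python; the 10 kWh requirement and the mid-range 30 Wh/kg nickel figure are just examples:

# Battery bank mass for a hypothetical 10 kWh storage requirement,
# using the specific energies quoted above.
requirement_wh = 10_000

for chemistry, wh_per_kg in (("Nickel-Hydrogen (30 Wh/kg)", 30),
                             ("Lithium-Ion (100 Wh/kg)", 100)):
    print(f"{chemistry}: {requirement_wh / wh_per_kg:,.0f} kg")
# ~333 kg of Nickel-Hydrogen versus 100 kg of Lithium-Ion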

Flywheels

A flywheel is a rotating mechanical device that is used to store rotational energy. In a clever "two functions for the mass-price of one" bargain a flywheel can also be used as a momentum wheel for attitude control. NASA adores these bargains because every gram counts.

Flywheels have a theoretical maximum specific energy of 2,700 Wh/kg (9.7 MJ/kg). They can quickly deliver their energy, can be fully discharged repeatedly without harm, and have the lowest self-discharge rate of any known electrical storage system. NASA is not currently using flywheels, though they did have a prototype for the ISS that had a specific energy of 30 Wh/kg (0.11 MJ/kg).

SUPER ROTORS

Previously I have tweeted on super-strong carbon nanotube fibres for use in energy storing flywheels, based on this News story.

First, a review of the underlying physics.

From basic rotational physics, we can describe the flywheel rotor as a solid cylinder of even composition and constant density ρ. For any rotating object the important figures of merit are the Moment of Inertia I and the angular velocity ω. For a cylinder of mass m, length l and circular radius r the Moment of Inertia is:

I = ½ m r² (where m = ρ π r² l)

Rotational Kinetic Energy is:

E = ½ I ω² = ¼ m r² ω²

Material strength limits the flywheel rotor’s performance. Stress in the flywheel’s material is from the centrifugal reaction force that is acting to explode the rotor. The rotor material’s molecular structure must counter that with its tensile strength, the force that the material exerts on itself to keep it together. In the case of the purported nanotube fibre its internal strength is measured as upwards of 80 billion pascals or 80 gigapascals (GPa). Steels typically have 0.25 GPa tensile strength, so the nanotube material is 320 times stronger.

In a spinning rotor the stress to be countered by material strength at its maximum radius is:

σ = ρ ω² r²

The maximum the rotor can safely spin is when that stress equals its tensile strength. Past that point the material will eventually ‘fail’, pulling itself apart violently due to all its kinetic energy, likely vapourising it in the process. To operate safely the rotor should be run at a maximum of some fraction of that limit. A safety margin of 50% is considered reasonable, allowing wiggle room for fluctuations. Thus the maximum operating stress should be about 2/3 of the tensile strength – in this example 2/3 × 80 GPa ≈ 54 GPa.

Notice that the stress and the rotational kinetic energy look very similar. In fact their relationship is simply:

E = ¼ σ V (where V = m/ρ is the rotor volume)

This allows the energy Figure of Merit, the Specific Energy Density or stored energy per unit mass, to be derived as:

E/m = σ / (4 ρ)

For the carbon nanotube material, with a density of about 1,300 kg/m3, and an operating maximum stress of 54 GPa, that means a specific energy density of 10 MJ/kg.

Consider the power storage needs of the Starshot interstellar sail, which masses 2 grams and cruises to Alpha Centauri at 0.25 c. About 60 terajoules per Starshot is needed, expended over about 20 minutes. Assuming near perfect conversion from rotational energy to laser power, the mass of spinning rotors needed per shot is about 6,000 tonnes. This can be an arrangement of multiple flywheels, hooked up to a massive solar array farm or a high efficiency nuclear reactor, that can be powered up over several hours.
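
(ed note: a minimal Python sketch of the arithmetic above; the only inputs are the fibre strength and density quoted above and the 60 terajoule Starshot energy:

# Specific energy of the nanotube rotor and the rotor bank per Starshot,
# from E/m = sigma / (4 * rho) derived above.
tensile_pa = 80e9       # fibre tensile strength, Pa
rho        = 1300.0     # fibre density, kg/m^3
sigma_op   = (2 / 3) * tensile_pa            # operating stress, ~54 GPa

specific_energy = sigma_op / (4 * rho)       # J/kg
print(f"{specific_energy / 1e6:.1f} MJ/kg")  # ~10.3 MJ/kg

shot_energy = 60e12     # J per Starshot launch
rotor_kg = shot_energy / specific_energy     # assumes lossless conversion
print(f"{rotor_kg / 1000:,.0f} tonnes")      # ~5,850, call it 6,000

)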

From SUPER ROTORS TO POWER STARSHOTS by Adam Crowl (2020)

Regenerative Fuel Cells

A "regenerative" or "reverse" fuel cell is one that saves the water output, and uses a secondary power source (such as a solar power array) to run an electrolyser to split the water back into oxygen and hydrogen. This is only worthwhile if the mass of the secondary power source is low compared to the mass of the water. But it is attractive since most life support systems are already going to include electrolysers anyway.

In essence the secondary power source is creating fuel-cell fuel as a kind of battery to store power. It is just that a fuel cell is required to extract the power from the "battery."

Currently there exist no regenerative fuel cells that are space-rated. The current goal is for such a cell with a specific energy of up to 1,500 Wh/kg (5.4 MJ/kg), a charge/discharge efficiency up to 70%, and a service life of up to 10,000 hours.
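
If cells hitting those goals ever materialize, sizing one will be trivial. A minimal Python sketch; the 1 MWh storage requirement is an arbitrary example:

# Sizing a goal-spec regenerative fuel cell (1,500 Wh/kg, 70% round-trip
# efficiency) for an arbitrary 1 MWh storage requirement.
stored_wh = 1_000_000

cell_kg   = stored_wh / 1500    # mass of the cell itself: ~667 kg
charge_wh = stored_wh / 0.70    # energy the solar array must supply: ~1.43 MWh

print(f"{cell_kg:,.0f} kg of cell, needing {charge_wh / 1e6:.2f} MWh to charge")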

Superconducting magnetic energy storage

SUPERCONDUCTING MAGNETIC ENERGY STORAGE
Superconducting Magnetic Energy Storage
Specific energy: 4–40 kJ/kg (0.004–0.04 MJ/kg) (1–11 Wh/kg)
Energy density: less than 40 kJ/L
Specific power: 10–100,000 kW/kg
Charge/discharge efficiency: 95%
Self-discharge rate: >0% at 4 K; 100% at 140 K
Cycle durability: unlimited cycles

Superconducting Magnetic Energy Storage (SMES) systems store energy in the magnetic field created by the flow of direct current in a superconducting coil which has been cryogenically cooled to a temperature below its superconducting critical temperature.

A typical SMES system includes three parts: superconducting coil, power conditioning system and cryogenically cooled refrigerator. Once the superconducting coil is charged, the current will not decay and the magnetic energy can be stored indefinitely.

The stored energy can be released back to the network by discharging the coil. The power conditioning system uses an inverter/rectifier to transform alternating current (AC) power to direct current or convert DC back to AC power. The inverter/rectifier accounts for about 2–3% energy loss in each direction. SMES loses the least amount of electricity in the energy storage process compared to other methods of storing energy. SMES systems are highly efficient; the round-trip efficiency is greater than 95%.

Due to the energy requirements of refrigeration and the high cost of superconducting wire, SMES is currently used for short duration energy storage. Therefore, SMES is most commonly devoted to improving power quality.


Low-temperature versus high-temperature superconductors

Under steady state conditions and in the superconducting state, the coil resistance is negligible. However, the refrigerator necessary to keep the superconductor cool requires electric power and this refrigeration energy must be considered when evaluating the efficiency of SMES as an energy storage device.

Although the high-temperature superconductor (HTSC) has higher critical temperature, flux lattice melting takes place in moderate magnetic fields around a temperature lower than this critical temperature. The heat loads that must be removed by the cooling system include conduction through the support system, radiation from warmer to colder surfaces, AC losses in the conductor (during charge and discharge), and losses from the cold–to-warm power leads that connect the cold coil to the power conditioning system. Conduction and radiation losses are minimized by proper design of thermal surfaces. Lead losses can be minimized by good design of the leads. AC losses depend on the design of the conductor, the duty cycle of the device and the power rating.

The refrigeration requirements for HTSC and low-temperature superconductor (LTSC) toroidal coils for the baseline temperatures of 77 K, 20 K, and 4.2 K increase in that order. The refrigeration requirement here is defined as the electrical power to operate the refrigeration system. As the stored energy increases by a factor of 100, refrigeration cost only goes up by a factor of 20. Also, the savings in refrigeration for an HTSC system are larger (by 60% to 70%) than for an LTSC system.

From the Wikipedia entry for
SUPERCONDUCTING MAGNETIC ENERGY STORAGE
LIMITS ON SUPERCONDUCTING BATTERIES

There are two significant limits.


First is the force trying to make the superconductor explode.

You can consider that the energy of a persistent supercurrent circulating through a superconductor is stored in the magnetic field it produces. The best design is thus to wrap your superconducting wire into a solenoid (or inductor, or electromagnet). To avoid annoying effects from the extremely strong field leaking out of the end, wrap the ends of the solenoid around so they join, giving a toroidal (or doughnut shaped) configuration. Now the field will act to maintain the current that produces it, and can induce strong "voltages" (technically an electromotive force, or EMF, but for practical purposes you can treat it as a voltage from a battery) to drive the current through any load you apply to it.

But now you have a problem. The current produces the field, and you need the field to maintain the current. But the field also exerts a force on the current, pushing the current-carrying loops apart and trying to expand them. For high currents and strong field (what you get when you are storing lots of energy), these forces can be high enough to rip matter apart and make your superconductive "battery" explode.

The way to avoid this is to support the superconductive wire with a very strong backing material that holds it in place. The upper limit on the energy storage per unit weight comes down, ultimately, to the strength of the chemical bonds that hold your backing material together. The best you can do here is use some strongly-bound light element. The carbon-carbon chemical bond is going to be ideal. So a carbon super-material like carbon nanotubes or graphene will give you the best energy per weight. The theoretical upper limit is around 40 to 50 MJ/kg (11,000 to 14,000 Wh/kg). Of course, a power storage unit energized up to this limit will be on the verge of failure, and failure means exploding with ten times its weight of TNT (4.2 MJ/kg). So throw in some engineering safety factors of 2 or 3 (25 to 17 MJ/kg or 7,000 to 5,000 Wh/kg).
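
(ed note: the same arithmetic in Python form; the 50 MJ/kg input is the top of the 40 to 50 MJ/kg range quoted above:

# Safe specific energy of a carbon-backed superconducting storage loop.
theoretical_limit = 50.0    # MJ/kg, top of the quoted 40-50 MJ/kg range
tnt               = 4.2     # MJ/kg, energy density of TNT

for safety_factor in (2, 3):
    safe = theoretical_limit / safety_factor
    print(f"safety factor {safety_factor}: {safe:.0f} MJ/kg "
          f"({safe * 1e6 / 3600:,.0f} Wh/kg)")
# safety factor 2: 25 MJ/kg (6,944 Wh/kg)
# safety factor 3: 17 MJ/kg (4,630 Wh/kg)

# a loop charged right up to the limit fails with roughly ten times
# its own mass in TNT:
print(f"~{theoretical_limit / tnt:.0f}x its mass in TNT")   # ~12x at the top

)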


The other limit is that high enough magnetic fields will shut down a superconductor. This is called the critical field: above it, the substance goes back to being a normal conductor instead of a superconductor (with the subsequent loss of all of your energy to resistive heating, and probably exploding, again). This puts an upper limit on your energy per volume rather than energy per mass. I'm not aware of any theoretical upper limit on what the critical field could be, so you can probably adjust this to whatever you need. Note that you need a safety factor here, too, since the critical field decreases as temperature increases. You don't want your power supply to turn into a bomb just because the air conditioning starts acting up.

From a Google Plus thread entry by Luke Campbell (2017)

Kerr-Newman black hole

The popular conception of a black hole is that it sucks everything in, and nothing gets out. However, it is theoretically possible to extract energy from a black hole, for certain values of "from."

And by the way, there appears to be no truth to the rumor that Russian astrophysicists use a different term, since "black hole" in the Russian language has a scatological meaning. It's an urban legend, I don't care what you read in Dragon's Egg.

For incredibly dense objects with an escape velocity higher than the speed of light, objects which warp the very fabric of space around them, black holes are surprisingly simple. Due to their very nature they have only three characteristics: mass, spin (angular momentum), and electric charge. All the other characteristics got crushed away (well, technically they also have a magnetic moment, but that is uniquely determined by the other three). All black holes have mass, but some have zero spin and others have zero charge.

There are four types of black holes. If it only has mass, it is a Schwarzschild black hole. If it has mass and charge but no spin, it is a Reissner-Nordström black hole. If it has mass and spin but no charge it is a Kerr black hole. And if it has mass, charge and spin it is a Kerr-Newman black hole. Since practically all natural astronomical objects have spin but no charge, all naturally occurring black holes are Kerr black holes, the others do not exist naturally. In theory one can turn a Kerr black hole into a Kerr-Newman black hole by shooting charged particles into it for a few months, say from an ion drive or a particle accelerator.

From the standpoint of extracting energy, the Kerr-Newman black hole is the best kind, since it has both spin and charge. In his The McAndrew Chronicles, Charles Sheffield calls them "kernels", actually "Ker-N-els", shorthand for Kerr-Newman black holes.

The spin acts as a super-duper flywheel. You can add or subtract spin energy to the Kerr-Newman black hole by using the Penrose process. Just don't extract all the spin, or the blasted thing turns into a Reissner-Nordström black hole and becomes worthless. The attractive feature is that this process is far more efficient than nuclear fission or thermonuclear fusion. And the stored energy doesn't leak away either.

The electric charge is so you can hold the thing in place using electromagnetic fields. Otherwise there is no way to prevent it from wandering through your ship and gobbling it up.

The assumption is that Kerr-Newman black holes of manageable size can be found naturally in space, already spun up and full of energy. If not, they can serve as a fantastically efficient energy transport mechanism.

Primordial black holes
R (am) | M (Mt) | kT (GeV) | P (PW) | P/c² (g/sec) | L (yrs)
0.16 | 0.108 | 98.1 | 5519 | 61400 | ≲0.04
0.3 | 0.202 | 52.3 | 1527 | 17000 | ≲0.12
0.6 | 0.404 | 26.2 | 367 | 4090 | 1
0.9 | 0.606 | 17.4 | 160 | 1780 | 3.5
1.0 | 0.673 | 15.7 | 129 | 1430 | 5
1.5 | 1.01 | 10.5 | 56.2 | 626 | 16—17
2.0 | 1.35 | 7.85 | 31.3 | 348 | 39—41
2.5 | 1.68 | 6.28 | 19.8 | 221 | 75—80
2.6 | 1.75 | 6.04 | 18.3 | 204 | 85—91
2.7 | 1.82 | 5.82 | 16.9 | 189 | 95—102
2.8 | 1.89 | 5.61 | 15.7 | 175 | 106—114
2.9 | 1.95 | 5.41 | 14.6 | 163 | 118—127
3.0 | 2.02 | 5.23 | 13.7 | 152 | 130—140
5.8 | 3.91 | 2.71 | 3.50 | 38.9 | 941—1060
5.9 | 3.97 | 2.66 | 3.37 | 37.5 | 991—1117
6.0 | 4.04 | 2.62 | 3.26 | 36.2 | 1042—1177
6.9 | 4.65 | 2.28 | 2.43 | 27.1 | 1585—1814
7.0 | 4.71 | 2.24 | 2.36 | 26.2 | 1655—1897
10.0 | 6.73 | 1.57 | 1.11 | 12.3 | 4824—5763

Alert readers will have noticed the term "manageable size" above. It is impractical to use a black hole with a mass comparable to the Sun. Your ship would need an engine capable of moving something as massive as the Sun, and the gravitational attraction of the black hole would wreck the solar system. So you just use a smaller mass black hole, right? Naturally occurring small black holes are called "Primordial black holes."

Well, there is a problem with that. In 1975 legendary physicist Stephen Hawking discovered the shocking truth that black holes are not black (well, actually the initial suggestion was from Dr. Jacob Bekenstein). They emit Hawking radiation, for reasons so complicated I'm not even going to try to explain them to you (go ask Google). The bottom line is that the smaller the mass of the black hole, the more deadly radiation it emits. The radiation will be the same as a "black body" with a temperature of:

6 × 10⁻⁸ / M kelvins

where "M" is the mass of the black hole where the mass of the Sun equals one. The Sun has a mass of about 1.9891 × 1030 kilograms.

Jim Wisniewski created an online Hawking Radiation Calculator to do the math for you.
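
If you just want the temperature, the formula is one line of Python. A minimal sketch; the 6 × 10⁻⁸ coefficient is the rounded value used above (the precise value is about 6.17 × 10⁻⁸ kelvin-solar-masses):

M_SUN = 1.9891e30   # kg

def hawking_temperature(mass_kg):
    """Black-body temperature of a black hole, in kelvins."""
    return 6e-8 * M_SUN / mass_kg

print(f"{hawking_temperature(1e14):.1e} K")   # 100 billion tons: ~1.2e9 K

# expressed as a particle energy, for comparison with the table above:
BOLTZMANN_GEV_PER_K = 8.617e-14
print(f"{hawking_temperature(0.673e9) * BOLTZMANN_GEV_PER_K:.1f} GeV")
# ~15.3 GeV for a 0.673-million-ton hole (the table says 15.7, which
# comes from the unrounded coefficient)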

In The McAndrew Chronicles Charles Sheffield hand-waved an imaginary force field that somehow contained all the deadly radiation. One also wonders if there is some way to utilize the radiation to generate power.

In the table:

  • R is the black hole's radius in attometers (units of one-quintillionth or 10⁻¹⁸ of a meter). A proton has a diameter of 1000 attometers.
  • M is the mass in millions of metric tons. One million metric tons is about the mass of three Empire State buildings.
  • kT is the Hawking temperature in GeV (units of one billion electron volts).
  • P is the estimated total radiation output power in petawatts (units of one quadrillion watts). 1—100 petawatts is the estimated total power output of a Kardashev type 1 civilization.
  • P/c² is the estimated mass-leakage rate in grams per second.
  • L is the estimated life expectancy of the black hole in years. 0.04 years is about 15 days. 0.12 years is about 44 days.

Table is from Are Black Hole Starships Possible?, thanks to magic9mushroom for this link.

"I think Earth's worst problems are caused by the power shortage," he said. "That affects everything else. Why doesn't Earth use the kernels for power, the way that the USF does?"

"Too afraid of an accident," replied McAndrew. His irritation evaporated immediately at the mention of his specialty. "If the shields ever failed, you would have a Kerr-Newman black hole sitting there, pumping out a thousand megawatts—mostly as high-energy radiation and fast particles. Worse than that, it would pull in free charge and become electrically neutral. As soon as that happened, there'd be no way to hold it electromagnetically. It would sink down and orbit inside the Earth. We couldn't afford to have that happen."

"But couldn't we use smaller kernels on Earth?" asked Yifter. "They would be less dangerous."

McAndrew shook his head. "It doesn't work that way. The smaller the black hole, the higher the effective temperature and the faster it radiates. You'd be better off with a much more massive black hole. But then you've got the problem of supporting it against Earth's gravity. Even with the best electromagnetic control, anything that massive would sink down into the Earth."

"I suppose it wouldn't help to use a nonrotating, uncharged hole, either," said Yifter. "That might be easier to work with."

"A Schwarzschild hole?" McAndrew looked at him in disgust. "Now, Mr. Yifter, you know better than that." He grew eloquent. "A Schwarzschild hole gives you no control at all. You can't get a hold of it electromagnetically. It just sits there, spewing out energy all over the spectrum, and there's nothing you can do to change it—unless you want to charge it and spin it up, and make it into a kernel. With the kernels, now, you have control."

I tried to interrupt, but McAndrew was just getting warmed up. "A Schwarzschild hole is like a naked flame," he went on. "A caveman's device. A kernel is refined, it's controllable. You can spin it up and store energy, or you can use the ergosphere to pull energy out and spin it down. You can use the charge on it to move it about as you want. It's a real working instrument—not a bit of crudity from the Dark Ages."

from THE McANDREW CHRONICLES by Charles Sheffield (1983)

In this model of the interaction of a miniature black hole with the vacuum, the black hole emits radiation and particles, as though it had a temperature. The temperature would be inversely proportional to the mass of the black hole. A Sun-sized black hole is very cold, with a temperature of about a millionth of a degree above absolute zero. When the mass of the black hole is about a hundred billion tons (the mass of a large asteroid), the temperature is about a billion degrees.

(ed note: one hundred billion tons is 100,000 million tons or 5 × 10⁻¹⁷ solar masses. 6 × 10⁻⁸ / 5 × 10⁻¹⁷ = 1,200,000,000 kelvins)

According to Donald Page, who carried out lengthy calculations on the subject, such a hole should emit radiation that consists of approximately 81% neutrinos, 17% photons, and 2% gravitons. When the mass becomes significantly less than a hundred billion tons, the temperature increases until the black hole is hot enough to emit electrons and positrons as well as radiation. When the mass becomes less than a billion tons (a one kilometer diameter asteroid), the temperature now approaches a trillion degrees and heavier particle pairs, like protons and neutrons are emitted. The size of a black hole with a mass of a billion tons is a little smaller than the nucleus of an atom. The black hole is now emitting 6000 megawatts of energy, the output of a large power plant. It is losing mass at such a prodigious rate that its lifetime is very short and it essentially "explodes" in a final burst of radiation and particles.

(ed note: one billion tons is 1000 million tons. An atomic nucleus is about 1750 to 15,000 attometers in diameter.)


If it turns out that small black holes really do exist, then I propose that we go out to the asteroid belt and mine the asteroids for the black holes that may be trapped in them. If a small black hole was in orbit around the Sun in the asteroid belt region, and it had the mass of an asteroid, it would be about the diameter of an atom. Despite its small size, the gravity field of the miniature black hole would be just as strong as the gravity field of an asteroid and if the miniature black hole came near another asteroid, the two would attract each other. Instead of colliding and fragmenting as asteroids do, however, the miniature black hole would just penetrate the surface of the regular asteroid and pass through to the other side. In the process of passing through, the miniature black hole would absorb a number of rock atoms, increasing its weight and slowing down slightly. An even more drastic slowing mechanism would be the tides from the miniature black hole. They would cause stresses in the rock around the line of penetration and fragment the rock out to a few micrometers away from its path through the asteroid. This would cause further slowing.

After bouncing back and forth through the normal matter asteroid many times, the miniature black hole would finally come to rest at the center of the asteroid. Now that it is not moving so rapidly past them, the miniature black hole could take time to absorb one atom after another into its atom-sized body until it had dug itself a tiny cavity at the center of the asteroid. With no more food available, it would stop eating, and sit there and glow warmly for a few million years. After years of glowing its substance away, it would get smaller. As it got smaller it would get hotter since the temperature rises as the mass decreases. Finally, the miniature black hole would get hot enough to melt the rock around it. Drops of melted rock would be pulled into the miniature black hole, adding to its mass. As the mass of the black hole increased, the temperature would decrease. The black hole would stop radiating, the melted rock inside the cavity would solidify, and the process would repeat itself many centuries later. Thus, although a miniature black hole left to itself has a lifetime that is less than the time since the Big Bang, there could be miniature black holes with the mass of an asteroid, being kept alive in the asteroid belt by a symbiotic interaction with an asteroid made of normal matter.

To find those asteroids that contain miniature black holes, you want to look for asteroids that have anomalously high temperatures, lots of recent fracture zones, and anomalously high density. Those with a suspiciously high average density have something very dense inside. To obtain a measure of the density, you need to measure the volume and the mass. It is easy enough to get an estimate of the volume of the host asteroid with three pictures taken from three different directions. It is difficult to measure the mass of an object in free fall. One way is to go up to it with a calibrated rocket engine and push it. Another is to land on it with a sensitive gravity meter. There is, however, a way to measure the mass of an object at a distance without going through the hazard of a rendezvous. To do this, you need to use a mass detector or gravity gradiometer.
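(ed note: a minimal sketch of the remote mass measurement, assuming the gravity gradiometer reads the radial tidal gradient Γ = 2GM/r³ at a known standoff distance r. Real gradiometers measure a full tensor, and the numbers below are illustrative only.)

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def mass_from_gradient(gamma_s2, r_m):
    # invert Gamma = 2*G*M/r^3 for the source mass M
    return gamma_s2 * r_m**3 / (2 * G)

def density(gamma_s2, r_m, volume_m3):
    # combine the gradiometer mass with the photo-derived volume
    return mass_from_gradient(gamma_s2, r_m) / volume_m3

# A 2.67e-5 s^-2 gradient at 1 km standoff implies ~2e14 kg. Packed into a
# 250-meter-radius body (~6.5e7 m^3), that is ~3e6 kg/m^3 — vastly denser
# than ordinary rock (~2500 kg/m^3), and thus suspicious.
print(mass_from_gradient(2.67e-5, 1.0e3))
print(density(2.67e-5, 1.0e3, 6.5e7))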


Once you have found a suspiciously warm asteroid that seems awfully massive for its size, then to extract the miniature black hole, you give the surface of the asteroid a strong shove and push the asteroid out of the way. The asteroid will shift to a different orbit, and where the center of the asteroid used to be, you will find the miniature black hole. The black hole will be too small to see, but if you put an acoustic detector on the asteroid you will hear the asteroid complaining as the black hole comes to the surface. Once the black hole has left the surface you can monitor its position and determine its mass with a mass detector.


The next step in corralling the invisible black maverick is to put some electric charge on it. This means bombarding the position of the miniature black hole with a focused beam of ionized particles until the black hole has captured enough of them to have a significant charge to mass ratio. The upper limit will depend upon the energy of the ions. After the first ion is absorbed, the black hole will have a charge and will have a tendency to repel the next ion. Another upper limit to the amount of charge you can place on a black hole is the rate at which the charged black hole pulls opposite charges out of the surrounding space. You can keep these losses low, however, by surrounding the black hole with a metal shield.
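(ed note: there is also an absolute ceiling: a black hole cannot hold more than the extremal Reissner–Nordström charge, Q_max = M√(4πε₀G). The practical limits Forward describes kick in far below this bound. A Python sketch with standard SI constants:)

import math

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def extremal_charge(mass_kg):
    # extremal Reissner-Nordstrom bound: Q_max = M * sqrt(4*pi*eps0*G)
    return mass_kg * math.sqrt(4 * math.pi * EPS0 * G)

print(extremal_charge(1e12))   # ~86 coulombs for a billion-ton hole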

Once a black hole is charged, you can apply forces to it with electric fields. If the charged black hole happens to be rotating, you are in luck, for then it will also have a magnetic field and you can also use magnetic fields to apply forces and torques. The coupling of the electric charge to the black hole is very strong—the black hole will not let go. You can now use strong electric or magnetic fields to pull on the black hole and take it anywhere you want to go.
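(ed note: a sketch of what such handling buys you. In a uniform electric field E, the force on charge Q is F = QE and the acceleration is QE/M; the field strength below is an illustrative assumption.)

def acceleration(charge_c, field_v_per_m, mass_kg):
    # a = Q*E / M for a charged hole in a uniform field
    return charge_c * field_v_per_m / mass_kg

# 86 C (near-extremal for 1e12 kg) in a 1e6 V/m field:
print(acceleration(86.0, 1.0e6, 1.0e12))   # ~8.6e-5 m/s^2: gentle but relentless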

from INDISTINGUISHABLE FROM MAGIC by Robert L. Forward (1995)
PERRY RHODAN SCHWARZSCHILD REACTORS

(ed note: for you Ugly Americans who have never heard of Perry Rhodan, this is a science fictional device)

Schwarzschild reactors have a power output ten thousand times higher than a fusion reactor's.

The reactor creates an artificial pulsating micro black hole one hundred nanometers in size. It shifts between being a black hole with an event horizon and a space-time warp with no event horizon.

The black hole is fed with a particle beam of ultra-catalyzed deuterium. Approximately 50% of the deuterium is transformed into gamma rays; the rays are collected by "super solar cells" and transformed into usable energy with an efficiency of 80%.

The other 50% of the deuterium is transformed into antimatter and swallowed by the black hole (in space-time-warp mode), where it vanishes into the depths of hyperspace.
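(ed note: taking the figures at face value, 0.5 × 0.8 = 40% of the fuel's rest-mass energy is recovered. A Python sketch of the implied usable specific energy:)

C = 2.998e8   # speed of light, m/s

def usable_energy_per_kg(gamma_fraction=0.5, conversion=0.8):
    # usable fraction of rest-mass energy: E = f * eta * c^2 per kilogram
    return gamma_fraction * conversion * C**2

print(usable_energy_per_kg())   # ~3.6e16 joules per kilogram of deuterium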

Michel Van (2015)
PERRY RHODAN NUGAS-BALL STORAGE TANK

(ed note: for you Ugly Americans who have never heard of Perry Rhodan, this is a science fictional device)

Humans found the Schwarzschild reactor's performance disappointing: transforming only 50% of the deuterium into gamma rays left room for improvement. Human scientists developed the NUGAS-Schwarzschild reactors.

The principle remains almost the same.

However, instead of the antimatter being discharged into hyperspace, it is directed into the path of a particle beam for mutual annihilation. Thus 100% of the deuterium is converted into gamma rays.

Due to the higher pulse rate and the antimatter annihilation, ultra-catalyzed deuterium was unsuitable as fuel. Instead, ionized hydrogen nucleons (protons) were substituted. They are compressed to a density of 3.5×10⁷ kilograms per cubic meter to form the Nucleon Gas (NUGAS) fuel ball. The NUGAS fuel ball has a mass of 200,000 metric tons. It is surrounded by containment generators forming a reactor with a diameter of 12 meters.

NUGAS is also used as fuel for starship Puls proton-beam engines, the successors to the older Impuls engines.

Of course NUGAS is dangerous, but it gave the 1970s Perry Rhodan authors interesting plot complications (such as a NUG-ball in danger of losing its containment field). The technological levels in the Perry Rhodan universe eventually became too unbelievable, so in 2003 the authors "reset" it to tone everything down. Now NUGAS is only compressed to a density of 8.75×10⁶ kg/m³, and the fuel ball has a mass of only 50,000 metric tons.
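(ed note: the quoted figures are self-consistent. A uniform ball of mass M and density ρ has volume M/ρ and radius (3V/4π)^(1/3); both the original and the post-reset numbers give the same ball about 2.2 meters across, fitting comfortably inside the 12-meter reactor. A Python sketch:)

import math

def ball_radius(mass_kg, density_kg_m3):
    # radius of a uniform sphere: (3*V / (4*pi))**(1/3), with V = M/rho
    volume = mass_kg / density_kg_m3
    return (3 * volume / (4 * math.pi)) ** (1.0 / 3.0)

print(ball_radius(2.0e8, 3.5e7))    # original figures: ~1.1 m radius
print(ball_radius(5.0e7, 8.75e6))   # post-2003 reset: also ~1.1 m radius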

The idea of the Schwarzschild reactor and the NUG version came from German science fiction author Kurt Mahr. He was a real-life physicist who worked for Pratt & Whitney, Martin Marietta, and Harris Electronics. He wrote for Perry Rhodan from 1962 to 1969 and again from 1972 to 1993.

Michel Van (2015)
