Typically, power systems account for about 28% of spacecraft dry mass on NASA vessels.
Spacecraft power systems have three subsystems:
- Power Generation/Conversion: generating power
- Energy Storage: storing power for future use
- Power Management and Distribution (PMAD): routing the power to equipment that needs it
There are a couple of parameters used to rate power plant performance:
- Alpha : (kg/kW) power plant mass in kilograms divided by kilowatts of power. So if a solar power array had an alpha of 90, and you needed 150 kilowatts of output, the array would mass 90 * 150 = 13,500 kg or 13.5 metric tons
- Specific Power : (W/kg) watts of power divided by power plant mass in kilograms (i.e., (1 / alpha) * 1000)
- Specific Energy : (Wh/kg) watt-hours of energy divided by power plant mass in kilograms
- Energy Density : (Wh/m3) watt-hours of energy divided by power plant volume in cubic meters
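These figures of merit are simple unit conversions, so they are easy to sketch in code. A minimal Python sketch (the function names are mine, purely illustrative):

```python
def specific_power(alpha_kg_per_kw):
    """Specific power (W/kg) from alpha (kg/kW): (1 / alpha) * 1000."""
    return (1.0 / alpha_kg_per_kw) * 1000.0

def power_plant_mass(alpha_kg_per_kw, output_kw):
    """Power plant mass (kg) for a required output, given its alpha."""
    return alpha_kg_per_kw * output_kw

# The solar-array example above: alpha 90 kg/kW, 150 kW needed.
print(power_plant_mass(90, 150))   # 13500 kg, i.e. 13.5 metric tons
```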
NASA has a rather comprehensive report on various spacecraft power systems here. The executive summary states that currently available spacecraft power systems are "heavy, bulky, not efficient enough, and cannot function properly in some extreme environments."
Energy Harvesting or energy scavenging is a pathetic "waste-not-want-not" strategy when you are desperate to squeeze every milliwatt of power out of your system. This includes waste engine heat (gradients), warm liquids, kinetic motion, vibration, and ambient radiation. This is generally used for such things as enabling power for remote sensors in places where no electricity is readily available.
The general term is "chemical power generation", which means power generated by chemical reactions. This is most commonly seen in the form of fuel cells, though occasionally there are applications like the hydrazine-fired gas turbines that the Space Shuttle uses to hydraulically actuate thrust vector vanes.
Fuel cells basically consume hydrogen and oxygen to produce low-voltage electricity and water. They are quite popular in NASA manned spacecraft designs. Each PC17C fuel-cell stack in the Shuttle Orbiter has an alpha of about 10 kg/kW, a specific power of 98 W/kg, a total mass of 122 kg, and an output of 12 kW; it produces about 2.7 kilowatt-hours per kilogram of hydrogen+oxygen consumed (about 70% efficient), and has a service life of under 5,000 hours. The water output can be used in the life support system.
Different applications will require fuel cells with different optimizations. Some will need high specific power (200 to 400 W/kg), some will need long service life (greater than 10,000 hours), and others will require high efficiency (greater than 80% efficient).
Back in the 1950's, in artist conceptions of space stations and spacecraft, one would sometimes see what looked like mirrored troughs. These were "mercury boilers", a crude method of harnessing solar energy in the days before photovoltaics. The troughs had a parabolic cross section and focused the sunlight on tubes that heated streams of mercury. The hot mercury was then used in turbines to generate electricity.
These gradually vanished from artist conceptions and were replaced by nuclear reactors, generally in the form of a long framework boom sticking out of the hub, with a radiation shadow shield big enough to shadow the wheel.
The technical name is "solar dynamic power", where mirrors concentrate sunlight on a boiler. "Solar static power" is Photovoltaic solar cells.
Such systems are generally useful for power needs between 20 kW and 100 kW. Below 20 kW a solar cell panel is better. Above 100 kW a nuclear fission reactor is better.
They typically have an alpha of 170 to 250 kg/kW, a collector output of 130 to 150 watts per square meter at Terra orbit (i.e., about 11% efficient), and a radiator capacity of 140 to 200 watts per square meter.
At Terra's distance to the sun, solar energy is about 1366 watts per square meter. This energy can be converted into electricity by photovoltaics. Of course the power density goes down the farther from the Sun the power array is located.
The technical name is "solar static power", where photovoltaic solar cells convert sunlight into electricity. "Solar dynamic power" is where mirrors concentrate sunlight on a boiler.
Solar power arrays have an alpha ranging from 100 down to 1.4 kg/kW. Body-mounted rigid panels have an alpha of 16 kg/kW, while flexible deployable arrays have an alpha of 10 kg/kW. Most NASA ships use multi-junction solar cells, which have an efficiency of 29%, but a few use silicon cells with an efficiency of 15%. Most NASA arrays output from 0.5 to 30 kW.
Some researchers (Dhere, Ghongadi, Pandit, Jahagirdar, Scheiman) have claimed to have achieved 1.4 kg/kW in the lab by using CuIn1-xGaxS2 thin films on titanium foil. Rob Davidoff is of the opinion that a practical design with rigging and everything will be closer to 4 kg/kW, but that is still almost three times better than conventional solar arrays.
In 2015 researchers at Georgia Institute of Technology demonstrated a photovoltaic cell using an optical rectenna. They estimate that such rectennas could have a power conversion efficiency of up to 40% and a lower cost than silicon cells. No word on the alpha, though.
The International Space Station uses 14.5% efficient large-area silicon cells. Each of the Solar Array Wings are 34 m (112 ft) long by 12 m (39 ft) wide, and are capable of generating nearly 32.8 kW of DC power. 19% efficiency is available with gallium arsenide (GaAs) cells, and efficiencies as high as 30% have been demonstrated in the laboratory.
Powering an ion drive or other electric propulsion system with solar cells is going to require an array capable of high voltage (300 to 1000 volts), high power (greater than 100 kW), and a low alpha (2 to 1 kg/kW).
Obviously the array works best when oriented face-on to the sun, and unshadowed. As the angle increases the available power decreases in proportion to the cosine of the angle (e.g., if the array was 75° away from face-on, its power output would be Cos(75°) = 0.2588 or 26% of maximum). Solar cells also gradually degrade due to radiation exposure (say, from 8% to 17% power loss over a five year period if the panel is inhabiting the deadly Van Allen radiation belt, much less if it is in free space).
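The cosine rule above is easy to check numerically. A small Python sketch (the helper function is hypothetical, just illustrating the formula):

```python
import math

def pointing_loss(max_output_w, off_angle_deg):
    """Array output falls with the cosine of the angle off face-on."""
    return max_output_w * math.cos(math.radians(off_angle_deg))

# The text's example: 75 degrees off face-on leaves about 26% of maximum.
print(round(pointing_loss(100.0, 75.0), 1))   # 25.9 watts from a 100 W array
```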
Typically solar power arrays are used to charge batteries (so you have power when in the shadow of a planet). You should have an array output of 20% higher voltage than the battery voltage or the batteries will not reliably charge up. Sometimes the array is used instead to run a regenerative fuel cell.
Like all non-coherent light, solar energy is subject to the inverse square law. If you double the distance to the light source, the intensity drops by 1/4.
Translation: if you travel farther from the sun than Terra orbit, the solar array will produce less electricity. Contrawise if you travel closer to the sun the array will produce more electricity. This is why some science fiction novels have huge solar energy farms on Mercury; to produce commercial quantities of antimatter, beamed power propulsion networks, and other power-hungry operations.
As a general rule:
Es = 1366 * (1 / Ds^2)
- Es = available solar energy (watts per square meter)
- Ds = distance from the Sun (astronomical units)
- 1366 = Solar Constant (watts per square meter)
Remember that you divide distance in meters by 1.496e11 to obtain astronomical units, or divide distance in kilometers by 1.496e8.
This means that the available solar energy around Saturn is a pitiful 15 W/m2. That's available energy: if you tried harvesting it with 29% efficient multi-junction cells you would be lucky to get 4.4 W/m2. Which is why the Cassini probe used RTGs.
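The Saturn figure follows directly from the formula above; a quick Python check (Saturn's mean distance taken as roughly 9.5 AU):

```python
SOLAR_CONSTANT = 1366.0  # watts per square meter at 1 AU

def solar_flux(distance_au):
    """Available solar energy (W/m^2): 1366 / Ds^2."""
    return SOLAR_CONSTANT / distance_au ** 2

def harvested_flux(distance_au, cell_efficiency):
    """Electrical watts per square meter of array."""
    return solar_flux(distance_au) * cell_efficiency

print(round(solar_flux(9.5), 1))            # ~15.1 W/m^2 available at Saturn
print(round(harvested_flux(9.5, 0.29), 1))  # ~4.4 W/m^2 with 29% cells
```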
Special high-efficiency cells are needed in order to harvest worthwhile amounts of solar energy in low-intensity/low-temperature (LILT) conditions, defined as the solar array being located 3 AU from Sol or farther (i.e., about 150 watts per square meter or less, one-ninth the energy available at Terra's orbit).
A more exotic variant on solar cells is the beamed power concept. This is where the spacecraft has a solar cell array, but back at home in orbit around Terra (or Mercury) is a huge power plant and a huge laser. The laser is fired at the solar cell array, thus energizing it. It is essentially an astronomically long electrical extension cord constructed of laser light. It shares the low mass advantage of a solar power array, and it has the advantage over solar power that the energy per square meter of array can be much larger.
It has the disadvantage that the spacecraft is utterly at the mercy of whoever is currently running the laser battery. It has the further disadvantage of being frowned upon by the military, since they take a dim view of weapons-grade lasers in civilian hands. Unless the military owned the power lasers in the first place.
Radioisotope thermoelectric generators (RTGs) are slugs of radioisotope (usually plutonium-238 in the form of plutonium oxide) that heat up due to nuclear decay, surrounded by thermocouples that turn the heat gradient into electricity (an RTG does NOT turn the heat itself into electricity; that's why it has heat radiator fins on it).
There are engineering reasons that currently make it impractical to design an individual RTG that produces more than one kilowatt. However nothing is stopping you from using several RTGs in your power room. Engineers are trying to figure out how to construct a ten kilowatt RTG.
Current NASA RTGs have a useful lifespan of over 30 years.
Currently RTGs have an alpha of about 200 kg/kW (though there is a design on the drawing board that should get about 100 kg/kW). Efficiency is about 6%. The near term goal is to develop an RTG with an alpha of 100 to 60 kg/kW and an efficiency of 15 to 20%.
An RTG based on a Stirling cycle instead of thermocouples might be able to reach an efficiency of 35%. Since it would need less Pu-238 for the same electrical output, a Stirling RTG would have only 0.66 times the mass of an equivalent thermocouple RTG. However, NASA is skittish about Stirling RTGs since, unlike conventional ones, Stirlings have moving parts, which are yet another possible point of failure on prolonged space missions.
Nuclear weapons-grade plutonium-239 cannot be used in RTGs. Non-fissile plutonium-238 has a half-life of 85 years, i.e., the power output will drop to one-half after 85 years. To calculate power decay:
P1 = P0 * 0.9919^Y
- P1 = current power output (watts)
- P0 = power output when RTG was constructed (watts)
- Y = years since RTG was constructed.
Wolfgang Weisselberg points out that this equation just measures the drop in the power output of the slug of plutonium. In the real world, the thermocouples will deteriorate under the constant radioactive bombardment, which will reduce the actual electrical power output even further. Looking at the RTGs on NASA's Voyager space probe, it appears that the thermocouples deteriorate at roughly the same rate as the plutonium.
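The decay rule is a one-liner; a Python sketch (the 0.9919 factor encodes the 85-year half-life quoted above, and ignores thermocouple degradation):

```python
def rtg_output(p0_watts, years):
    """Power output of the Pu-238 slug after a given number of years."""
    return p0_watts * 0.9919 ** years

# One half-life: after 85 years a 100-watt slug is down to about 50 watts.
print(round(rtg_output(100.0, 85)))   # 50
```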
Plutonium-238 has a specific power of 0.56 watts/gram, or 560 watts per kilogram, so to generate, say, 470 watts you would in theory need only 470 / 560 = 0.84 kilograms. Alas, the thermoelectric generator which converts the thermal energy to electric energy has an efficiency of only 6%. At 6% thermoelectric efficiency, the plutonium RTG has an effective specific power of 560 × 0.06 ≈ 30 watts per kilogram of 238Pu (0.033 kilograms of 238Pu per watt, or 33 kg/kW). This means you will need a full 15.5 kilos of plutonium to produce 470 watts.
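That sizing arithmetic, sketched in Python (exact math gives about 14 kg; the slightly larger 15.5 kg figure above comes from rounding the effective specific power down to 30 W/kg first):

```python
PU238_SPECIFIC_POWER = 560.0  # thermal watts per kilogram of Pu-238

def pu238_mass_for(electric_watts, conversion_eff):
    """Kilograms of Pu-238 needed for a given electrical output."""
    return electric_watts / (PU238_SPECIFIC_POWER * conversion_eff)

# The 470-watt example at 6% thermoelectric efficiency:
print(round(pu238_mass_for(470.0, 0.06), 1))   # ~14.0 kg
```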
This is why a Stirling-based RTG with an efficiency of 35% is so attractive.
Many RTG fuels would require less than 25 mm of lead shielding to control unwanted radiation. Americium-241 would need about 18 mm of lead, while plutonium-238 needs less than 2.5 mm; in many cases no shielding is needed at all, as the casing itself is adequate. Plutonium is the radioisotope of choice, but it is hard to come by (due to nuclear proliferation fears). Americium is more readily available, but lower performance.
At the time of this writing (2014) NASA has a severe Pu-238 problem. NASA only has about 16 kilograms left, you need about 4 kg per RTG, and nobody is making any more. They were purchasing it from the Russian Mayak nuclear industrial complex for $45,000 per ounce, but in 2009 the Russians refused to sell any more.
NASA is "rattled" because they need the Pu-238 for many upcoming missions, they do not have enough on hand, and Congressional funding for restarting Pu-238 manufacturing has been predictably sporadic and unreliable.
The European Space Agency (ESA) has no access to Pu-238 or RTGs at all. This is why their Philae space probe failed when it could not get solar power. The ESA is accepting the lesser of two evils and is investing in the design and construction of Americium-241 RTGs. Am-241 is expensive, but at least it is available.
For a great in-depth analysis of nuclear power for space applications, I refer you to Andrew Presby's engineer degree thesis: Thermophotovoltaic Energy Conversion in Space Nuclear Reactor Power Systems. There is a much older document with some interesting designs here.
As far as the nuclear fuel required, the amount is incredibly tiny: burning a microscopic 0.01 grams of nuclear fuel per second produces a whopping 1000 megawatts! That's the theoretical maximum of course; you can find more details here.
Nuclear fission reactors have an alpha of about 18 kg/kW. However, Los Alamos labs had an amazing one-megawatt Heat Pipe reactor that massed only 493 kg (alpha of 0.493 kg/kW):

Fuel region            157 kg
Heat pipes             117 kg
Reactor control         33 kg
Other support           32 kg
Total reactor mass     493 kg
Fission reactors are attractive since they have an incredibly high fuel density, they don't care how far you are from the Sun nor if it is obscured, and they have power output that makes an RTG look like a stale flashlight battery. They are not commonly used by NASA due to the hysterical reaction of US citizens when they hear the "N" word. Off the top of my head the only nuclear powered NASA probe currently in operation is the Curiosity Mars Rover; and that is an RTG, not an actual nuclear reactor.
For a space probe a reactor in the 0.5 to 5 kW power range would be a useful size, 10 to 100 kW is good for surface and robotic missions, and megawatt size is needed for nuclear electric propulsion.
Here is a commentary on figuring the mass of the reactor of a nuclear thermal rocket by somebody who goes by the handle Tremolo:
New reactors that have never been activated are not particularly radioactive. Of course, once they are turned on, they are intensely radioactive while generating electricity. And after they are turned off, there is some residual radiation due to neutron activation of the reactor structure.
How much deadly radiation does an operating reactor spew out? That is complicated, but Anthony Jackson has a quick-and-dirty first order approximation:
r = (0.5 * kW) / d^2
- r = radiation dose (Sieverts per second)
- kW = power production of the reactor core, which will be greater than the power output of the reactor due to reactor inefficiency (kilowatts)
- d = distance from the reactor (meters)
This equation assumes that a 1 kW reactor puts out an additional 1.26 kW in penetrating radiation (mostly neutrons) with an average penetration (1/e) of 20 g/cm2.
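Jackson's approximation in Python form (a rough sketch, good only to first order and for an unshielded reactor):

```python
def dose_rate(core_kw, distance_m):
    """Radiation dose in Sieverts per second: (0.5 * kW) / d^2."""
    return (0.5 * core_kw) / distance_m ** 2

# A 1 MW (1000 kW) core at 100 meters gives 0.05 Sv/s: an acutely
# lethal ~5 Sv dose in well under two minutes of exposure.
print(dose_rate(1000, 100))   # 0.05
```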
As a side note, in 1950's era SF novels, nuclear fission reactors are commonly referred to as "atomic piles." This is because the very first reactor ever made was basically a precision assembled brick-by-brick pile of graphite blocks, uranium fuel elements, and cadmium control rods.
Nuclear Thermal Rockets are basically nuclear reactors with a thrust nozzle on the bottom. A concept called Bimodal NTR allows one to tap the reactor for power. This has other advantages. Since the reactor is running warm at a low level all the time (instead of just while thrusting) it doesn't have to be pre-heated if you have a burn coming up. This reduces thermal stress, and reduces the number of thermal cyclings the reactor will have to endure over the mission. It also allows for a quick engine start in case of emergency.
In the real world, during times of disaster, US Navy submarines have plugged their nuclear reactors into the local utility grid. This supplies emergency electricity when the municipal power plant is out. In the science fiction world, a grounded spacecraft with a bimodal NTR could provide the same service.
This is from A Half-Gigawatt Space Power System using Dusty Plasma Fission Fragment Reactor (2016)
Rodney Clarke and Robert Sheldon were working on a fission-fragment rocket engine when they noticed a useful side-benefit.
There is a remarkably efficient (84%) electrical power plant called a Magnetohydrodynamic Generator (MHD generator). They also have the virtue of being able to operate at high temperatures, and have no moving parts (which reduces the maintenance required and raises reliability). A conventional electrical power generator spins a conducting copper wire coil inside a magnetic field to create electricity. An MHD generator replaces the solid copper coil with a fast moving jet of conducting plasma.
Because many designs for fusion rocket engines and fusion power plants produce fast moving jets of plasma, MHD generators were the perfect match. Ground based power plants just sprayed the jet of fusion plasma into the MHD.
Fusion spacecraft could be bimodal. An MHD generator could be installed in the exhaust nozzle to constantly bleed off some of the thrust power in order to make electricity; this was popular with inertial confinement fusion, which needs to recharge huge capacitors before each fusion pulse. Alternatively the MHD generator could be installed at the opposite end of the fusion reaction chamber. The fusion plasma normally goes down out the exhaust nozzle for thrust, but it can be diverted upwards into an MHD generator for electrical power.
Finally getting to the point, Clarke and Sheldon realized that a fission-fragment rocket engine also produces a jet of plasma. Therefore, it too can be bimodal with the addition of an MHD generator.
Cutting to the chase, they would have a jaw-dropping specific power of 11 kWe/kg! The rough design they made had a power output of 448 megawatts and a total mass of 38,430 kg (38 metric tons).
Power Output               448 MW
Specific Power             11 kWe/kg
U235 Fuel                  4.27 kg
Am242m Fuel                1.25 kg
Moderator Heat Radiator    28,000 kg
Generator Heat Radiator    1,000 kg
This design combines open-cycle gas-core nuclear thermal rockets with the sophistication of a magnetohydrodynamic (MHD) generator. OCGC NTRs can put out much more thermal energy than a solid-core reactor, since the latter has to worry about melting. And MHD generators not only have great efficiency and no moving parts, their core element is a stream of hot gas: the hotter the better.
A fusion reactor would produce energy from thermonuclear fusion instead of nuclear fission. Unfortunately, scientists have yet to create a fusion reactor that can reach the "break-even" point (where it actually produces more energy than it consumes), so it is anybody's guess what the value for alpha will be.
The two main approaches are magnetic confinement and inertial confinement. The third method, gravitational confinement, is only found in the cores of stars and among civilizations that have mastered gravidic technology. The current wild card is the Polywell device which is a type of inertial electrostatic confinement fusion generator.
Fusion is even more efficient than fission. You need to burn 0.01 grams of fission fuel per second to generate 1000 megawatts. The most promising fusion fuels start at 0.01 grams per second for the same output, and can get as low as 0.001 grams per second. You can find more details here.
In science fiction, a fusion reactor is commonly called a "fusactor".
Lattice Confinement Fusion is a theoretical way of creating fusion inside a metal alloy doped with deuterium. No, it ain't cold fusion, not even close. And not just because the majority of scientists find the evidence for cold fusion to be about as convincing as data from the Flat Earth Society. Cold fusion features two electrodes in some heavy water, all quiet like. Lattice confinement fusion has an erbium-titanium alloy savagely bombarded with x-rays from an electron particle accelerator.
As a power source, it is probably more like a strong RTG than anything else.
This is where the spacecraft receives its power not from an on-board generator but instead from a laser or maser beam sent from a remote space station. This is a popular option for spacecraft using propulsion systems that require lots of electricity but have low thrusts.
For instance, an ion drive has great specific impulse and exhaust velocity, but very low thrust. If the spacecraft has to power the ion drive using a heavy nuclear reactor with lead radiation shielding, the mass of the spacecraft will increase to the point where its acceleration could be beaten by a drugged snail. But with beamed power the power generator adds zero mass to the spacecraft, since the heavy generator is on the remote station instead of onboard and laser photons weigh nothing.
The drawbacks include the decrease in power with distance due to diffraction, and the fact that the spacecraft is at the mercy of whoever is running the remote power station. Also, maneuvers must be carefully coordinated with the remote station, or it will have difficulty keeping the beam aimed at the ship.
The other drawback is the laser beam is also a strategic weapons-grade laser. The astromilitary (if any) take a very dim view of weapons-grade laser cannon in the hands of civilians. The beamed power equipment may be under the close (armed) supervision of the Laser Guard.
Any Star Trek fan knows that the Starship Enterprise runs on antimatter. The old term is "contra-terrene", "C-T", or "Seetee". At 100% of the matter-antimatter mass converted into energy, it would seem to be the ultimate power source. The operative word in this case is "seem".
What is not as well known is that unless the situation is non-standard, antimatter is not a fuel. It is an energy transport mechanism. Unless there exist "antimatter mines", antimatter is an energy transport mechanism, not a fuel. In Star Trek, I believe they found drifts of antimatter in deep space. An antimatter source was also featured in the Sten series. In real life, astronomers haven't seen many matter-antimatter explosions. Well, they've seen a few 511 keV gamma rays (the signature of electron-positron antimatter annihilation), but they've all been from thousands of light years away and most seem to be associated with large black holes. If they are antimatter mines, they are most inconveniently located. In Jack Williamson's novels Seetee Ship and Seetee Shock there exist commercially useful chunks of antimatter in the asteroid belt. However, if this was actually true, I think astronomers would have noticed all the antimatter explosions detonating in the belt by now.
And antimatter is a very inefficient energy transport mechanism. Current particle accelerators have an abysmal 0.000002% efficiency in converting electricity into antimatter (I don't care what you saw in the movie Angels and Demons). The late Dr. Robert Forward said this is because nuclear physicists are not engineers; an engineer might manage to increase the efficiency to something approaching 0.01% (one one-hundredth of one percent). Which is still pretty lousy: it means for every megawatt of electricity you pump into the antimatter-maker, you would only obtain enough antimatter to create a mere 100 pathetic watts. The theoretical maximum is 50%, due to the pesky Law of Baryon Number Conservation (which demands that when turning energy into matter, equal amounts of matter and antimatter must be created).
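The round-trip bookkeeping is trivial but sobering; a Python sketch of the figures above:

```python
def antimatter_return(input_watts, production_eff):
    """Watts of annihilation energy eventually recoverable per watt of
    electricity spent making antimatter, at the given efficiency."""
    return input_watts * production_eff

# Today's accelerators (0.000002% = 2e-8) versus Forward's hoped-for 0.01%:
print(antimatter_return(1e6, 2e-8))   # 0.02 W back per megawatt today
print(antimatter_return(1e6, 1e-4))   # 100 W back per megawatt, optimistically
```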
In Charles Pellegrino and George Zebrowski's novel The Killing Star, they deal with this by having the Earth government plate the entire equatorial surface of the planet Mercury with solar power arrays, generating enough energy to produce a few kilograms of antimatter a year. They do this with von Neumann machines, of course.
Of course the other major draw-back is the difficulty of carrying the blasted stuff. If it comes into contact with the matter walls of the fuel tank the resulting explosion will make a nuclear detonation seem like a wet fire-cracker. Researchers are still working on a practical method of containment. In Michael McCollum's novel Thunder Strike! antimatter is transported in torus-shaped magnetic traps, it is used to alter the orbits of asteroids ("torus" is a fancy word for "donut").
Converting the energy from antimatter annihilation into electricity is also not very easy.
The electrons and positrons mutually annihilate into gamma rays. However, since an electron has 1/1836 the mass of a proton, and since matter usually contains about 2.5 protons or other nucleons for each electron, the energy contribution from electron-positron annihilation is negligible.
Proton-antiproton annihilations produce pions: for every five pions, two are neutral and three are charged (that is, 40% neutral pions and 60% charged pions). The neutral pions almost immediately decay into gamma rays. The charged pions (moving at about 94% of the speed of light) travel about 21 meters before decaying into muons. The muons then travel an additional two kilometers before decaying into electrons and positrons.
This means your power converter needs a component that will transform gamma rays into electricity, and a second component that has to attempt to extract the kinetic energy out of the charged pions and convert that into electricity. The bottom line is that there is no way you are going to get 100% of the annihilation energy converted into electricity. Exactly what percentage is likely achievable is a question above my pay grade.
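The 21-meter and 2-kilometer figures follow from relativistic time dilation. A Python sketch, using the standard published cτ values for the charged pion (~7.8 m) and muon (~659 m), and assuming the muons keep roughly the pion's speed:

```python
import math

def decay_length_m(beta, c_tau_m):
    """Mean lab-frame distance before decay: gamma * beta * c*tau."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * beta * c_tau_m

PION_CTAU = 7.8    # meters (charged pion)
MUON_CTAU = 659.0  # meters

print(round(decay_length_m(0.94, PION_CTAU)))  # ~21 m for the charged pions
print(round(decay_length_m(0.94, MUON_CTAU)))  # ~1800 m for the muons
```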
The main virtue of antimatter power is that it is incredibly concentrated, which drastically reduces the mass of antimatter fuel required for a given application. And mass is always a problem in spacecraft design, so any way of reducing it is welcome.
The man known as magic9mushroom drew my attention to the fact that Dr. James Bickford has identified a sort of antimatter mine where antimatter can be collected by magnetic scoops (be sure to read the comment section), but the amounts are exceedingly small. He foresees using tiny amounts of antimatter for applications such as catalyzing sub-critical nuclear reactions, instead of just using raw antimatter for fuel. His report is here.
Dr. Bickford noted that high-energy galactic cosmic rays (GCR) create antimatter via "pair production" when they impact the upper atmospheres of planets or the interstellar medium. Planets with strong magnetic fields enhance antimatter production. One would think that Jupiter would be the best at producing antimatter, but alas its field is so strong that it prevents GCR from impacting the Jovian atmosphere at all. As it turns out, the planet with the most intense antimatter belt is Earth, while the planet with the most total antimatter in its belt is Saturn (mostly due to the rings). Saturn receives almost 250 micrograms of antimatter a year from the ring system. Please note that this is a renewable resource.
Dr. Bickford calculates that the plasma magnet scoop can collect antimatter about five orders of magnitude more cost effective than generating the stuff with particle accelerators.
Keep in mind that the quantities are very small. Around Earth the described system will collect about 25 nanograms per day, and can store up to 110 nanograms. That has about the same energy content as half a fluid ounce of gasoline, which ain't much. However, such tiny amounts of antimatter can catalyze tremendous amounts of energy from sub-critical fissionable fuel, which would give you the power of nuclear fission without requiring an entire wastefully massive nuclear reactor. Alternatively, one can harness the power of nuclear fusion with Antimatter-Catalyzed Micro-Fission/Fusion or Antimatter-Initiated Microfusion. Dr. Bickford describes a mission where an unmanned probe orbits Earth long enough to gather enough antimatter to travel to Saturn. There it can gather a larger amount of antimatter, and embark on a probe mission to the outer planets.
Vacuum energy or zero-point energy is one of those pie-in-the-sky concepts that sounds too good to be true, and is based on the weirdness of quantum mechanics. The zero-point energy is the lowest energy state of any quantum mechanical system, but because quantum systems are fond of being deliberately annoying their actual energy level fluctuates above the zero-point. Vacuum energy is the zero-point energy of all the fields of space.
Naturally quite a few people wondered if there was a way to harvest all this free energy.
Currently the only suggested method was proposed by the late Dr. Robert Forward, the science fiction writer's friend (hard-SF writers would do well to pick up a copy of Forward's Indistinguishable From Magic). His paper is Extracting Electrical Energy From the Vacuum by Cohesion of Charged Foliated Conductors, and can be read here.
How much energy are we talking about? Nobody knows. Estimates based on the upper limit of the cosmological constant put it at a pathetic 10^-9 joules per cubic meter (about 1/10th the energy of a single cosmic-ray photon). On the other tentacle, estimates based on Lorentz covariance and the magnitude of the Planck constant put it at a jaw-dropping 10^113 joules per cubic meter (about 3 quintillion-septillion times more energy than the Big Bang). A range between 10^-9 and 10^113 is another way of saying "nobody knows, especially if they tell you they know".
Vacuum energy was used in All the Colors of the Vacuum by Charles Sheffield, Encounter with Tiber by Buzz Aldrin and John Barnes, and The Songs of Distant Earth by Sir Arthur C. Clarke.
Arguably the Grand Unified Theory (GUT) drives and GUTships in Stephen Baxter's Xeelee novels are also a species of vacuum energy power sources.
Primordial black holes (R = radius in attometers, M = mass in megatons, kT = Hawking temperature, P = radiated power, P/c^2 = mass evaporation rate, L = lifetime):

R (am)   M (Mt)   kT (GeV)   P (PW)   P/c^2 (g/sec)   L (yrs)
0.16     0.108    98.1       5519     61400           ≲0.04
0.3      0.202    52.3       1527     17000           ≲0.12
0.6      0.404    26.2       367      4090            1
0.9      0.606    17.4       160      1780            3.5
1.0      0.673    15.7       129      1430            5
1.5      1.01     10.5       56.2     626             16—17
2.0      1.35     7.85       31.3     348             39—41
2.5      1.68     6.28       19.8     221             75—80
2.6      1.75     6.04       18.3     204             85—91
2.7      1.82     5.82       16.9     189             95—102
2.8      1.89     5.61       15.7     175             106—114
2.9      1.95     5.41       14.6     163             118—127
3.0      2.02     5.23       13.7     152             130—140
5.8      3.91     2.71       3.50     38.9            941—1060
5.9      3.97     2.66       3.37     37.5            991—1117
6.0      4.04     2.62       3.26     36.2            1042—1177
6.9      4.65     2.28       2.43     27.1            1585—1814
7.0      4.71     2.24       2.36     26.2            1655—1897
10.0     6.73     1.57       1.11     12.3            4824—5763
Artificial Singularity Power (ASP) engines generate energy through the evaporation of modest-sized (10^8 to 10^11 kg) black holes created through artificial means. This paper discusses the design and potential advantages of such systems for powering large space colonies, terraforming planets, and propelling starships. The possibility of detecting advanced extraterrestrial civilizations via the optical signature of ASP systems is examined. Speculation as to possible cosmological consequences of widespread employment of ASP engines is considered.
According to a theory advanced by Stephen Hawking  in 1974, black holes evaporate at a rate given by:
tev = 5120π * tP * (m/mP)^3     (1)
where tev is the time it takes for the black hole to evaporate, tP is the Planck time (5.39e-44 s), m is the mass of the black hole in kilograms, and mP is the Planck mass (2.18e-8 kg) 
Hawking considered the case of black holes formed by the collapse of stars, which need to be at least ~3 solar masses to occur naturally. For such a black hole, equation 1 yields an evaporation time of 5e68 years, far longer than the expected life of the universe. In fact, evaporation would never happen, because the black hole would gain energy, and thus mass, by drawing in cosmic background radiation at a rate faster than its own insignificant rate of radiated power.
However, it can be seen from examining equation (1) that the evaporation time goes with the cube of the singularity's mass, which means that the emitted power (= mc^2/tev) goes inversely with the square of the mass. Thus if the singularity could be made small enough, very large amounts of power could theoretically be produced.
This possibility was quickly grasped by science fiction writers, and such propulsion systems were included by Arthur C. Clarke in his 1976 novel Imperial Earth  and Charles Sheffield in his 1978 short story “Killing Vector.” 
Such systems did not receive serious technical analysis, however, until 2009, when they were examined by Louis Crane and Shawn Westmoreland, both then of Kansas State University, in their seminal paper "Are Black Hole Starships Possible?"
In their paper, Crane and Westmoreland focused on the idea of using small artificial black holes powerful enough to drive a starship to interstellar-class velocities yet long-lived enough to last the voyage. They identified a "sweet spot" for such "Black Hole Starships" (BHS) with masses on the order of 2e9 kg, which they said would have lifetimes on the order of 130 years, yet yield power of about 13,700 TW. They proposed to use some kind of parabolic reflector to reflect this radiation, resulting in a photon rocket. The ideal thrust T of a rocket with jet power P and exhaust velocity v is given by:
T = 2P/v (2)
So with P = 13,700 TW and v = c = 3e8 m/s, the thrust would be 8.6e7 N. Assuming that the payload spacecraft had a mass of 1e9 kg, this would accelerate the ship at a rate of a = 8.6e7/3e9 = 2.8e-2 m/s^2. Accelerating at this rate, such a ship would reach about 30% of the speed of light in 100 years.
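These figures can be checked against equation (2); a sketch (the ideal formula gives about 9.1e7 N, slightly above the 8.6e7 N quoted above):

```python
c = 3e8                  # speed of light, m/s
P = 13_700e12            # Crane & Westmoreland's claimed jet power, W
m_ship = 1e9 + 2e9       # payload plus singularity, kg

thrust = 2 * P / c       # equation (2) with v = c
accel = thrust / m_ship  # about 3e-2 m/s^2
# A century (100 * 3.156e7 s) at this acceleration approaches 0.3c:
v_frac = accel * 100 * 3.156e7 / c
```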
There are a number of problems with this scheme. In the first place, the claimed acceleration is on the low side. Furthermore, their math appears to be incorrect. A 2e9 kg singularity would only generate about 270 TW, or 1/50th as much as their estimate, reducing thrust by a factor of 50 (although it would last about 20,000 years). These problems could be readily remedied, however, by using a smaller singularity and a smaller ship. For example, a singularity with a mass of 2e8 kg would produce a power of 26,900 TW. Assuming a ship with a mass of 1e8 kg, an acceleration of 0.6 m/s^2 could be achieved, allowing 60% of the speed of light to be reached in 10 years. The singularity would only have a lifetime of 21 years; however, it could be maintained by being constantly fed mass at a rate of about 0.33 kg/s.
A bigger problem is that a 1e9 kg singularity would produce radiation with a characteristic temperature of 9 GeV, increasing in inverse proportion to the singularity mass. So, for example, a 1e8 kg singularity would produce gamma rays with energies of 90 GeV (i.e., for temperature T in electron volts, T = 9e18/m). There is no known way to reflect such high-energy photons. So at this point the parabolic reflector required for the black hole starship photon engine is science fiction.
Yet another problem is the manufacture of the black hole. Crane and Westmoreland suggest that it could be done using converging gamma ray lasers. To make a 1e9 kg unit, they suggested a "high-efficiency square solar panel a few hundred km on each side, in a circular orbit about the sun at a distance of 1,000,000 km" to provide the necessary energy. A rough calculation indicates the implied power of this system from this specification is on the order of 1e6 TW, or about 100,000 times the current rate used by human civilization. As an alternative construction technique, they also suggest accelerating large masses to relativistic velocities and then colliding them. The density of these masses would be multiplied both by relativistic mass increase and length contraction. However the energy required to do this would still equal the combined masses times the speed of light squared. While this technique would eliminate the need for giant gamma ray lasers, the same huge power requirement would still present itself.
In what follows, we will examine possible solutions for the above identified problems.
Advanced Singularity Engines
In MKS units, equation (1) can be rewritten as:
tev = (8.37e-17)m^3 (3)
This implies that the power, P, in Watts, emitted by the singularity is given by:
P = 1.08e33/m^2 (4)
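Equations (3) and (4) are easy to put into code. Note that evaluating 5120π·tP/mP^3 in MKS units gives a coefficient of about 8.37e-17 s/kg^3, which reproduces the ~270 TW and ~20,000 year figures cited above for a 2e9 kg singularity:

```python
def t_ev(m: float) -> float:
    """Equation (3): evaporation time in seconds for singularity mass m (kg)."""
    return 8.37e-17 * m**3

def power(m: float) -> float:
    """Equation (4): emitted power in watts, P = m*c^2 / t_ev."""
    return 1.08e33 / m**2

# Crane and Westmoreland's 2e9 kg singularity:
P_tw = power(2e9) / 1e12        # terawatts
life_yr = t_ev(2e9) / 3.156e7   # years
```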
The results of these two equations are shown in Fig. 1.
No credible concept is available to enable a lightweight parabolic reflector of the sort needed to enable the Black Hole Starship. But we can propose a powerful and potentially very useful system by dropping the requirement for starship-relevant thrust to weight ratios. Instead let us consider the use of ASP engines to create an artificial sun.
Consider a 1e8 kg ASP engine. As shown in Fig. 1, it would produce a power of 1.08e8 gigawatts. Such an engine, if left alone, would only have a lifetime of 2.65 years, but it could be maintained by a constant feed of about 3 kg/s of mass. We can't reflect its radiation, but we can absorb it with a sufficiently thick material screen. So let's surround it with a spherical shell of graphite with a radius of 40 km and a thickness of 1.5 m. At a distance of 40 km, the intensity of the radiation will be about 5 MW/m^2, which the graphite sphere can radiate into space with a black-body temperature of 3000 K. This is about the same temperature as the surface of a type M red dwarf star. We estimate that graphite has an attenuation length for high-energy gamma rays of about 15 cm, so that 1.5 m of graphite (equivalent shielding to 5 m of water, or half the Earth's atmosphere) will attenuate the gamma radiation by ten factors of e, or about 22,000. The light will then radiate outward, dropping in intensity with the square of the distance, reaching typical Earth sunlight intensity of 1 kW/m^2 at a distance of about 3000 km from the center.
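The shell sizing above can be verified with the inverse-square law and the Stefan-Boltzmann law. A sketch, assuming the graphite radiates as a black body (emissivity ~1) from its outer surface:

```python
import math

P = 1.08e17        # W, output of a 1e8 kg singularity per equation (4)
sigma = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)

r_shell = 40e3                            # shell radius, m
flux = P / (4 * math.pi * r_shell**2)     # ~5 MW/m^2 at the shell
T = (flux / sigma) ** 0.25                # ~3100 K black-body temperature

# Radius at which intensity falls to Earth-sunlight levels (~1 kW/m^2):
r_earthlike = math.sqrt(P / (4 * math.pi * 1000))   # ~3e6 m, i.e. ~3000 km
```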
The mass of the artificial star will be about 1e14 kg (that's the mass of the graphite shell, compared to which the singularity is insignificant). As large as this is, however, it is still tiny compared to that of a planet, or even the Earth's Moon (which is 7.35e22 kg). So no planet would orbit such a little star. Instead, if we wanted to terraform a cold world, we would put the mini-star in orbit around it.
The preferred 3000 km orbital altitude of the ASP mini-star in the above-cited example was dictated by the power level of the singularity. Such a unit would be sufficient to provide all the light and heat necessary to terraform an otherwise sunless planet the size of Mars. Lower-power units incorporating larger singularities but much smaller graphite shells are also feasible. (Shell mass is proportional to system power.) These are illustrated in Table 1.
The high-powered units listed in Table 1 with singularity masses in the 1e8 to 1e9 kg range are suitable to serve as mini-suns orbiting planets, moons or asteroids, with the characteristic radius of such terraforming candidates being about the same as the indicated orbital altitude. The larger units, with lower power and singularity masses above 1e10 kg are more appropriate for space colonies.
Consider an ASP mini-sun with a singularity mass of 3.16e10 kg positioned in the center of a cylinder with a radius of 10 km and a length of 20 km. The cylinder is rotating at a rate of 0.0316 radians per second, which provides it with 1 g of artificial gravity. Let's say the cylinder is made of material with an areal density of 1000 kg per square meter. In this case it will experience an outward pressure of 1e4 pascals, or about 1.47 psi, due to outward acceleration. If the cylinder were made of solid Kevlar (density = 1000 kg/m^3) it would be about 1 m thick, so the hoop stress on it would be 1.47*(10,000)/1 = 14,700 psi, which is less than a tenth the yield stress of Kevlar. Or, put another way, 10 cm of Kevlar would do the job of carrying the hoop stress, and the rest of the mass load could be anything, including habitations. If the whole interior of the cylinder were covered with photovoltaic panels with an efficiency of 10 percent, 100 GWe of power would be available for the use of the inhabitants of the space colony, which would have an area of 1,256 square kilometers. The mini-sun powering it would have a lifetime of 84 million years, without refueling. Much larger space colonies (i.e., with radii over ~100 km) would not be possible, however, unless stronger materials become available, as the hoop stress would become too great.
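The spin-gravity and hoop-stress arithmetic above can be sketched as follows (the Kevlar strength figure is the article's, not checked here):

```python
import math

r = 10_000.0                    # cylinder radius, m
g = 9.81                        # target artificial gravity, m/s^2
omega = math.sqrt(g / r)        # ~0.031 rad/s spin rate

areal_density = 1000.0          # kg/m^2 of hull plus habitat
p = areal_density * g           # outward loading, ~1e4 Pa (~1.45 psi)

t = 0.1                         # thickness of Kevlar carrying the hoop load, m
hoop_psi = (p * r / t) / 6895   # thin-shell hoop stress, ~1.4e5 psi

length = 20_000.0               # cylinder length, m
area_km2 = 2 * math.pi * r * length / 1e6   # interior area, ~1257 km^2
```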
Both of these approaches seem potentially viable in principle. However, we note that the space colony approach cited requires a singularity some 300 times more massive than the approach of putting a 1e8 kg mini-sun in orbit around a planet, which yields 4π(3000)^2 ≈ 100 million square kilometers of habitable area, or about 80,000 times as much land. Furthermore, the planet comes with vast supplies of matter of every type, whereas the space colony needs to import everything.
Reducing the size of the required singularity by a factor of 10 from 1e9 to 1e8 kg improves feasibility of the ASP concept somewhat, but we need to do much better. Fortunately there is a way to do so.
If we examine equation (3), we can see that the expected lifetime of a 1000 kg singularity would be about 8.37e-8 s. In this amount of time, light can travel about 25 m, and an object traveling at half the speed of light about 12.5 m. If a sphere with a radius of 12.5 m were filled with steel, it would contain about 6e7 kg, comparable to the 1e8 kg we need for our ASP singularity. In fact, it turns out that if the initial singularity is as small as about 200 kg and fired into a mass of steel, it will gain mass much faster than it loses it, eventually growing into a singularity as massive as the steel provided.
By using this technique we can reduce the amount of energy required to form the required singularity by about 7 orders of magnitude compared to Crane and Westmoreland's estimate. So instead of needing a 1e6 TW system, a 100 GW gamma ray laser array might do the trick. Alternatively, accelerating two 200 kg masses to near light speed would require 3.6e7 TJ, or 10,000 TW-hours of energy. This is about the energy humanity currently uses in 20 days. We still don't know how to do it, but reducing the scale of the required operation by a factor of 10 million certainly helps.
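The energy bookkeeping for the collision approach is simple rest-mass arithmetic:

```python
c = 3e8                  # speed of light, m/s
m = 2 * 200.0            # two 200 kg projectiles, kg
E = m * c**2             # J, energy scale to bring them to near light speed
E_tj = E / 1e12          # terajoules
E_twh = E / 3.6e15       # TW-hours (1 TWh = 3.6e15 J)
```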
We now return to the subject of ASP starships. In the absence of a gamma ray reflector, we are left with using solid material to absorb the gamma rays and other energetic particles and re-radiate their energy as heat. (Using magnetic fields to try to contain and reflect GeV-class charged particles that form a portion of the Hawking radiation won’t work because the required fields would be too strong and too extensive, and the magnets to generate them would be exposed to massive heating by gamma radiation.)
Fortunately, we don't need to absorb all the radiation in the absorber/reflector; we only need to absorb enough to get it hot. So let's say that we position a graphite hemispherical screen to one side of a 1e8 kg ASP singularity, but instead of making it 1.5 m thick, we make it 0.75 mm thick. At that thickness it will only absorb about 5 percent of the radiation that hits it; the rest will pass right through. So we have 5e6 GW of useful energy, which we want to reduce to 5 MW/m^2 in order for the graphite to be kept at ~3000 K, where it can survive. The radius will be about 9 km, and the mass of the graphite hemisphere will be about 6e8 kg. A thin, solar-sail-like parabolic reflector with an area 50 times as great as the carbon hemisphere but 1/500th the thickness (i.e., 1.5 microns) would be positioned in front of the hemisphere, adding another 0.6e8 kg to the system, which together with the singularity and the 1e8 kg ship might be 7.6e8 kg in all. Thrust will be 0.67e8 N, so the ship would accelerate at a rate of 0.67/7.6 = 0.09 m/s^2, allowing it to reach 10 percent of the speed of light in about 11 years.
Going much faster would become increasingly difficult, because using only 5% of the energy of the singularity mass would give the system an effective exhaust velocity of about 0.22 c. Higher efficiencies might be possible if a significant fraction of the Hawking radiation came off as charged particles, allowing a thin thermal screen to capture a larger fraction of the total available energy. In this case, effective exhaust velocity would go as c times the square root of the achieved energy efficiency. But sticking with our 5% efficiency, if we wanted to reach 0.22 c we could, but we would require a mass ratio of 2.7, meaning we would need about 1.5e9 kg of propellant to feed into the ASP engine, whose mass would decrease our average acceleration by about a factor of two over the burn, meaning we would take about 40 years to reach 20 percent the speed of light.
The above analysis suggests that if ASP technology is possible, using it to terraform cold planets with orbital mini-suns will be the preferred approach. Orbiting (possibly isolated) cold worlds at distances of thousands of kilometers, and possessing the 3000 K spectra of type M red dwarf stars, potentially with gamma radiation in excess of normal stellar expectations, such mini-suns could well be detectable.
Indeed, one of the primary reasons to speculate on the design of ASP engines right now is to try to identify their likely signature. We are far away from being able to build such things. But the human race is only a few hundred thousand years old, and human civilization just a few thousand. In 1905 the revolutionary HMS Dreadnought was launched, displacing 18,000 tons. Today ships 5 times that size are common. So it is hardly unthinkable that in a century or two we will have spacecraft in the million ton (1e9 kg) class. Advanced extraterrestrial civilizations may have reached our current technological level millions or even billions of years ago, so they have had plenty of time to develop every conceivable technology. If we can think it, they can build it, and if doing so would offer them major advantages, they probably have. Thus, looking for large energetic artifacts such as Dyson Spheres, starships [7,8], or terraformed planets is potentially a promising way to carry out the SETI search, as unlike radio SETI, it requires no mutual understanding of communication conventions. Given the capabilities ASP technology would offer any species seeking to expand its prospects by illuminating and terraforming numerous new worlds, such systems may actually be quite common.
ASP starships are also feasible and might be detectable as well. However, the durations of starship flights would be measured in decades or centuries, while terraformed worlds could be perpetual. Furthermore, once settled, trade between solar systems could much more readily be accomplished by the exchange of intellectual property via radio than by physical transport. As a result, the amount of flight traffic will be limited. In addition, there could be opportunities for employment of many ASP terraforming engines within a single solar system. For example, within our own solar system there are seven worlds of planetary size (Mars, Ceres, Ganymede, Callisto, Titan, Triton, and Pluto) whose terraforming could be enhanced or enabled by ASP systems, not to mention hundreds of smaller but still considerable moons and asteroids, and potentially thousands of artificial space colonies as well. Therefore the number of ASP terraforming engines in operation in the universe at any one time most likely far exceeds those being used for starship propulsion. It would therefore appear advantageous to focus the ASP SETI search effort on such systems.
Proxima Centauri is a type M red dwarf with a surface temperature of 3000 K. It therefore has a black-body spectrum similar to that of the 3000 K graphite shell of our proposed ASP mini-sun discussed above. The difference, however, is that it has about 1 million times the power, so an ASP engine placed 4.2 light years from Earth (Proxima Centauri's distance) would have the same visual brightness as a star like Proxima Centauri positioned 4,200 light years away. Put another way, Proxima Centauri has a visual magnitude of 11. It takes 5 magnitudes to equal a 100-fold drop in brightness, so our ASP engine would have a visual magnitude of 26 at 4.2 light years, and magnitude 31 at 42 light years. The limit of optical detection of the Hubble Space Telescope is magnitude 31. So HST would be able to see our proposed ASP engine out to a distance of about 50 light years, within which there are some 1,500 stellar systems.
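The magnitude arithmetic can be sketched with the standard 2.5·log10 rule:

```python
import math

def delta_mag(flux_ratio: float) -> float:
    """Magnitude difference for a given brightness ratio (5 magnitudes = 100x)."""
    return 2.5 * math.log10(flux_ratio)

m_proxima = 11.0       # visual magnitude of Proxima Centauri at 4.2 ly
power_ratio = 1e6      # Proxima's power relative to the ASP shell

m_asp_4ly = m_proxima + delta_mag(power_ratio)   # ASP engine at 4.2 ly: mag 26
m_asp_42ly = m_asp_4ly + delta_mag(10.0**2)      # 10x the distance, 100x dimmer: mag 31
```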
Consequently ASP engines may already have been imaged by Hubble, appearing on photographs as unremarkable dim objects assumed to be far away. These should be subjected to study to see if any of them exhibit parallax. If they do, this would show that they are actually nearby objects of much lower power than stars. Further evidence of artificial origin could be provided if they were found to exhibit a periodic Doppler shift, as would occur if they were in orbit around a planetary body. An anomalous gamma ray signature could be present as well.
I suggest we have a look.
One of the great mysteries of science is why the laws of the universe are so friendly to life. Indeed, it can readily be shown that if any one of most of the twenty or so apparently arbitrary fundamental constants of nature differed from its actual value by even a small amount, life would be impossible. Some have attempted to answer this conundrum by claiming that there is nothing to be explained because there are an infinite number of universes; we just happen to live in the odd one where life is possible. This multiverse theory answer is absurd, as it could just as well be used to avoid explaining anything. For example, take the questions: why did the Titanic sink / why did it snow heavily last winter / why did the sun rise this morning / why did the moon form / why did the chicken cross the road? These can all also be answered by saying "no reason; in other universes they didn't." The Anthropic Principle reply, to the effect of "clearly they had to, or you wouldn't be asking the question," is equally useless.
Clearly a better explanation is required. One attempt at such an actual causal theory was put forth circa 1992 by physicist Lee Smolin , who says that daughter universes are formed by black holes created within mother universes. This has a ring of truth to it, because a universe, like a black hole, is something that you can’t leave. Well, says Smolin, in that case, since black holes are formed from collapsed stars, the universes that have the most stars will have the most progeny. So to have progeny a universe must have physical laws that allow for the creation of stars. This would narrow the permissible range of the fundamental constants by quite a bit. Furthermore, let’s say that daughter universes have physical laws that are close to, but slightly varied from that of their mother universes. In that case, a kind of statistical natural selection would occur, overwhelmingly favoring the prevalence of star-friendly physical laws as one generation of universes follows another.
But the laws of the universe don’t merely favor stars, they favor life, which certainly requires stars, but also planets, water, organic and redox chemistry, and a whole lot more. Smolin’s theory gets us physical laws friendly to stars. How do we get to life?
Reviewing an early draft of Smolin's book in 1994, Crane offered the suggestion that if advanced civilizations make black holes, they also make universes, and therefore universes that create advanced civilizations would have much more progeny than those that merely make stars. Thus the black hole origin theory would explain why the laws of the universe are not only friendly to life, but to the development of intelligence and advanced technology as well. Universes create life because life creates universes. This result is consistent with complexity theory, which holds that if A is necessary to B, then B has a role in causing A.
These are very interesting speculations. So let us ask, what would we see if our universe was created as a Smolin black hole, and how might we differentiate between a natural star collapse or ASP engine origin? From the above discussion, it should be clear that if someone created an ASP engine, it would be advantageous for them to initially create a small singularity, then grow it to its design size by adding mass at a faster rate than it evaporates, and then, once it reaches its design size, maintain it by continuing to add mass at a constant rate equal to the evaporation rate. In contrast, if it were formed via the natural collapse of a star it would start out with a given amount of mass that would remain fixed thereafter.
So let’s say our universe is, as Smolin says, a black hole. Available astronomical observations show that it is expanding, at a velocity that appears to be close to the speed of light. Certainly the observable universe is expanding at the speed of light.
Now a black hole has an escape velocity equal to the speed of light. So for such a universe
c^2/2 = GM/R (5)
Where G is the universal gravitational constant, c is the speed of light in vacuum, M is the mass of the universe, and R is the radius of the universe.
If we assume that G and c are constant, R is expanding at the speed of light, and τ is the age of the universe, then:
R = cτ (6)
Combining (5) and (6), we have.
M/τ = (Rc^2/2G)(c/R) = c^3/2G (7)
This implies that the mass of such a universe would be growing at a constant rate. Contrary to the classic Hoyle continuous creation theory, however, which postulated that mass creation would lead to a steady state universe featuring constant density for all eternity, this universe would have a big bang event with density decreasing afterwards inversely with the square of time.
Now the Planck mass, mp, is given by:
mp = (hc/2πG)^(1/2) (8)
And the Planck time, tp, is given by:
tp = (hG/2πc^5)^(1/2) (9)
If we divide equation (8) by equation (9) we find:
mp/tp = c^3/G (10)
If we compare equation (10) to equation (7) we see that:
M/τ = ½(mp/tp) (11)
So the rate at which the mass of such a universe would increase equals exactly ½ Planck mass per Planck time.
Comparison with Observational Astronomy
In MKS units, G = 6.674e-11, c= 3e+8, so:
M/τ = c^3/2G = 2.02277e+35 kg/s (12)
For comparison, the mass of the Sun is 1.989e+30 kg. So this is saying that the mass of the universe would be increasing at a rate of about 100,000 Suns per second.
Our universe is believed to be about 13 billion years, or 4e+17 seconds old. The Milky Way galaxy has a mass of about 1 trillion Suns. So this is saying that the mass of the universe should be about 40 billion Milky Way galaxies. Astronomers estimate that there are 100 to 200 billion galaxies, but most are smaller than the Milky Way. So this number is in general agreement with what we see.
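The convergence claimed in equations (7) through (13) is straightforward to verify numerically: c^3/2G and half a Planck mass per Planck time are the same expression analytically, and multiplying by the age of the universe gives roughly the critical mass:

```python
G = 6.674e-11            # gravitational constant, m^3/(kg s^2)
c = 2.998e8              # speed of light, m/s
h = 6.626e-34            # Planck's constant, J s
pi = 3.141592653589793

m_dot = c**3 / (2 * G)                   # equation (12): ~2.02e35 kg/s
m_p = (h * c / (2 * pi * G)) ** 0.5      # Planck mass, equation (8)
t_p = (h * G / (2 * pi * c**5)) ** 0.5   # Planck time, equation (9)
half_planck_rate = 0.5 * m_p / t_p       # equation (11): identical to m_dot

age = 4e17                               # age of the universe, s
M_universe = m_dot * age                 # equation (13): ~8e52 kg
```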
According to this estimate, the total mass of the universe M, is given by:
M = (2e+35)(4e+17) = 8e+52 kg. (13)
This number is well known. It is the critical mass required to make our universe "flat." It should be clear, however, that when the universe was half as old, with half its current diameter, this number would have needed to be half as great. Therefore, if the criterion is that such a universe's mass always be critical for flatness, and not just critical right now, then its mass must be increasing linearly with time.
These are very curious results. Black holes, the expanding universe, and the constancy of the speed of light are results of relativity theory. Planck masses and Planck times relate to quantum mechanics. Observational astronomy provides data from telescopes. It is striking that these three separate approaches to knowledge should provide convergent results.
This analysis does require that mass be continually added to the universe at a constant rate, exactly as would occur in the case of an ASP engine during steady-state operation. It differs however in that in an ASP engine, the total mass only increases during the singularity’s buildup period. During steady state operation mass addition would be balanced by mass evaporation. How these processes would appear to the inhabitants of an ASP universe is unclear. Also unclear is how the inhabitants of any Smolinian black hole universe could perceive it as rapidly expanding. Perhaps the distance, mass, time, and other metrics inside a black hole universe could be very different from those of its parent universe, allowing it to appear vast and expanding to its inhabitants while looking small and finite to outside observers. One possibility is that space inside a black hole is transformed, in a three dimensional manner analogous to a ω = 1/z transformation in the complex plane, so that the point at the center becomes a sphere at infinity. In this case mass coming into the singularity universe from its perimeter would appear to the singularity’s inhabitants as matter/energy radiating outward from its center.
Is there a model that can reconcile all the observations of modern astronomy with those that would be obtained by observers inside either a natural black hole or ASP universe? Speculation on this matter by scientists and science fiction writers with the required physics background would be welcome.
We find that ASP engines appear to be theoretically possible, and could offer great benefits to advanced spacefaring civilizations. Particularly interesting is their potential use as artificial suns to enable terraforming of unlimited numbers of cold worlds. ASP engines could also be used to enable interstellar colonization missions. However the number of ASP terraforming engines in operation in the universe at any one time most likely far exceeds those being used for starship propulsion. Such engines would have optical signatures similar to M-dwarfs, but would differ in that they would be much smaller in power than any natural M star, and hence have to be much closer to exhibit the same apparent luminosity. In addition they would move in orbit around a planetary body, thereby displaying a periodic Doppler shift, and could have an anomalous additional gamma ray component to their spectra. An ASP engine of the type discussed would be detectable by the Hubble Space Telescope at distances as much as 50 light years, within which there are approximately 1,500 stellar systems. Their images may therefore already be present in libraries of telescopic images as unremarkable dim objects, whose artificial nature would be indicated if they were found to display parallax. It is therefore recommended that such a study be implemented.
As for cosmological implications, the combination of the attractiveness of ASP engines with Smolinian natural selection theory does provide a potential causal mechanism that could explain the fine tuning of the universe for life. Whether our own universe could have been created in such a manner remains a subject for further investigation.
1. Hawking, S. W. (1974). “Black hole explosions?” Nature 248(5443): 30–31. https://ui.adsabs.harvard.edu/abs/1974Natur.248…30H/abstract
2. Hawking Radiation, Wikipedia https://en.wikipedia.org/wiki/Hawking_radiation accessed September 22, 2019.
3. Arthur C. Clarke, Imperial Earth, Harcourt Brace and Jovanovich, New York, 1976.
4. Charles Sheffield, “Killing Vector,” in Galaxy, March 1978.
5. Louis Crane and Shawn Westmoreland, “Are Black Hole Starships Possible?” 2009, 2019. https://arxiv.org/pdf/0908.1803.pdf accessed September 24.
6. Freeman Dyson, “The Search for Extraterrestrial Technology,” in Selected Papers of Freeman Dyson with Commentary, Providence, American Mathematical Society. Pp. 557-571, 1996.
7. Robert Zubrin, “Detection of Extraterrestrial Civilizations via the Spectral Signature of Advanced Interstellar Spacecraft,” in Progress in the Search for Extraterrestrial Life: Proceedings of the 1993 Bioastronomy Symposium, Santa Cruz, CA, August 16-20 1993.
8. Crane, “Searching for Extraterrestrial Civilizations Using Gamma Ray Telescopes,” available at https://arxiv.org/abs/1902.09985.
9. Robert Zubrin, The Case for Space: How the Revolution in Spaceflight Opens Up a Future of Limitless Possibility, Prometheus Books, Amherst, NY, 2019.
10. Paul Davies, The Accidental Universe, Cambridge University Press, Cambridge, 1982
11. Lee Smolin, The Life of the Cosmos, Oxford University Press, NY, 1997.
12. Louis Crane, “Possible Implications of the Quantum Theory of Gravity: An Introduction to the Meduso-Anthropic principle,” 1994. https://arxiv.org/PS_cache/hep-th/pdf/9402/9402104v1.pdf
13. I provided a light hearted explanation in my science fiction satire The Holy Land (Polaris Books, 2003) where the advanced extraterrestrial priestess (3rd Class) Aurora mocks the theory of the expanding universe held by the Earthling Hamilton. “Don’t be ridiculous. The universe isn’t expanding. That’s obviously physically impossible. It only appears to be expanding because everything in it is shrinking. What silly ideas you Earthlings have.” In a more serious vein, the late physicist Robert Forward worked out what life might be like on a neutron star in his extraordinary novel Dragon’s Egg (Ballantine Books, 1980.) A similar effort to describe life on the inside of a black hole universe could be well worthwhile. Any takers?
Ladderdown transmutation reactors are fringe science invented by Wil McCarthy for his science fiction novel Bloom. It is certainly nothing we will be capable of making anytime soon, but it will take somebody more knowledgeable than me to prove it impossible. Offhand I do not see anything that outright violates the laws of physics. Ladderdown is unobtainium, not handwavium.
Basically ladderdown reactors obtain their energy the same way nuclear fission does: by splitting atomic nuclei and releasing the binding energy. It is just that the ladderdown reactor can work with any element heavier than Iron-56, and the splitting does not release any neutrons or gamma radiation. Nuclear fission only works with fission fuel, and any anti-nuclear activist can tell you horror stories about the dire radiation produced.
Apparently ladderdown reactors remove protons and neutrons from the fuel material one at a time, by quantum tunneling, quietly. This is unlike fission, which shoots neutrons like bullets at nuclei, shattering the nucleus into sprays of radiation and exploding fission products.
As with fission, the laddered-down nuclei release the difference in binding energy and move down the periodic table. The process comes to a screeching halt when the fuel transmutes into Iron-56, since it sits at the bottom of the binding energy curve (i.e., Iron-56 has the highest binding energy per nucleon). In the novel iron is the most worthless element for this reason, and so is used for cheap building material.
Ladderdown reactors can also take fuel elements that are lighter than Iron-56, and add protons and neutrons one at a time, to make heavier elements (called "ladderup"). This is the ladderdown version of fusion, except it will work with any element lighter than Iron-56 and there is no nasty radiation produced. This is handy because laddering down heavy elements produces lots of protons as a by product, which can be laddered up into Iron-56.
Late breaking news: as it turns out, Nickel-62 has microscopically more binding energy per nucleon than Iron-56. Actually not so much "late-breaking" as "totally ignored". This has been known since the 1960s.
Mass Converters are fringe science. You see them in novels like Heinlein's Farmer in the Sky, James P. Hogan's Voyage from Yesteryear, and Vonda McIntyre's Star Trek II: The Wrath of Khan. You load the hopper with anything made of matter (rocks, raw sewage, dead bodies, toxic waste, old AOL CD-ROMS, belly-button lint, etc.) and electricity comes out the other end. In the appendix to the current edition of Farmer in the Sky Dr. Jim Woosley is of the opinion that the closest scientific theory that would allow such a thing is Preon theory.
Preon theory was all the rage back in the 1980's, but it seems to have fallen into disfavor nowadays (due to the unfortunate fact that the Standard Model gives better predictions, and absolutely no evidence of preons has ever been observed). Current nuclear physics holds that all subatomic particles are either leptons or composed of groups of quarks. The developers of Preon theory thought that two classes of elementary particles did not sound very elementary at all. So they theorized that both leptons and quarks are themselves composed of smaller particles, pre-quarks or "preons". This would have many advantages.
One of the most complete preon theories was Dr. Haim Harari's Rishon model (1979). The point of interest for our purposes is that the sub-components of electrons, neutrons, protons, and electron anti-neutrinos contain precisely enough rishon-antirishon pairs to completely annihilate. All matter is composed of electrons, neutrons, and protons. Thus it is theoretically possible, in some as yet undiscovered way, to cause these rishons and antirishons to mutually annihilate and thus convert matter into energy.
Both James P. Hogan and Vonda McIntyre knew a good thing when they saw it, and quickly incorporated it into their novels.
Back about the same time, when I was a young man, I thought I had come up with a theoretical way to make a mass converter. Unsurprisingly it wouldn't work. My idea was to use a portion of antimatter as a catalyst. You load in the matter, and from the antimatter reserve you inject enough antimatter to convert all the matter into energy. Then feed half (or a bit more than half depending upon efficiency) into your patented Antimatter-Maker™ and replenish the antimatter reserve. The end result was that you fed in matter, the energy of said matter came out, and the antimatter enabled the reaction but came out unchanged (i.e., the definition of a "catalyst").
Problem #1 was that pesky Law of Baryon Number Conservation, which would force the Antimatter-Maker to produce equal amounts of matter and antimatter. Which would mean that either your antimatter reserve would gradually be consumed or there would be no remaining energy to be output, thus ruining the entire idea. Drat!
Problem #2 is that while electron-positron annihilation produces 100% of the energy in the form of gamma-rays, proton-antiproton annihilation produces 70% as energy and 30% as worthless muons and neutrinos.
Pity, it was such a nice idea too. If you were hard up for input matter, you could divert energy away from the Antimatter-maker and towards the output. Your antimatter reserve would diminish, but if you found more matter later you could run the mass converter and divert more energy into the Antimatter-maker. This would replenish your reserve. And if you somehow totally ran out of antimatter, if another friendly ship came by it could "jump-start" you by connecting its mass converter energy output directly to your Antimatter-maker and run it until you had a good reserve.
Basically this is when a ship lands at the spaceport, hooks up to the port's electrical umbilical cable, pays the service charge, and powers down the ship's internal nuclear reactor. This reduces the ship's consumption of reactor fuel. There might be port anti-idling laws requiring the use of shorepower if the ship's internal power source gives off air pollution, radiation, or whatever.
If the ship insists on using its internal nuclear reactor, it may require a coolant connection from the spaceport. The ship's reactor radiators may not work very well when landed, or the spaceport may not want megawatts of thermal plumes blowing around the landing pads.
On December 17, 1929, the city of Tacoma, Washington was suffering from a drought. The city's hydroelectric dams did not have enough water to generate electricity. Tacoma was about to go dark. They begged President Herbert Hoover to help.
It just so happened that the aircraft carrier USS Lexington (CV 2) was being refurbished at the Puget Sound Navy Yard, right near Tacoma. The Lexington was dispatched to Tacoma's Baker Dock, was hooked up to the city's power grid, and used its steam turbines to generate power. The ship stayed at Baker Dock from December 17, 1929 to January 16, 1930, feeding the city 4 million kilowatt-hours of electricity. By mid January enough snow had melted to power the hydroelectric dams and the Lexington could disconnect. The city was saved.
This is called "Ship to Shore Power for Humanitarian Purposes".
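For a sense of scale, the Lexington's average output is a quick back-of-the-envelope calculation (treating the one-month stay as roughly 30 days):

```python
# Average electrical power the Lexington supplied to Tacoma:
# 4 million kilowatt-hours delivered over roughly 30 days.
energy_kwh = 4_000_000
hours = 30 * 24
avg_kw = energy_kwh / hours
print(f"Average output: {avg_kw / 1000:.1f} MW")  # about 5.6 MW
```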
Dave Hinerman noted that in the 1950s and 60s the US Navy had at least one destroyer outfitted with additional generators to provide emergency power to shore installations and cities.
Note that if the ship supplying the power is using a nuclear reactor, it will suck cold water from the sea as reactor coolant. If the ship plant is using coal, oil, or other petrochemical, it will just need a smokestack. Meaning that a hypothetical nuclear-powered aircraft will have a hard time supplying a land-locked city with power unless there is a nearby lake to supply coolant.
In a RocketPunk future, any spacecraft with an onboard power source that does not depend upon the engines thrusting can do the same trick. This can come in handy if the planetary spaceport got hit by a hurricane, a Belter asteroid colony suffers a failure of their nuclear reactor, or if a settlement suffering from the Long Night does not have the spare part or the repair skills to fix their power plant. A visiting spacecraft can save the day. This would be a useful capability to build into a Long Night Insurance Ship.
This can be tricky if the spacecraft's power reactor relies upon a heat radiator for cooling the reactor. Liquid droplet radiators do not like being used on a planet with significant gravity, and they are problematic on a planet with a windy atmosphere. The radiant heat can also damage anything that gets too close: other spacecraft, space stations, careless astronauts, industrial installations, etc. Landed on a planet with an atmosphere, plumes of very hot air blowing around can be a problem.
A spacecraft with a Bimodal Nuclear Engine is especially suited to do Ship to Shore, since it is already set up to produce electricity.
Stanley Borowski's Bimodal NTR spacecraft has much the same problem. Under thrust the nuclear reactor is cranking out a whopping 335 megawatts. But when used bimodally as a power generator it is throttled down to only 110 kilowatts. This is mostly due to the problem of dealing with the waste heat.
Under thrust, the waste heat from the reactor at 335 megawatts is gotten rid of by the magic of open-cycle cooling. This adds zero penalty-mass to the ship's structure.
Lamentably, when used as a power generator, open-cycle cooling cannot be used. Instead, a physical heat radiator is employed, which does add penalty-mass. Borowski cut the power budget to the bone with a measly 110 kilowatts, but even that needed 71 square meters of radiator.
The equivalent of Dr. Bradbeer's turboelectric design would be Borowski's Bimodal Hybrid NTR NEP. This has NTR rocket engines with open-cycle cooling, but they are only used for thrust-critical parts of the mission. For the rest it has lots and lots of heat radiators and a large electrical power plant used to feed an ion drive.
While the author was training to become a US Navy Enlisted Reactor Operator, qualified operators repeatedly stated, “This sub could power a small city.” In a similar vein, it was proposed that US Navy ships should provide electrical power during the response to Hurricane Katrina in New Orleans. These off the cuff assessments prompted a more realistic assessment: is it feasible to power facilities ashore from a ship?
During World War II, there were seven destroyer escorts converted into Turbo-Electric Generators (TEG) specifically for the purpose of providing electrical power to shore facilities. They were the Donnell (DE-56), Foss (DE-59), Whitehurst (DE-634), Wiseman (DE-667), Marsh (DE-699), and two British lend-lease ships; Spragge (K-572, ex-DE-563) and Hotham (K-583 ex-DE-574). Data for these ships are sparse in general.
Consider the Wiseman, for which more data is available. This ship had oil-fired boilers producing steam to turn turbine generators which in turn powered electric propulsion motors. This electric ship configuration is optimal for providing electric power ashore since all the power in the ship is already being converted to electricity. The Wiseman had transformers and cable reels topside to deliver power at high voltages over relatively long distances. Wiseman powered the city of Manila during WWII and the port of Masan during the Korean War. Wiseman delivered 5,806,000 kWh to Manila over five and a half months, giving an average generation capability greater than 1.4 MW.
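The 1.4 MW figure can be reproduced from the delivery numbers (assuming an average month of about 30.4 days):

```python
# Sanity check on the Wiseman figures: 5,806,000 kWh delivered over
# about five and a half months (~30.4 days per average month).
energy_kwh = 5_806_000
hours = 5.5 * 30.4 * 24      # ~4,013 hours
avg_mw = energy_kwh / hours / 1000
print(f"Average delivery: {avg_mw:.2f} MW")  # a bit over 1.4 MW
```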
The US Army also used ship to shore power to power remote stations. One notable case is that of the Sturgis/MH-1A, a WWII-era Liberty ship equipped with a nuclear power plant used to provide power to the Panama Canal Zone from 1968 to 1975. The MH-1A power plant on the Sturgis generated 10 MW of electrical power, which allowed the canal locks to be operated more frequently.
Thus history shows that ships can provide power to the shore, if only in limited amounts, and using specialized ships.
There are currently no US Navy ships designed specifically to provide power to the shore. They are however designed to be powered from the shore, and this capability could be used to act as a power source. For example, the author's ship, USS Key West (SSN-722), a Los Angeles class nuclear powered fast attack submarine, once received 'shore power' from a destroyer while moored alongside the destroyer anchored off Monaco. This allowed the labor-intensive nuclear reactor plant on the submarine to be shut down. The gas turbine generators on the destroyer required fewer watchstanders and had to run anyway to power the destroyer's own loads. This anecdotal evidence shows that power can be made to flow from at least one US Navy ship and conceivably could flow from most.
The capability to provide power can be evaluated by considering the ship as a load and assuming that whatever power it can draw, it can deliver. For USN ships smaller than carriers and amphibious ships, the unit of measure is the single shore power cable. These cables are rated to 400 A at 450 V 3-phase, or 0.312 MW assuming a unity power factor. Submarines and surface combatant ships typically can connect up to eight cables, yielding a total of 2.5 MW. For a carrier, the shore power supply must deliver 21 MVA at 4160 V. Amphibious ships are presumably between these values. Without significant changes, current Navy ships could theoretically supply 2.5 to 21 MW of electrical power to the shore. This again assumes generation capacity to match the ship as a load, and also assumes this capacity is above that required to power the ship and its power plant.
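The per-cable rating follows from the standard three-phase power formula, P = √3 × V × I × pf:

```python
import math

# Rating of one Navy shore power cable: 400 A at 450 V, 3-phase,
# assuming a unity power factor as the text does.
volts, amps, power_factor = 450, 400, 1.0
cable_mw = math.sqrt(3) * volts * amps * power_factor / 1e6
print(f"One cable:    {cable_mw:.3f} MW")      # ~0.312 MW
print(f"Eight cables: {8 * cable_mw:.2f} MW")  # ~2.5 MW
```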
What if more power is needed? More ships could be used, but there is also more power onboard each ship. This other power is the power for propulsion. Remembering the Wiseman, it was an ideal ship for supplying power because all the power of the boilers was first converted to electricity by turbine generators. Today's Navy ships are not 'all electric' and so a significant portion of the power onboard is dedicated to propulsion and is often coupled directly to the propeller shafts. Steam plants fired by oil or nuclear reactors offer a sort of middle ground. While the propulsion turbines are coupled to the shafts, the steam can be diverted upstream. In this scheme, high pressure steam would be piped out of the ship and used to drive a larger turbine generator. The spent low energy steam and condensate would then be piped back into the ship and into the condensate system, closing the loop. Piping is not as forgiving or flexible as cabling; this would not be a trivial setup and is probably impossible for a submarine.
Considering the publicly available shaft horsepower ratings for the ship as the electrical power available, it is clear that much more power is in the hulls than is available through the shore power connections.
Power Available From Steam Plant Ships

| Ship type | Shaft horsepower | Total (MW) |
|---|---|---|
| Fast Attack Submarine | 35,000 | 26 |
| Large Deck Amphib | 70,000 | 52 |
| Carrier | 260,000 | 194 |
Note that most surface combatants are driven by gas turbine or diesel engines and their propulsion power cannot feasibly be extracted from the ship.
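The "Total" megawatt column in the table above is just shaft horsepower converted at roughly 745.7 W per horsepower:

```python
# Shaft horsepower converted to megawatts (1 hp ~ 745.7 W).
HP_TO_W = 745.7
ships = {"Fast Attack Submarine": 35_000,
         "Large Deck Amphib": 70_000,
         "Carrier": 260_000}
megawatts = {ship: shp * HP_TO_W / 1e6 for ship, shp in ships.items()}
for ship, mw in megawatts.items():
    print(f"{ship}: {mw:.0f} MW")
```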
The Navy is driving toward all electric ships in a case of history repeating. This is driven by the desire to access propulsion power to supply combat systems. As stated previously, all electric ships are ideal for providing power to shore since all their power is first converted to electricity. The future destroyer DD(X) is being designed as an all electric ship with two Rolls-Royce MT-30 gas turbine generators producing a total of 78MW of electrical power.
The future carrier CVN(X) will be nuclear powered and have a steam plant but will also have increased electrical generation (104MVA) to support launching planes using electrical power.
Neither DD(X) nor CVN(X) is designed to deliver power outside the hull, but it would be easier to export it as electrical current than as steam.

Loads
To investigate the claim of powering a small city, a ‘rough order of magnitude’ (ROM) calculation was performed. The author’s most recent electrical utility bill was used to determine the average power of a house, and then this number was used to determine how many houses could be powered. The bill was for 1203kWh over a 29 day period giving an average load of 1.7kW. Again, this is a ROM calculation and does not incorporate seasonal variations in power use nor the likelihood of reduced use in an emergency situation.
Using the existing shore power connections in reverse, the submarine can power 1,500 nominal homes: more of a town than a small city. The carrier can power 12,000 houses, and that is a small city.
Powering Houses with Existing Ships' Equipment

| Ship type | Shore power (MW) | Houses |
|---|---|---|
| Submarine | 2.5 | 1,500 |
| Carrier | 21 | 12,000 |
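The houses-per-ship figures can be rechecked from the author's 1.7 kW household average; the paper rounds the results:

```python
# Rough-order-of-magnitude houses-per-ship, using the paper's
# household average of 1203 kWh over a 29-day billing period.
house_kw = 1203 / (29 * 24)      # ~1.73 kW per house
supply_mw = {"Submarine": 2.5, "Carrier": 21}
houses = {ship: mw * 1000 / house_kw for ship, mw in supply_mw.items()}
for ship, n in houses.items():
    print(f"{ship}: ~{n:,.0f} houses")
```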
If the steam plant of an amphib or a carrier were modified to increase electrical generation to match propulsion power, many more houses could be powered, equivalent to a medium city based on population only.
Powering Houses with Steam Plants

| Ship type | Steam plant power (MW) | Houses |
|---|---|---|
| Amphib | 52 | 31,000 |
| Carrier | 190 | 110,000 |
Lastly, if all the generation capability of future ship classes could be made available external to the hull, a large population could be supplied.
Powering Houses with Future Ships

| Ship type | Generation (MW) | Houses |
|---|---|---|
| DD(X) | 78 | 46,000 |
| CVN(X) | 104 | 61,000 |
The US Navy is not chartered to act as a power utility, so it is not likely to power the shore except at forward military or disaster locations. In these cases, residential housing is not likely to be the first load supplied. Instead, hospitals and other vital infrastructure are likely to receive priority. This prioritization is important since a single hospital can be a significant load. Based on one report discussing emergency generation installation, a value of 2 MW per hospital was determined.
Using shore power, a submarine or surface combatant can power one hospital with a small surplus. This undermines the claim for a small city since few loads will be powered after the hospital. A carrier can power ten and a half hospitals, likely allowing some residential power after the vital infrastructure is supplied.
Often the power plant generates more power than is currently needed. Spacecraft cannot afford to throw the excess power away; it has to be stored for later use. This is analogous to Terran solar power plants: they don't work at night, so you have to store some power by day.
There are a couple of instances where people make the mistake of labeling something a "power source" when actually it is an "energy transport mechanism." The most common example is hydrogen. Let me explain.
In the so-called "hydrogen economy", proponents point out how hydrogen is a "green" fuel, unlike nasty petroleum or gasoline. Burn gasoline and in addition to energy you also produce toxic air pollution. Burn hydrogen and the only additional product is pure water.
The problem is they are calling the hydrogen a fuel, which it isn't.
While there do exist petroleum wells, there ain't no such thing as a hydrogen well. You can't find hydrogen just lying around somewhere, the stuff is far too reactive. Hydrogen has to be generated by some other process, which consumes energy (such as electrolysing water using electricity generated by a coal-fired power plant). Not to mention the energy cost of compressing the hydrogen into liquid, transporting the liquid hydrogen in a power-hungry cryogenically cooled tank, and the power required to burn it and harvest electricity.
This is why hydrogen is not a fuel, it is an energy transport mechanism. It is basically being used to transport the energy from the coal-fired power plant into the hydrogen burning automobile. Or part of the energy, since these things are never 100% efficient.
In essence, the hydrogen is filling much the same role as the copper power lines leading from a power plant to a residential home. It is transporting the energy from the plant to the home. Or you can look at the hydrogen as a sort of rechargeable battery, for example as used in a regenerative fuel cell. But one with rather poor efficiency.
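The "poor efficiency" point can be made concrete by chaining losses along the hydrogen path. The step efficiencies below are illustrative assumptions, not measured figures:

```python
# Why hydrogen is an energy *transport* mechanism: multiply the
# efficiency of every step from power plant to wheels. All figures
# here are illustrative assumptions for the sake of the arithmetic.
steps = {
    "electrolysis":      0.70,
    "liquefaction":      0.70,
    "storage/transport": 0.90,
    "fuel cell":         0.50,
}
round_trip = 1.0
for step, eff in steps.items():
    round_trip *= eff
print(f"Delivered fraction of plant energy: {round_trip:.2f}")  # ~0.22
```

Under these assumptions less than a quarter of the power plant's energy reaches the end user; the rest is the cost of using hydrogen as the wire.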
The main example from science fiction is antimatter "fuel." Unless the science fiction universe contains antimatter mines, it is an energy transport mechanism with a truly ugly efficiency.
What is needed are so-called "secondary" batteries, commonly known as "rechargeable" batteries. If the batteries are not rechargeable then they are worthless for power storage. As you probably already figured out, "primary" batteries are the non-rechargeable kind; like the ones you use in your flashlight until they go dead, then throw into the garbage.
Current rechargeable batteries are heavy, bulky, vulnerable to the space environment, and have a risk of bursting into flame. Just ask anybody who had their laptop computer unexpectedly do an impression of an incendiary grenade.
Nickel-Cadmium and Nickel-Hydrogen rechargeables have a specific energy of 24 to 35 Wh/kg (0.086 to 0.13 MJ/kg), an energy density of 0.01 to 0.08 Wh/cm³, and an operating temperature range of -5 to 30°C. They have a service life of more than 50,000 recharge cycles, and a mission life of more than 10 years. Their drawbacks are being heavy, bulky, and having a limited operating temperature range.
Lithium-Ion rechargeables have a specific energy of 100 Wh/kg (0.36 MJ/kg), an energy density of 0.25 Wh/cm³, and an operating temperature range of -20 to 30°C. They have a service life of about 400 recharge cycles, and a mission life of about 2 years. Their drawbacks are the pathetic service and mission life.
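To see what these specific-energy numbers mean for a design, here is a hypothetical sizing exercise using the lithium-ion figure above; the 5 kW load and 90-minute orbital eclipse are made-up inputs:

```python
# Battery mass from specific energy: how much lithium-ion battery
# (100 Wh/kg, per the text) does it take to ride out a 90-minute
# orbital night at a 5 kW load? Load and eclipse time are
# illustrative assumptions.
load_w = 5_000
eclipse_h = 1.5
need_wh = load_w * eclipse_h    # 7,500 Wh
mass_kg = need_wh / 100         # 100 Wh/kg specific energy
print(f"{need_wh:.0f} Wh -> {mass_kg:.0f} kg of batteries")  # 75 kg
```

A real design would carry more than this, since batteries are never discharged to 100% depth if you want them to survive many cycles.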
A flywheel is a rotating mechanical device that is used to store rotational energy. In a clever "two-functions-for-the-mass-price-of-one" bargain a flywheel can also be used as a momentum wheel for attitude control. NASA adores these bargains because every gram counts.
Flywheels have a theoretical maximum specific energy of 2,700 Wh/kg (9.7 MJ/kg). They can quickly deliver their energy, can be fully discharged repeatedly without harm, and have the lowest self-discharge rate of any known electrical storage system. NASA is not currently using flywheels, though they did have a prototype for the ISS that had a specific energy of 30 Wh/kg (0.11 MJ/kg).
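Flywheel storage is just E = ½Iω². A sketch for a hypothetical solid-disc wheel (all dimensions are illustrative; a real wheel is limited by rim stress, not by the formula):

```python
import math

# Kinetic energy of a flywheel: E = 1/2 * I * omega^2.
# For a uniform solid disc, I = 1/2 * m * r^2.
# Mass, radius, and spin rate are illustrative assumptions.
mass_kg = 50.0
radius_m = 0.25
rpm = 30_000
omega = rpm * 2 * math.pi / 60          # angular speed, rad/s
inertia = 0.5 * mass_kg * radius_m**2   # kg*m^2
energy_j = 0.5 * inertia * omega**2
wh_per_kg = energy_j / 3600 / mass_kg
print(f"{energy_j / 1e6:.1f} MJ stored, {wh_per_kg:.0f} Wh/kg")
```

This made-up wheel comes out around 43 Wh/kg, in the same ballpark as the 30 Wh/kg ISS prototype mentioned above and far below the 2,700 Wh/kg theoretical limit.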
A "regenerative" or "reverse" fuel cell is one that saves the water output, and uses a secondary power source (such as a solar power array) to run an electrolyser to split the water back into oxygen and hydrogen. This is only worthwhile if the mass of the secondary power source is low compared to the mass of the water. But it is attractive since most life support systems are already going to include electrolysers anyway.
In essence the secondary power source is creating fuel-cell fuel as a kind of battery to store power. It is just that a fuel cell is required to extract the power from the "battery."
Currently there exist no regenerative fuel cells that are space-rated. The current goal is for such a cell with a specific energy of up to 1,500 Wh/kg (5.4 MJ/kg), a charge/discharge efficiency up to 70%, and a service life of up to 10,000 hours.
Superconducting magnetic energy storage at a glance:
- Specific energy: 4–40 kJ/kg
- Energy density: less than 40 kJ/L
- Specific power: 10–100,000 kW/kg
- Charge/discharge efficiency: 95%
- Self-discharge rate: >0% at 4 K, 100% at 140 K
- Cycle durability: unlimited cycles
Superconducting Magnetic Energy Storage (SMES) systems store energy in the magnetic field created by the flow of direct current in a superconducting coil which has been cryogenically cooled to a temperature below its superconducting critical temperature.
A typical SMES system includes three parts: superconducting coil, power conditioning system and cryogenically cooled refrigerator. Once the superconducting coil is charged, the current will not decay and the magnetic energy can be stored indefinitely.
The stored energy can be released back to the network by discharging the coil. The power conditioning system uses an inverter/rectifier to transform alternating current (AC) power to direct current or convert DC back to AC power. The inverter/rectifier accounts for about 2–3% energy loss in each direction. SMES loses the least amount of electricity in the energy storage process compared to other methods of storing energy. SMES systems are highly efficient; the round-trip efficiency is greater than 95%.
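The stored energy in the coil is E = ½LI², and the inverter/rectifier takes its 2-3% in each direction. The coil values below are illustrative assumptions:

```python
# Energy stored in a superconducting coil: E = 1/2 * L * I^2,
# then apply the ~3% inverter/rectifier loss each way.
# Inductance and current are illustrative assumptions.
inductance_h = 10.0     # henries
current_a = 1_000.0     # amperes
stored_j = 0.5 * inductance_h * current_a**2    # 5 MJ
round_trip = 0.97 * 0.97                        # charge + discharge
print(f"Stored: {stored_j / 1e6:.1f} MJ, "
      f"delivered after conversion: {stored_j * round_trip / 1e6:.2f} MJ")
```

Note that the coil itself is lossless while superconducting; essentially all the electrical loss is in the power conditioning system, which is why round-trip efficiency stays above 95%.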
Due to the energy requirements of refrigeration and the high cost of superconducting wire, SMES is currently used for short duration energy storage. Therefore, SMES is most commonly devoted to improving power quality.
Low-temperature versus high-temperature superconductors
Under steady state conditions and in the superconducting state, the coil resistance is negligible. However, the refrigerator necessary to keep the superconductor cool requires electric power and this refrigeration energy must be considered when evaluating the efficiency of SMES as an energy storage device.
Although the high-temperature superconductor (HTSC) has higher critical temperature, flux lattice melting takes place in moderate magnetic fields around a temperature lower than this critical temperature. The heat loads that must be removed by the cooling system include conduction through the support system, radiation from warmer to colder surfaces, AC losses in the conductor (during charge and discharge), and losses from the cold-to-warm power leads that connect the cold coil to the power conditioning system. Conduction and radiation losses are minimized by proper design of thermal surfaces. Lead losses can be minimized by good design of the leads. AC losses depend on the design of the conductor, the duty cycle of the device and the power rating.
The refrigeration requirements for HTSC and low-temperature superconductor (LTSC) toroidal coils, for the baseline temperatures of 77 K, 20 K, and 4.2 K, increase in that order. The refrigeration requirement here is defined as the electrical power needed to operate the refrigeration system. As the stored energy increases by a factor of 100, refrigeration cost only goes up by a factor of 20. Also, the savings in refrigeration for an HTSC system is larger (by 60% to 70%) than for an LTSC system.
The popular conception of a black hole is that it sucks everything in, and nothing gets out. However, it is theoretically possible to extract energy from a black hole, for certain values of "from."
And by the way, there appears to be no truth to the rumor that Russian astrophysicists use a different term, since "black hole" in the Russian language has a scatological meaning. It's an urban legend, I don't care what you read in Dragon's Egg.
For incredibly dense objects with escape velocities higher than the speed of light, which warp the very fabric of space around them, black holes are remarkably simple. Due to their very nature they have only three characteristics: mass, spin (angular momentum), and electric charge. All the other characteristics got crushed away (well, technically they also have magnetic moment, but that is uniquely determined by the other three). All black holes have mass, but some have zero spin and others have zero charge.
There are four types of black holes. If it only has mass, it is a Schwarzschild black hole. If it has mass and charge but no spin, it is a Reissner-Nordström black hole. If it has mass and spin but no charge it is a Kerr black hole. And if it has mass, charge and spin it is a Kerr-Newman black hole. Since practically all natural astronomical objects have spin but no charge, all naturally occurring black holes are Kerr black holes, the others do not exist naturally. In theory one can turn a Kerr black hole into a Kerr-Newman black hole by shooting charged particles into it for a few months, say from an ion drive or a particle accelerator.
From the standpoint of extracting energy, the Kerr-Newman black hole is the best kind, since it has both spin and charge. In his The McAndrew Chronicles, Charles Sheffield calls them "Kernels", actually "Ker-N-el", which is shorthand for Kerr-Newman black hole.
The spin acts as a super-duper flywheel. You can add or subtract spin energy to the Kerr-Newman black hole by using the Penrose process. Just don't extract all the spin, or the blasted thing turns into a Reissner-Nordström black hole and becomes worthless. The attractive feature is that this process is far more efficient than nuclear fission or thermonuclear fusion. And the stored energy doesn't leak away either.
The electric charge is so you can hold the thing in place using electromagnetic fields. Otherwise there is no way to prevent it from wandering through your ship and gobbling it up.
The assumption is that Kerr-Newman black holes of manageable size can be found naturally in space, already spun up and full of energy. If not, they can serve as a fantastically efficient energy transport mechanism.
Alert readers will have noticed the term "manageable size" above. It is impractical to use a black hole with a mass comparable to the Sun. Your ship would need an engine capable of moving something as massive as the Sun, and the gravitational attraction of the black hole would wreck the solar system. So you just use a smaller mass black hole, right? Naturally occurring small black holes are called "Primordial black holes."
Well, there is a problem with that. In 1975 legendary physicist Stephen Hawking discovered the shocking truth that black holes are not black (well, actually the initial suggestion was from Dr. Jacob Bekenstein). They emit Hawking radiation, for reasons that are so complicated I'm not going to even try to explain them to you (go ask Google). The bottom line is that the smaller the mass of the black hole, the more deadly radiation it emits. The radiation will be the same as a "black body" with a temperature of:
T = 6 × 10⁻⁸ / M kelvins
where "M" is the mass of the black hole in units where the mass of the Sun equals one. The Sun has a mass of about 1.9891 × 10³⁰ kilograms.
Jim Wisniewski created an online Hawking Radiation Calculator to do the math for you.
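For those who prefer to see the arithmetic, here is the same calculation in Python, using the approximate 6 × 10⁻⁸ coefficient from the formula above:

```python
# Hawking temperature from the formula above: T = 6e-8 / M kelvins,
# with M in solar masses. Example: a one-million-metric-ton black hole.
M_SUN_KG = 1.9891e30
hole_kg = 1e9                       # one million metric tons
m_solar = hole_kg / M_SUN_KG
temp_k = 6e-8 / m_solar             # ~1.2e14 K
kt_gev = temp_k * 8.617e-5 / 1e9    # Boltzmann constant (eV/K) -> GeV
print(f"T = {temp_k:.2e} K, kT = {kt_gev:.1f} GeV")
```

A million-ton hole radiates at roughly 10 GeV temperatures, which is why the table below quotes kT in GeV rather than kelvins.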
In The McAndrew Chronicles Charles Sheffield hand-waved an imaginary force field that somehow contained all the deadly radiation. One also wonders if there is some way to utilize the radiation to generate power.
In the table:
- R is the black hole's radius in attometers (units of one-quintillionth or 10⁻¹⁸ of a meter). A proton has a diameter of 1000 attometers.
- M is the mass in millions of metric tons. One million metric tons is about the mass of three Empire State buildings.
- kT is the Hawking temperature in GeV (units of one billion electron volts).
- P is the estimated total radiation output power in petawatts (units of one-quadrillion watts). 1–100 petawatts is the estimated total power output of a Kardashev type 1 civilization.
- P/c2 is the estimated mass-leakage rate in grams per second.
- L is the estimated life expectancy of the black hole in years. 0.04 years is about 15 days. 0.12 years is about 44 days.
Table is from Are Black Hole Starships Possible?, thanks to magic9mushroom for this link.