Nuclear weapons stockpile stewardship

The Rocketry Forum


Winston

Cool videos. The code being developed via "stockpile stewardship," a valid and necessary pursuit, also provides the code needed to design new weapons. Note that, as revealed in the first video, the construction of high powered lasers by the nuclear weapons labs for inertial confinement fusion began long before the need for "stockpile stewardship" brought about by the nuclear weapons test ban.

National Ignition Facility
12 Aug 2019



Note the implosion simulations:

Exascale Computing is Here: El Capitan
12 Aug 2019

 
Replaced the video at the dead link above (something in it that was classified?):



New one:

 
I've seen some old (70s-80s) videos on YouTube showing the manufacture of nuclear weapon components for the B-61 (NEVER the explosives or the pit, BTW) that I'm amazed could be shown, not because I have any insider knowledge, but simply because of the level of detail in those unfortunately crappy-quality old videos. I have no idea what, if anything, shouldn't have been shown in that National Ignition Facility video that disappeared.
 
The international optics and nuclear communities have been following inertial confinement fusion (ICF) for decades now. Most of the interest has stemmed from hopes of controlled fusion, but thanks to the stockpile stewardship program, the NIF was built as the largest of all ICF systems. These facilities have only grown in size and complexity since high-power lasers were developed in the '60s and '70s.

While the details of experiments are controlled by the users, results are published on a regular basis, and there's more than enough to keep one busy for a long time.

https://lasers.llnl.gov
https://www.llnl.gov/news/publications
https://en.wikipedia.org/wiki/National_Ignition_Facility
https://en.wikipedia.org/wiki/Inertial_confinement_fusion
 
ICF is nice for fusion studies, but it will most likely never be a practical fusion energy source. And as I said above, "the construction of high powered lasers by the nuclear weapons labs for inertial confinement fusion began long before the need for 'stockpile stewardship' brought about by the nuclear weapons test ban." Also, note how the construction of the NIF coincides with the start of the comprehensive nuclear test ban. Always read between the lines.

So we're back to ICF's usefulness in "fusion studies," just not the peaceful kind. ;) ICF is great for developing the computer code needed to design nukes on a supercomputer, something which also happens to be useful in "stockpile stewardship." ICF is exactly how thermonuclear bombs function: radiation pressure used for extreme adiabatic compression. That was the "H-Bomb Secret" the government unsuccessfully tried to prevent from being published 40 years ago:

https://progressive.org/magazine/november-1979-issue/

It actually wasn't much of a secret.
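For a sense of scale on the "radiation pressure" part, here's a minimal Python sketch using nothing but textbook blackbody physics (photon-gas pressure P = aT^4/3 with a = 4σ/c). The radiation temperatures are arbitrary illustrative values, not anything taken from NIF or weapons data:

```python
# Rough scale of blackbody radiation pressure vs. temperature.
# Textbook physics only: P_rad = a * T^4 / 3, with a = 4*sigma/c.
# The temperatures below were chosen for illustration; nothing here is weapons data.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C     = 2.99792458e8     # speed of light, m/s
A_RAD = 4 * SIGMA / C    # radiation constant, J m^-3 K^-4

def radiation_pressure(temp_kelvin):
    """Isotropic photon-gas pressure in pascals at the given temperature."""
    return A_RAD * temp_kelvin**4 / 3

for t_ev in (100, 250, 300):          # radiation temperatures in eV, illustrative only
    t_k = t_ev * 11604.5              # 1 eV corresponds to about 11,604.5 K
    p_pa = radiation_pressure(t_k)
    print(f"T_rad = {t_ev:3d} eV  ->  P_rad ~ {p_pa:.2e} Pa  (~{p_pa / 1e9:.1f} GPa)")
```

Even at a few hundred eV, the photon-gas pressure alone is already in the tens of gigapascals.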

Note that the recent, ongoing restoration of US nuke test films, which can be seen on YouTube (at least the ones they release), is intended, as they stated, to bring the yield estimates made in the distant past from those films from about 90% accuracy up to 99%. How is that useful? "Hey, let's take the blueprint design for this bomb under test in this restored film, now a video, run it through our constantly-being-perfected simulation code, and see how close the result is to the test yield. We'll further refine the code with data obtained via 'stockpile stewardship' efforts and repeat."
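As an aside on how yields were pulled off those films in the first place: the classic open-literature method is G. I. Taylor's blast-wave scaling, E ≈ ρ R^5 / t^2, with the fireball radius R at time t read straight off the frames. A minimal Python sketch, with the dimensionless constant taken as ~1 (as Taylor did) and R/t values that are only roughly representative of the famous published Trinity frame:

```python
# Classic blast-wave (Taylor) yield estimate from a single fireball film frame:
# E ~ rho * R^5 / t^2, with the dimensionless constant taken as ~1.
# The radius/time below are illustrative values of roughly the right size for
# the widely reproduced Trinity frame; they are not measurements of mine.

RHO_AIR  = 1.2        # ambient air density, kg/m^3
KT_TNT_J = 4.184e12   # joules per kiloton of TNT

def yield_from_frame(radius_m, time_s):
    """Estimate explosive yield in kilotons of TNT from fireball radius at time t."""
    energy_j = RHO_AIR * radius_m**5 / time_s**2
    return energy_j / KT_TNT_J

print(f"~{yield_from_frame(110.0, 0.016):.0f} kt")   # prints roughly 18 kt
```

That lands in the right ballpark for Trinity's roughly 20 kt, and since R enters as the fifth power, a small error in reading the radius off a degraded film becomes a large yield error, which is presumably part of why the restoration effort pays off.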

Frankly, I'd like to see an end to the nuke test ban so we can see the computer-designed weapons that nuke designer Ted Taylor hinted at, which I quoted here:

https://www.rocketryforum.com/threads/llnl-nuclear-test-films-on-youtube.145572/#post-1781285
 
What Inertial Confinement Fusion techniques have in common with thermonuclear weapons:

Hohlraum

https://en.wikipedia.org/wiki/Hohlraum

Inertial confinement fusion

The indirect drive approach to inertial confinement fusion is as follows: the fusion fuel capsule is held inside a cylindrical hohlraum. The radiation source (e.g., laser) is pointed at the interior of the hohlraum rather than at the capsule itself. The hohlraum absorbs and re-radiates the energy as X-rays, a process known as indirect drive. The advantage to this approach, compared to direct drive, is that high mode structures from the laser spot are smoothed out when the energy is re-radiated from the hohlraum walls. The disadvantage to this approach is that low mode asymmetries are harder to control. It is important to be able to control both high mode and low mode asymmetries to achieve a uniform implosion.

The X-ray intensity around the capsule must be very symmetrical to avoid hydrodynamic instabilities during compression. Earlier designs had radiators at the ends of the hohlraum, but it proved difficult to maintain adequate X-ray symmetry with this geometry. By the end of the 1990s, target physicists developed a new family of designs in which the ion beams are absorbed in the hohlraum walls, so that X-rays are radiated from a large fraction of the solid angle surrounding the capsule. With a judicious choice of absorbing materials, this arrangement, referred to as a "distributed-radiator" target, gives better X-ray symmetry and target gain in simulations than earlier designs.[1]

Nuclear weapon design

The term hohlraum is also used to describe the casing of a thermonuclear bomb following the Teller-Ulam design. The casing's purpose is to contain and focus the energy of the primary (fission) stage in order to implode the secondary (fusion) stage.
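To put numbers on the "low mode" vs "high mode" asymmetry language in that excerpt: ICF drive symmetry is usually described in terms of Legendre modes of the X-ray flux over the capsule, with low modes being large-scale pole/waist imbalance and high modes the fine ripple from individual beam spots. Here's a small Python/NumPy sketch with a completely made-up flux profile, just to show what the decomposition looks like:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Decompose a (made-up) X-ray flux profile on a capsule into Legendre modes.
# Low modes (e.g. P2, P4) = large-scale pole/waist imbalance;
# high modes (e.g. P8 and up) = fine-scale ripple from individual laser spots.

mu = np.linspace(-1.0, 1.0, 2001)                    # mu = cos(theta)
flux = (1.0
        + 0.05 * L.legval(mu, [0, 0, 1])             # 5% P2 content, purely illustrative
        + 0.01 * L.legval(mu, [0] * 8 + [1]))        # 1% P8 content, purely illustrative

def legendre_coeff(f, mu, l):
    """a_l = (2l+1)/2 * integral of f(mu) * P_l(mu) over mu in [-1, 1] (trapezoid rule)."""
    pl = L.legval(mu, [0.0] * l + [1.0])
    integrand = f * pl
    dmu = mu[1] - mu[0]
    return (2 * l + 1) / 2 * np.sum((integrand[:-1] + integrand[1:]) / 2) * dmu

for l in (0, 2, 4, 8):
    print(f"a_{l} = {legendre_coeff(flux, mu, l):+.4f}")   # recovers ~1.00, 0.05, 0.00, 0.01
```

Symmetry tuning in indirect drive is, roughly speaking, the business of pushing coefficients like these toward zero via beam pointing, pulse shaping, and hohlraum geometry.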
 
Just found this. At least the French are honest about something that would be obvious to any adversary.

Laser Mégajoule

https://en.wikipedia.org/wiki/Laser_Mégajoule

Laser Mégajoule (LMJ) is a large laser-based inertial confinement fusion (ICF) research device near Bordeaux, France, built by the French nuclear science directorate, Commissariat à l'Énergie Atomique (CEA).

Laser Mégajoule plans to deliver over 1 MJ of laser energy to its targets, compressing them to about 100 times the density of lead. It is about half as energetic as its US counterpart, the National Ignition Facility (NIF). Laser Mégajoule is the largest ICF experiment outside the US.

Laser Mégajoule's primary task will be refining fusion calculations for France's own nuclear weapons.[1] A portion of the system's time is set aside for materials science experiments.[2]

Construction of the LMJ took 15 years and cost 3 billion Euros.[3] It was declared operational on 23 October 2014, when it ran its first set of nuclear weapon related experiments.
 
HPE and AMD power complex scientific discovery in world’s fastest supercomputer for U.S. Department of Energy’s (DOE) National Nuclear Security Administration (NNSA)
MARCH 4, 2020

DOE/NNSA’s El Capitan reaches 2 exaflops - more powerful than the top 200 fastest supercomputers in the world combined - to support nation’s nuclear security missions

https://www.hpe.com/us/en/newsroom/...nal-nuclear-security-administration-nnsa.html

Strengthening Nation’s Nuclear Stockpile, Security and Defense with Exascale Technologies

“As an industry and as a nation, we have achieved a major milestone in computing. HPE is honored to support the U.S. Department of Energy and Lawrence Livermore National Laboratory in a critical strategic mission to advance the United States’ position in security and defense,” said Peter Ungaro, senior vice president and general manager, HPC and Mission Critical Solutions (MCS), at HPE. “The computing power and capabilities of this system represent a new era of innovation that will unlock solutions to society’s most complex issues and answer questions we never thought were possible.”

HPE’s Cray Shasta technologies, which were built from the ground up to support a diverse set of processor and accelerator technologies to meet new levels of performance and scalability, will enable the DOE’s El Capitan to meet NNSA requirements, which include the NNSA’s Life Extension Program (LEP), a critical part of stockpile stewardship that aims to modernize aging weapons in the U.S. nuclear stockpile so that they remain safe, secure, and effective.
LLNL is managing the new system for the NNSA and has developed emerging techniques that allow researchers to create faster, more accurate models for primary missions across stockpile modernization and inertial confinement fusion (ICF), a key aspect of stockpile stewardship.

Systems like the DOE’s El Capitan are ushering in a new class of supercomputing with exascale-class systems that are 1,000X faster than the previous generation petascale systems first introduced 12 years ago.

The new performance record of 2 exaflops (2,000 petaflops) will be more powerful than the Top 200 fastest supercomputers in the world combined and is an increase of more than 30% from initially projected estimates calculated seven months ago. This was made possible by a new partnership between HPE, AMD and the U.S. DOE to combine HPE’s Cray Shasta system and Slingshot interconnect, a specialized HPC networking solution, with next-generation AMD EPYC™ processors and next-generation AMD Radeon™ Instinct GPUs. The decision to choose this architecture was based on NNSA’s strategic, mission critical requirements.
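The unit arithmetic in that press release is easy to sanity-check; a two-line Python check of the SI prefixes (the "top 200 combined" claim obviously can't be verified this way):

```python
# SI-prefix sanity check of the press release's numbers; nothing new is claimed here.
PETA, EXA = 1e15, 1e18

print(2 * EXA / PETA)   # 2000.0 -> "2 exaflops (2,000 petaflops)"
print(EXA / PETA)       # 1000.0 -> exascale is "1,000X faster" than petascale
```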

El Capitan will be DOE’s third exascale-class supercomputer, following Argonne National Laboratory’s “Aurora” and Oak Ridge National Laboratory’s “Frontier” system. All three DOE exascale supercomputers will be built by Cray utilizing their Shasta architecture, Slingshot interconnect and new software platform.




 
About using supercomputers to simulate nukes:

Los Alamos National Lab
National Security Science
But Will It Work? - The Nuclear Weapon Stockpile Stewardship Program

https://www.lanl.gov/science/NSS/pdf/NSS_April_2013.pdf
Excerpt:

The nuclear weapons in the U.S. stockpile were designed and built to be replaced with new designs and builds every 10 to 15 years, or sooner if the U.S. defense strategies required it. Now these weapons have lived beyond their expected lifespans, and their components continue to age. Over time, plastics become brittle and break down, releasing gases. Electronic systems based on antiquated technologies like vacuum tubes and plastic-coated copper wire corrode. Adhesives bonding key components weaken. Metal coatings naturally deteriorate. Metals and metal joints corrode and weaken. Vibrations from transportation and deployment further impact components. Many of these issues are faced by every car owner. With years of environmental changes in temperature, pressure, and humidity, and in the presence of radioactive elements like plutonium and uranium, components degrade and may eventually fail to work.

In short, aging materials change and in so doing, change their physical, chemical, and mechanical properties; aged materials no longer behave as they once did. New behaviors can be unpredictable. Nuclear weapons must work, and perfectly, only if the president of the United States has authorized their use— and they must never work if the president has not authorized their use. Can an aging stockpile meet these demanding requirements?

[snip]

Real-world trial-and-error experimentation is sometimes called the “engineering approach” because it uses hands-on building and testing of theoretical concepts. This was largely the approach by which nuclear weapons were invented and by which they evolved for 47 years. Like Japanese swordsmiths, weapons scientists hypothesized, theorized, and experimented, using this and trying that with different materials, processes, and designs in very successful efforts to meet the requirements established by the U.S. military. For the job at hand, this level of knowledge was sufficient and was codified in weapon simulation computer programs. A deeper understanding was certainly desired and sought out, but it was not necessary in order to accomplish the Cold War mission. Regardless, better tools were needed to better understand thermonuclear weapons.

Still, scientists’ ability to explain and predict weapons phenomena got stronger, and an amazing body of knowledge grew, so they could say with some confidence that they understood some of the weapon physics. But by no means did they claim to understand all of what they did and saw. In fact, it was not uncommon for test results to contradict scientists’ best predictions and call into question what they thought they understood.

When reality did not match their predictions, the scientists were often forced to adjust their calculations until these matched test results, but without really understanding why these adjustments worked. It was like the early days of radio, when the scientists and engineers understood the principles of the device but lacked the predictive power to design the radios to respond exactly. Radio response was “tuned” (often by turning a knob) to achieve the final high-precision match required so that radio transmitters and receivers could work together. Indeed, the practice by nuclear scientists of massaging calculations until they fit their real-world test results was called “turning knobs.” The knobs were embedded in the weapon simulation computer codes. Thus, testing was done not only to see if a weapon worked, but also to try and eliminate particularly troubling knobs by gaining a better understanding of the weapon physics.

But then real-world underground testing in Nevada and deployment of new systems went away. What would take their place? Would the military retain confidence in systems aboard the submarines and planes and in missile silos? Could the president be assured that systems were safe, secure, and effective? Would allies and adversaries be convinced of the effectiveness of America’s nuclear deterrent?

[snip]

Computing science in the early 1990s was not up to the task. This was the era of floppy discs, Apple’s Newton and Macintosh computers, and Windows 95. In fact, the notion of being able to build computers that had the power, speed, and memory needed to accurately model and simulate a nuclear weapon explosion from first principles, much less in 3D, was thought nearly impossible by many.

An initial calculation done at this time by Lawrence Livermore National Laboratory concluded that more than 100 teraflops (100 trillion flops) would be required to execute the high-resolution 3D simulations, with sufficient accuracy, for the Stockpile Stewardship Program (SSP). But at the time, Livermore’s most powerful computer provided only 13.7 gigaflops (13.7 billion flops). This meant that its computing power would need a 7,000-fold increase in less than a decade, implying a technological growth rate many times that given by Moore’s Law—the industry standard for predicting increases in computing power. To run the models and simulations then envisioned, the laboratories needed significantly larger, faster, and more powerful computers than Moore’s Law allowed.
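The "7,000-fold" figure and the Moore's Law comparison are easy to reproduce. A quick Python sketch, assuming the usual 18-to-24-month doubling period for Moore's Law:

```python
# Reproduce the article's arithmetic: required speedup vs. what Moore's-Law-style
# doubling would deliver over a decade. The 18- and 24-month doubling periods are
# the commonly quoted range, assumed here for illustration.

target_flops  = 100e12    # 100 teraflops called for by the SSP-class 3D simulations
current_flops = 13.7e9    # Livermore's fastest machine at the time, 13.7 gigaflops

speedup_needed = target_flops / current_flops
print(f"required speedup: ~{speedup_needed:,.0f}x")              # ~7,299x

for months_per_doubling in (18, 24):
    growth = 2 ** (10 * 12 / months_per_doubling)                # growth over 10 years
    print(f"{months_per_doubling}-month doubling over a decade: ~{growth:,.0f}x")
```

Even the optimistic 18-month doubling delivers only about a factor of 100 in a decade, a factor of roughly 70 short of what was needed.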

[snip]

Incredibly, in 2005, the 100-teraflops goal was reached. But the SSP’s supercomputing needs were vastly underestimated. Consequently, in 2008, Los Alamos unveiled the world’s first petaflop (1.0 million billion flops) supercomputer, Roadrunner. Since Roadrunner, another petascale machine, Cielo, has come online at Los Alamos, and Livermore has stood up Sequoia at almost 16 petaflops. Los Alamos is proceeding with Trinity, a 30- to 40-petaflop machine. The machines at Los Alamos and at Livermore are working 24 hours a day, 7 days a week, 365 days a year. The demand for time on these and other machines by scientists is never-ending.

The SSP has successfully resolved problems related to aging, even problems in the original designs and manufacturing of some weapons, and has enabled corrections. It has made the life-extension programs for weapons a success. It has successfully resolved the need for some knobs in the weapon codes. It has provided simulations of nuclear weapons and their subsystems in 3D. It is important to understand that these 3D simulations are often referred to as “hero calculations,” given the amount of time (weeks, months, and in some cases, years) required to set up the code and run the calculation, even on petascale machines. Supercomputers and the weapon codes have played a key role in all these successes. They will become even more important as the stockpile continues to age and the nation continues its moratorium on conducting real-world, full-scale tests. Supercomputer simulations have allowed scientists and engineers to discover phenomena previously hidden from real-world experimentation, making it necessary to ask more questions, change some theories, and explore new directions. As a result, the need for high-resolution 3D simulations is clear. Indeed, some weapons issues can be accurately addressed only in 3D.


-----------

Summit (supercomputer) - 148,600 teraflops

https://en.wikipedia.org/wiki/Summit_(supercomputer)
Summit or OLCF-4 is a supercomputer developed by IBM for use at Oak Ridge National Laboratory, capable of 200 petaFLOPS, making it the fastest supercomputer in the world from November 2018 to June 2020.[5][6] Its current LINPACK benchmark is clocked at 148.6 petaFLOPS.[7] As of November 2019, the supercomputer is also the 5th most energy efficient in the world with a measured power efficiency of 14.668 gigaFLOPS/watt.[8] Summit is the first supercomputer to reach exaflop (a quintillion operations per second) speed, achieving 1.88 exaflops during a genomic analysis and is expected to reach 3.3 exaflops using mixed precision calculations.[9]
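One number implied by those two figures: dividing the LINPACK result by the quoted efficiency gives a rough power draw during that benchmark. The two values likely come from different benchmark lists, so treat this as order-of-magnitude only:

```python
# Back-of-envelope power implied by the quoted Summit figures: LINPACK Rmax divided
# by the measured efficiency. Treat the result as order-of-magnitude only.

rmax_flops         = 148.6e15   # 148.6 petaFLOPS (LINPACK)
efficiency_flops_w = 14.668e9   # 14.668 gigaFLOPS per watt

print(f"implied power draw: ~{rmax_flops / efficiency_flops_w / 1e6:.1f} MW")   # ~10.1 MW
```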



Sierra (supercomputer) - 94,640 teraflops

https://en.wikipedia.org/wiki/Sierra_(supercomputer)
Sierra or ATS-2 is a supercomputer built for the Lawrence Livermore National Laboratory for use by the National Nuclear Security Administration as the second Advanced Technology System. It is primarily used for predictive applications in stockpile stewardship, helping to assure the safety, reliability and effectiveness of the United States' nuclear weapons.

Sierra is very similar in architecture to the Summit supercomputer built for the Oak Ridge National Laboratory. The Sierra system uses IBM POWER9 CPUs in conjunction with Nvidia Tesla V100 GPUs.[1][3] The nodes in Sierra are Witherspoon S922LC OpenPOWER servers with two GPUs per CPU and four GPUs per node. These nodes are connected with EDR InfiniBand.

 