The Whirlwind computer played an incredibly important role in advancing computer science, and its technology was later used in the huge SAGE system:
The Story of Whirlwind Computer Operator Joe Thompson
IBM Sage Computer Ad, 1960
The Futuristic Cold War Era SAGE Air Defense Bunkers Looked Right Out Of A Kubrick Film
The sci-fi-esque bunkers were scattered across North America and sat ready to fend off nuclear-armed Russian bombers.
MARCH 14, 2019
Once unimaginable, transistors consisting only of several-atom clusters or even single atoms promise to become the building blocks of a new generation of computers with unparalleled memory and processing power. But to realize the full potential of these tiny transistors—miniature electrical on-off switches—researchers must find a way to make many copies of these notoriously difficult-to-fabricate components.
Now, researchers at the National Institute of Standards and Technology (NIST) and their colleagues at the University of Maryland have developed a step-by-step recipe to produce the atomic-scale devices. Using these instructions, the NIST-led team has become only the second in the world to construct a single-atom transistor and the first to fabricate a series of single electron transistors with atom-scale control over the devices' geometry.
The scientists demonstrated that they could precisely adjust the rate at which individual electrons flow through a physical gap or electrical barrier in their transistor—even though classical physics would forbid the electrons from doing so because they lack enough energy. That strictly quantum phenomenon, known as quantum tunneling, only becomes important when gaps are extremely tiny, such as in the miniature transistors. Precise control over quantum tunneling is key because it enables the transistors to become "entangled" or interlinked in a way only possible through quantum mechanics and opens new possibilities for creating quantum bits (qubits) that could be used in quantum computing.
In a big shift to their manufacturing operations – and a big political win domestically – TSMC has announced that the company will be building a new, high-end fab in Arizona. The facility, set to come online in 2024, will utilize TSMC’s soon-to-be-deployed 5nm process, with the ability to handle 20,000 wafers a month. And with a final price tag on the facility expected to be $12 billion, this would make it one of the most expensive fabs ever built in the United States.
Operating over a dozen fabs across the globe, TSMC is responsible for a significant share of global logic chip production, particularly with leading-edge and near-leading-edge processes. The company has become perhaps the biggest winner amidst the gradual winnowing of fabs over the past two decades, as manufacturer after manufacturer has dropped out, consolidating orders among the remaining fabs. And with GlobalFoundries dropping out of the race for cutting-edge manufacturing nodes, TSMC is one of only three companies globally that are developing leading-edge process nodes – and one of only two that are pure-play foundries.
This success has become both a boon and a liability for TSMC. Along with Korean rival Samsung, the two companies have seen massive growth in revenues and profits as they have become the last fabs standing. As a result, TSMC serves customers both locally and globally, particularly the United States and China, the two of which are not enjoying the best of relations right now. This leaves TSMC trapped in the middle of matters – both figuratively and literally – as China needs TSMC to produce leading-edge chips, and the United States is now increasingly reliant on TSMC as well following GlobalFoundries’ retreat.
As a result, the Taiwan Semiconductor Manufacturing Company is going to do something it’s never done before, building a near-leading-edge fab in the US, outside of its home base of Taiwan. The new facility, set to be constructed in Arizona, will use the company’s 5nm process, which is currently TSMC’s most advanced manufacturing process. And while this will no longer be the case by the time it comes online in 2024, when 3nm processes are likely to be available, it would still make the Arizona facility among the most advanced fabs in the world, and by far the most advanced contract fab in the United States.
The Arizona facility would be joining TSMC’s other US fab, which is located in Camas, Washington. It, like TSMC’s other non-Taiwanese-fabs, is based around older technologies, with the Camas fab in particular focusing on building flash products using relatively large process nodes (350nm to 160nm). As a result, the Arizona fab represents a significant shift for TSMC; it’s not the first US fab for the company, but it’s the first time TSMC has built such an advanced fab in another nation.
All told, the Arizona fab is set to be a medium-sized facility – a “megafab” in TSMC parlance – despite its use of an advanced manufacturing node. The 20,000 wafers per month throughput of the fab is well below that of TSMC’s largest “gigafabs” in Taiwan, which can move more than 100,000 wafers per month. As a result, while the fab will add to TSMC’s 5nm capacity, it won’t become a massive part of that capacity. Though with an expected price tag of $12 billion, it will still be a very expensive facility to build.
According to TSMC, the primary impetus for building the fab – and especially for building it in the United States instead of Taiwan – is specifically to have high-end production capacity within the United States. With GlobalFoundries dropping out of the race for leading-edge nodes, the US government and other sensitive fabless chip designers are in need of another leading-edge facility within the US to build their chips. Given their location, TSMC’s Taiwanese fabs are seen as a security risk, and the US would prefer to be self-reliant rather than relying on a foreign partner – a concern that’s been magnified by the current coronavirus pandemic and the supply chain issues it has created.
Magnetic tape and hard disk drives hold much of the world’s archival data. Compared with other memory and storage technologies, tape and disk drives cost less and are more reliable. They’re also nonvolatile, meaning they don’t require a constant power supply to preserve data. Cultural institutions, financial firms, government agencies, and film companies have relied on these technologies for decades, and will continue to do so far into the future.
But archivists may soon have another option—using an extremely fast laser to write data into a 2-millimeter-thick piece of glass, roughly the size of a Post-it note, where that information can remain essentially forever.
This experimental form of optical data storage was demonstrated in 2013 by researchers at the University of Southampton in England. Soon after, that group began working with engineers at Microsoft Research in an effort called Project Silica. Last November, Microsoft completed its first proof of concept by writing the 1978 film Superman on a single small piece of glass and retrieving it.
With this method, researchers could theoretically store up to 360 terabytes of data on a disc the size of a DVD. For comparison, Panasonic aims to someday fit 1 TB on conventional optical discs, while Seagate and Western Digital are shooting for 50- to 60-TB hard disk drives by 2026.
As we celebrate the 50th anniversary of Stanley Kubrick’s 2001: A Space Odyssey, some reflections on the most famous character in the film: HAL, the on-board computer.
One of the most dramatic scenes in 2001 occurs near the end, where astronaut Dave Bowman manages to get back into the ship and systematically dismantles HAL’s higher mental functions, in effect giving HAL a lobotomy. As HAL’s circuits are disconnected, “he” recounts his creation in Urbana, Illinois. Why Urbana?
In 1968, the University of Illinois at Urbana-Champaign was at the center of research into what we now call “supercomputers.” There, Professor Daniel Slotnick was designing a computer that had not one but 64 separate processors, wired in parallel. It was designed to attack problems that ordinary, single-processor computers could not handle.
The “ILLIAC-IV” (Illinois Automatic Computer, #4) was later installed at the NASA Ames Research Center in Mountain View, California, where it did aerodynamic calculations. Like the initial optimism for speech recognition, Slotnick’s ideas for a parallel computer did not bear fruit until many decades later, but in 1968 his work had gotten a lot of attention.
Finally, as Dave disconnects HAL’s circuits, the computer begins to sing a song: “Daisy Bell,” composed in 1892 by Harry Dacre and known to us all as “A Bicycle Built for Two.” Why that song? In 1961, a team of researchers at Bell Telephone Laboratories in Murray Hill, New Jersey, programmed an IBM 7094 computer to sing the song. The program was the beginning of computer-synthesized speech and music. The Bell Labs scientists programmed the 7094 using punched cards.
Vortex lasers could help photons carry more data, a new study finds.
Modern optical telecommunications encode data in multiple aspects of light, such as its brightness and color. In order to store even more data in light, scientists are exploring other properties of light that have proven more difficult to control.
One promising feature of light under investigation has to do with momentum. Light has momentum, just like a physical item moving through space, even though it does not have mass. As such, when light shines on an object, it exerts a force. Whereas the linear momentum of light exerts a push in the direction that light is moving, angular momentum of light exerts torque.
A beam of light can possess two kinds of angular momentum. The spin angular momentum of a ray of light can make objects it shines on rotate in place, whereas its orbital angular momentum can make objects rotate around the center of the ray. A beam of light that carries orbital angular momentum resembles a vortex, moving through space with a spiraling pattern like a corkscrew. Whereas a conventional light beam is brightest at its center, vortex beams have ringlike shapes that are dark in the center, due to how some of the waves making up vortex beams can interfere with one another.
A potentially extraordinarily useful property of vortex beams is that they do not interfere with each other if they all possess different twisting patterns. This means a theoretically infinite number of vortex beams can get overlaid on top of each other to carry an unlimited number of data streams at the same time.
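The orthogonality of vortex modes described above can be checked numerically. The sketch below is my own illustration, not code from the study: a vortex beam with integer topological charge l carries an azimuthal phase exp(i·l·φ), and modes with different l have zero overlap when integrated around the beam axis, which is why they can carry independent data streams.

```python
import numpy as np

# Sample the azimuthal angle phi around the beam axis.
phi = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)

def mode(l):
    """Azimuthal phase profile of a vortex mode with topological charge l."""
    return np.exp(1j * l * phi)

def overlap(l1, l2):
    """Normalized inner product of two vortex modes over the circle."""
    return np.mean(np.conj(mode(l1)) * mode(l2))

print(abs(overlap(1, 1)))  # same charge: overlap is 1
print(abs(overlap(1, 3)))  # different charges: overlap is ~0
```

The overlap integral of exp(i·(l2−l1)·φ) over a full turn vanishes whenever l1 ≠ l2, so in principle arbitrarily many charges can be multiplexed on one beam.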
However, until now, all microchip-scale vortex lasers firing at telecommunications wavelengths were each limited to transmitting a single orbital angular momentum pattern. At the same time, existing detectors for vortex beams relied on complex filtering techniques using bulky components, which prevented them from being integrated on chips and made them incompatible with most practical optical telecommunications approaches.
Now scientists at the University of Pennsylvania and their colleagues have made breakthroughs with both vortex lasers and vortex beam detectors. They detailed their findings in two studies in the 15 May issue of the journal Science.
Q: I read your statement about how light has momentum despite the fact that it has no mass. My question to you is regarding gravity in black holes. It is said that light can’t escape the enormous gravitational force in black holes; however, is it not true that gravity is directly proportional to the object’s MASS and inversely proportional to the distance between the two objects (Newtonian, I think)? If so, light has no mass. So how would light be affected by this phenomenon? Thanks for your enthusiasm in physics.
Dan Sweeney (age 16), Thayer Academy, Braintree MA, USA
A: The use of words can cause a lot of confusion. Unfortunately, the word "mass" has been used in two different ways in physics. One was the way Einstein used it in E = mc², where mass is really just the same thing as energy (E) but measured in different units. This is the same "m" that you multiply velocity by to find momentum (p), and thus is sometimes called the inertial mass. It's also the mass that provides the source of gravitational effects. Light has this "m" because it has energy. So it is indeed affected by gravity – not just in black holes but in all sorts of less extreme situations too. In fact, the first important confirmation of General Relativity came in 1919, when it was found that light from stars bends as it goes by the Sun.
The other way "mass" is often used, especially in recent years, is to mean "rest mass" or "invariant mass", which is √(E² − p²c²)/c². This is invariant because it doesn't change whether you describe an object from the point of view of someone who says it's at rest or of someone who says it's moving. Obviously that's a good type of "mass" to give when you want to make a list of masses of particles. For a light beam traveling in a single direction, E = pc, so this "m" is zero. There is no point of view from which the light is standing still!
However, once you consider light traveling in a variety of directions, the E's from the different parts just add up to give the total E, but the vector p's don't. In fact the total p can be zero if there are beams traveling opposite ways. So for many purposes the older definition of m (the inertial mass) is more convenient than the invariant particle mass, since it's the inertial mass that's just the sum of the inertial masses of the parts. For light moving equally in all directions, like the light bouncing around inside a star, total p is zero, so both definitions just give m = E/c².
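The two-photon point above can be made concrete with a few lines of arithmetic. This is a minimal sketch of my own (example numbers, units with c = 1 so E = |p| for a photon), showing that a single photon has zero invariant mass while a system of two opposite photons does not:

```python
import math

def invariant_mass(E, px, py, pz):
    """Invariant mass m = sqrt(E^2 - |p|^2), in units where c = 1."""
    return math.sqrt(max(E**2 - (px**2 + py**2 + pz**2), 0.0))

# A single photon of energy 2 (arbitrary units) moving along +x: E = |p|.
print(invariant_mass(2.0, 2.0, 0.0, 0.0))  # 0.0 -- a photon is massless

# Two photons of energy 1 moving in opposite directions: the energies add,
# the vector momenta cancel, so the *system* has nonzero invariant mass.
print(invariant_mass(1.0 + 1.0, 1.0 - 1.0, 0.0, 0.0))  # 2.0
```

The `max(..., 0.0)` guard only protects against floating-point rounding pushing a true zero slightly negative; physically, E² − |p|² is never negative for a real system.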