Not for rocketry, but still cool electronics

Besides the fact that AMD CPUs are once again the best choice for price/performance, despite Intel's concerted efforts to destroy them via illegal behavior (an effort which nearly succeeded), it is BECAUSE of that behavior that I will NEVER AGAIN knowingly buy a product with "Intel Inside". Period. Watch this. It's mind blowing.

Intel - Anti-Competitive, Anti-Consumer, Anti-Technology
Jul 26, 2017

[video=youtube;osSMJRyxG0k]https://www.youtube.com/watch?v=osSMJRyxG0k[/video]
 
Look at what AMD sent this reviewer! The same was sent to other YouTube reviewers, including one with only 35k subscribers:

[video=youtube;s0H65usXsuU]https://www.youtube.com/watch?v=s0H65usXsuU[/video]

Threadripper CPU installation onto mobo near end of video:

[video=youtube;yReJcUbaIow]https://www.youtube.com/watch?v=yReJcUbaIow[/video]
 
Equally intriguing are their new high end workstation graphics cards, which have an onboard SSD for extremely large models, texture caching, etc.

As a sometimes gamer, the idea of having all textures pre-cached on load opens up some fairly exciting open-world possibilities.
 
From what I've seen so far, anyone who makes a living from video content creation, or who uses CAD software that can exploit as many cores and threads as a system makes available, would be nuts not to build a Threadripper system. Gaming, not so much, because games aren't written to use more threads than the former Intel monopoly made available on the desktop and, as you point out, the graphics card is where most of the work is done. However, with the high-end desktop CPUs that AMD is now offering, that will change with time. On price/performance alone, AMD deserves to take the vast majority of Intel's market in every sector right now except mobile. AMD mobile processors aren't due until early next year, IIRC.

More coolness:

AMD unveiled something truly remarkable today – a server rack that has a total processing power of 1 PetaFLOPS. That’s 10 to the power of 15 floating point operations per second. Here’s the kicker though: a decade ago in 2007, a computer of the same power would have required roughly 6000 square feet of area and thousands of processors. A decade ago, this would have been one of the most powerful supercomputers on Earth, and today, it's a server rack.

The server rack, ahem, supercomputer, named Project 47, is powered by 20x EPYC 7601 32-core processors and around 80x Radeon Instinct GPUs. It supports around 10 TB of Samsung memory and 20x Mellanox 100G cards as well as 1 switch. All of this is fitted into a server rack that is roughly the height of 1.25 Lisa Su’s, with an energy efficiency of 30 GFLOPS per watt. That means the Project 47 supercomputer consumes around 33,333 watts of electricity. Project 47 will be available from Inventec and their principal distributor AMAX sometime in Q4 of this year.

Back in 2007, you would have found the same power in a supercomputer called the IBM Roadrunner. This was a supercomputer project that was once the most powerful, well, supercomputer of its time, built by AMD and IBM for the Los Alamos National Laboratory. The cluster had 696 racks spanning an area of 6000 square feet and consumed 2,350,000 watts of electricity. The cluster consisted primarily of around 64,000 dual-core Opteron CPUs and some accelerators.

So basically in a little over 10 years, AMD has managed to make a system that consumes 98% less power and takes up 99.93% less space. We are not yet sure how much Project 47 will cost, but we are pretty sure it will be less than the US $100 Million cost of the original Roadrunner. If that isn’t the epitome of modern computational advances, I don’t know what is.
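
Those percentages check out, by the way. A quick back-of-the-envelope in Python (the wattage, efficiency, and floor-space figures are the article's; the ~4 sq ft footprint for a single rack is my assumption):

[code]
# Back-of-the-envelope check of the Project 47 vs. Roadrunner figures.
p47_flops = 1e15                  # 1 PetaFLOPS
p47_watts = p47_flops / 30e9      # at 30 GFLOPS per watt -> ~33,333 W

roadrunner_watts = 2_350_000
roadrunner_sqft = 6_000
p47_sqft = 4                      # assumed footprint of one rack

print(f"Project 47 draw: {p47_watts:,.0f} W")
print(f"Power reduction: {1 - p47_watts / roadrunner_watts:.1%}")   # ~98.6%
print(f"Space reduction: {1 - p47_sqft / roadrunner_sqft:.2%}")     # ~99.93%
[/code]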


project-47-server-rack.jpg
 
Why Intel is toast on price/performance until they design their own Infinity Fabric (multi-die) tech:

AMD MCM v. Monolithic Cost Savings

AMD-EPYC-MCM-v-Monolithic-Cost-Delta.jpg
 
Didn't know about the laser destruction part:

All semiconductor microprocessors are manufactured on a silicon wafer. Simply put, each silicon wafer goes through a long photochemical process to have circuitry photolithographically etched onto it, to convert that purified sand into a functional electrical device. It’s a complex multi-step process that takes months from start to finish. Tiny defects in the wafer inevitably occur, and the end-product is never 100% perfect.

Not all “chips” on a wafer end up being equal. Some, especially those in the periphery, end up with the shorter end of the stick. Chips closer towards the center usually come out the best. If we’re talking about Vega, then these cream-of-the-crop chips end up as your Vega 64s. If we’re talking about NVIDIA’s GP102, then these end up in the Titan Xp cards. This is also true for CPUs. AMD only uses the best Ryzen dies for its high-end Threadripper CPUs.

The best chips go into the “best” products. Slightly defective — perhaps they clock lower or require more voltage, or have some dysfunctional shaders, and so on and so forth — but otherwise fully functional chips go into making the cut-down variant of the same product. In Vega’s case that’s the Vega 56. In GP102’s case that’s the GTX 1080 Ti. This is also why not all Fury or 290 cards could be unlocked to become R9 Fury X or 290X cards.

Chip makers salvage these slightly defective die to maximize the number of usable chips that they can sell. If you’re going to chuck out every defective die on the wafer you won’t end up with much left to sell. CPU & GPU makers usually go through an extra step to make sure that these cut-down chips stay cut down, and that’s by lasering off the unused hardware. NVIDIA and Intel have consistently done this. Things have been a little more lenient on the AMD side. Over the years we’ve seen cut-down chips that had gone through the laser treatment and some that hadn’t. Fiji is a good example of a GPU that hadn’t and is why R9 Furys were unlockable to R9 Fury Xs.


xwafer-csp.jpg.pagespeed.ic_.lKpmnqMgiz.jpg
 
It has been said that silicon is the most expensive real estate on the planet. They want to get the most out of each wafer.
The component density is simply amazing these days. Ryzen: 4.8 billion transistors per 8-core "Zeppelin" die; die size: 192 mm². I'd love to go back to the ENIAC team and show them that. They'd pass out.

According to an interview I watched with people involved in the 6800 and 68000 series, this was the last CPU that Motorola laid out by hand:

b7af2f481d60c6661f80f0aa222dacc0.jpg


The Motorola 6809 ("sixty-eight-oh-nine") is an 8-bit microprocessor CPU with some 16-bit features from Motorola. It was designed by Terry Ritter and Joel Boney and introduced in 1978. It was a major advance over both its predecessor, the Motorola 6800, and the related MOS Technology 6502. Among the systems to use the 6809 are the Dragon home computers, TRS-80 Color Computer, the Vectrex home console, and early 1980s arcade machines including Defender, Robotron: 2084, Joust, and Gyruss.

The 6809 was the first microprocessor able to run fully position-independent and fully reentrant code without the use of difficult programming tricks. It also contained one of the first hardware implementations of a multiplication instruction in an MPU, full 16-bit arithmetic, and an especially fast interrupt system.


M6809 - 9,000 transistors, 20.09 mm² die = 448 transistors per mm²

Ryzen Zeppelin - 4.8 billion transistors, 192 mm² die = 25 MILLION transistors per mm²
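
Running those two density figures side by side (numbers straight from the lines above):

[code]
# Transistor density, M6809 (1978) vs. Ryzen "Zeppelin" (2017).
m6809_density = 9_000 / 20.09       # ~448 transistors per mm²
zeppelin_density = 4.8e9 / 192      # 25,000,000 transistors per mm²

print(f"M6809:    {m6809_density:,.0f} /mm²")
print(f"Zeppelin: {zeppelin_density:,.0f} /mm²")
print(f"Ratio:    ~{zeppelin_density / m6809_density:,.0f}x in ~40 years")   # ~56,000x
[/code]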
 
Introducing Handle

https://spectrum.ieee.org/automaton/robotics/humanoids/boston-dynamics-handle-robot

[video=youtube;-7xvqQeoA8c]https://www.youtube.com/watch?v=-7xvqQeoA8c[/video]

Tesla Model S Battery Teardown

Looks like there may be 30 modules in the battery from a junked Model S. They pay the junkyard $10k for the entire battery (price found in the comments) and charge $1,375 for each module: 30 x $1,375 = $41,250, and $41,250 - $10,000 = $31,250 potential profit for a day's work. Not bad.
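
For anyone checking my math, the whole business case fits in a few lines (module count and prices are from the video and its comments; labor, shipping, and the inevitable dud module are ignored):

[code]
modules = 30          # apparent module count in the pack, per the video
price_each = 1_375    # asking price per module on the sale page
pack_cost = 10_000    # paid to the junkyard for the whole battery

revenue = modules * price_each
print(f"Revenue: ${revenue:,}")               # $41,250
print(f"Profit:  ${revenue - pack_cost:,}")   # $31,250
[/code]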

[video=youtube;NpSrHZnCi-A]https://www.youtube.com/watch?v=NpSrHZnCi-A[/video]

The sale page for the modules:

https://www.evwest.com/catalog/product_info.php?products_id=463&osCsid=rp0cj2i33tp2j88tnj80dto4e5

Now all we need is for someone to strap rockets on that thing...

That is just mind bogglingly amazing...
 
The Inside Story of the Great Silicon Heist

https://www.wired.com/story/inside-story-of-the-great-silicon-heist

Excerpt:

When the vast majority of manufacturers reach the end of this process, their polysilicon is as much as 99.999999 percent pure, or “8n” in industry parlance. This means that for every 100 million silicon atoms, there is but a single atom’s worth of impurity. While that may sound impressive, such polysilicon is only pure enough for use in solar cells—relatively simple devices that don’t need to perform complex calculations, but rather just create electrical current by letting sunlight agitate the electrons in silicon atoms. (About 90 percent of all polysilicon ends up in solar cells.)

What the Mitsubishi plant in Alabama produces, by contrast, is 11n polysilicon, marred by just one impure atom per every 100 billion silicon atoms. This polysilicon, known as electronic-grade, is destined to be made into the wafers that serve as the canvases for microchips. Wafer makers melt down 11n polysilicon, spike it with ions like phosphorus or boron to amplify its conductivity, and reshape it into ingots of monocrystalline silicon. These ingots are then sliced into circular pieces about a millimeter thick, at which point they’re ready to be festooned with tiny circuits inside the clean rooms of Micron or Intel.
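
That "8n"/"11n" parlance is just a count of nines of purity, so converting a grade to an impurity ratio is a one-liner (a quick illustration of the notation, not from the article):

[code]
# "Nn" purity grade -> fraction of impure atoms per silicon atom.
def impurity_ratio(nines: int) -> float:
    return 10.0 ** -nines

for grade in (8, 11):
    print(f"{grade}n: one impure atom per {10 ** grade:,} silicon atoms")
# 8n:  one impure atom per 100,000,000 silicon atoms
# 11n: one impure atom per 100,000,000,000 silicon atoms
[/code]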

Mitsubishi’s facility on the Theodore Industrial Canal is one of fewer than a dozen plants worldwide that produce 11n polysilicon. “The barriers to getting to that sort of purity level are extremely high,” says Johannes Bernreuter, founder of a German research firm that covers the polysilicon market. “You have to imagine how many atoms there are in a cubic centimeter of polysilicon, and how only a few atoms of impurity in there can ruin everything.”

There has been no single key to Mitsubishi’s technical success with 11n polysilicon. Insiders credit not only the precision of the engineers who oversee the daily minutiae of the manufacturing process but also the attention that was paid to building the plant and its components to exacting specifications. Yet Mitsubishi’s meticulousness does not seem to have extended to the more elementary task of security.

[snip]

SHORT AND WELFORD soon confessed to the true and astonishing scope of their enterprise: They had stolen some 43 tons of electronic-grade polysilicon over five-plus years, earning more than $625,000 in the process. That sum, of course, was just a fraction of the 11n polysilicon’s actual worth had it been packaged and sold by Mitsubishi. What Short and Welford had done was akin to swiping some of the world’s finest 18-year-old Scotch from a distillery and then selling it to a liquor store as off-brand rye.
 
Consequences of a Bad LED
30 Sep 2017

https://darkerview.com/wordpress/?p=22685

"A bad indicator LED, a simple ten cent part brought the Keck 1 telescope to a stop this last week.

How can that be? Usually an indicator is just that, an indicator. While an LED may indicate a problem it is rarely the cause of the problem."


CD09-39-09273-DC.jpg
 
[video=youtube;m3afLNAE4_U]https://www.youtube.com/watch?v=m3afLNAE4_U[/video]
 
A tour of the 1959-era DEC PDP-1 at the Computer History Museum, with surprisingly impressive demo programs run by one of the restoration team members. Its 1024x1024 display was a radar display tube with fast light pen drawing capability. Four bad solder joints and three bad cards were found during restoration. 27,376.5 hours of operation were on the meter since the machine was built. The card and backplane wiring sides of the racks are shown.

[video=youtube;1EWQYAfuMYw]https://www.youtube.com/watch?v=1EWQYAfuMYw[/video]
 
A fascinating blog post about early-80s programming by Atari's celebrated coder Landon M. Dyer, who wrote what is widely regarded as by far the best and most arcade-accurate 8-bit system version of Donkey Kong. He programmed it with ZERO help from Nintendo, from which Atari had bought the rights to reproduce the game for their 8-bit computer line. He recreated it, entirely in assembly language of course, simply by playing the arcade machine and rebuilding from scratch what he saw and heard. Links to his source code are in another of his blog posts, found below.

Donkey Kong and Me
Posted on March 4, 2008 by landon

https://www.dadhacker.com/blog/?p=987

In the fall of 1981 I was going to college and became addicted to the Atari arcade games Centipede and Tempest. I knew a little bit about the hardware of the Atari 400/800 home computer systems, and decided to make a scary purchase on my student budget and buy an Atari 400 and a black and white TV (which was all I could afford). I messed around in Basic for a while, then bought an Assembler/Editor cartridge and started hacking away on a Centipede clone. I didn’t have much to go on in terms of seeing prior designs for games and had to figure everything out myself. Like most of the school problems, you really just have to work things out with a few hints from the textbooks and lectures.

Anyone who’s worked with that Asm/Editor cartridge probably bears the same deep emotional scars that I do. It was unbelievably slow, the debugger barely worked, and I had to remove comments and write in overlays of a couple K in order to squeeze in enough code. My game, which I called Myriapede, took about three months to write. I still have the original artwork and designs in my files; graph paper marked up with multi-colored pens, with the hexadecimal for the color assignments painstakingly translated on the side.

[I had to guess at colors. All I had was that cheap black and white TV, and I had to visit a friend's and use his color TV for a couple hours in order to fine-tune things].

The Atari Program Exchange (a captive publishing house) was holding a contest. The grand prize for the winning game was $25,000. I’d spent a semester of college blowing off most of my courses and doing almost nothing except work on Myriapede. I finished it with a week or two to spare and submitted to the contest.

A few weeks after I mailed Myriapede off to the contest, I got a letter from Atari that said (1) they were very impressed with the work, but (2) it looked to them like a substantial copy of Centipede (well, it was) and that they’d rejected it for that reason. The subtext was they would probably sue me if I tried to sell it anywhere else, too. I was crushed. I wound up going to a local user group and giving a couple copies of it away; I assume that it spread from there. I hear that people liked it (“best download of 1982” or something like that).

A few weeks later I got a call from Atari; they wanted to know if I was interested in interviewing for a job. I was practically vibrating with excitement. I flew out and did a loop, and made sure to show Myriapede to each interviewer; it was a conversation stopper every time. Until they saw it they kind of humored me (“yeah, okay, you wrote a game”), then when the game started up they started playing it, got distracted and (“ahem!”) had to be reminded that they were doing an interview! One of the guys I talked to was the author of Atari’s “official” Centipede cartridge. He said on the spot that my version was better than his.

A couple weeks later they gave me an offer. Atari moved my single roomful of stuff out to California. I flew out and spent two weeks in a hotel waiting for my things to arrive; Atari wanted me out there real bad.

Now, there were two popular arcade games that I simply could not stand; the first was Zaxxon, a stupid and repetitive scrolling shooter. The second was Donkey Kong — it was loud, pointless and annoying. Of course, the reason they wanted me in California was so I could work on a Donkey Kong cartridge. After a few moments of despair (and faking enthusiasm in front of my bosses) I gritted my teeth, got a roll of quarters and spent a lot of time in the little arcade that my hotel had, playing the DK machine there and getting to know it really, really well.

I should explain how Atari’s Arcade conversions group worked. Basically, Atari’s marketing folks would negotiate a license to ship GameCorp’s “Foobar Blaster” on a cartridge for the Atari Home Computer System. That was it. That was the entirety of the deal. We got ZERO help from the original developers of the games. No listings, no talking to the engineers, no design documents, nothing. In fact, we had to buy our own copy of the arcade machine and simply get good at the game (which was why I was playing it at the hotel — our copy of the game hadn’t even been delivered yet).

So I played about as much Donkey Kong as I could stand, and started fiddling around with ideas. I wrote a 25-30 page design document that broke out the work into modules, and estimated the work at five months (this was early November of 1982) and handed it to my boss, Ken, with some trepidation. Was it good enough? Would they send me packing for not being a real designer and games programmer?

“We’re totally blown away by that spec,” said Ken. I’d simply enumerated the objects in the game, written some pseudocode for some major game modules, and assumed that it was a starter for a real specification. But everyone else treated it like the whole thing. I just needed to code it up. That was kind of scary.

“Marketing wants it by Christmas,” said Ken. I had made a careful estimate, and came up with about 150 days of work. There was no way the game would happen in a couple of weeks, but the sense of pressure was clear. With nothing else to do (besides find an apartment and wait for my stuff to arrive), I began to spend almost every waking hour at work. I did my first ever all-nighter, cranking the stereo notch by notch to keep pace with a guy in the office next to mine who was also doing an all-nighter. The company cafeteria was open for three meals a day.

The neat thing is that once you’ve gotten into a project to this extent, the project tends to write itself and you’re just along for the ride. Life is defined by work, and then the boring eating/sleeping stuff. I know that sounds hellish, but it’s really a tremendous amount of fun. I was like 21 years old and being paid to do something that I think I would have done for free.

We used a Data General minicomputer, an MV/8000, for cross-development. This was the machine that Tracy Kidder’s book Soul of a New Machine was all about. While it wasn’t a VAX running Unix (which I would have preferred) it was still pretty easy to use and had some decent tools (no Emacs, though). We used a version of the Atari Macro Assembler that had been ported to the MV/8000, and that was worlds better than the miserably slow Assembler/Editor cartridge I’d done Myriapede on, but everything had to be downloaded to our development systems at 9600 baud, so turnaround time became a big issue toward the end of a project, especially since we had to share the MV/8000 with forty or fifty other people during the day, just like the overloaded mainframe back in college. I’d often stay late, and after about six PM the systems were pretty fast again (five minutes, instead of nearly an hour).

– – – –

My very first day at work I arrived at my office after orientation and found an Atari 800 computer in a box. I spent a little while setting the machine up, got it working, and went to get coffee.

When I returned, a staffer appeared in my door. “Oh,” she exclaimed, “You knew how to set up your computer! I was going to do that.”

“Well, thanks, but…” Didn’t everybody know how? Setting up an Atari computer wasn’t amazingly simple and obvious, but it wasn’t all that hard, either.

It was a portent of things to come. My first officemate didn’t know how to set up his computer. He didn’t know anything, it appeared. He’d been hired to work on Dig Dug, and he was completely at sea. I had to teach him a lot, including how to program in assembly, how the Atari hardware worked, how to download stuff, how to debug. It was pretty bad.

That would be a general theme throughout my tenure at Atari. Newly hired people didn’t necessarily know how to do their jobs, and I spent a lot of time helping them figure stuff out that they should have known in order to land a job in the first place. Atari’s hiring practices were not very careful.

– – – –

I’d been writing in C for a number of years, and I developed a sort of pidgin C that I used in fleshing out modules. I’d write a few pages in this high-level pseudo-C, then spend half a day “compiling” it into 6502 assembler. Sometimes a significant chunk of code would work the first time (this is a scary experience, really it is).

The other thing I “got” somehow was that comments were important. I’d seen a bunch of OS code (including the 400/800 OS sources) and it was really nice and understandable. But most of the game sources I saw in the consumer division were crap, just absolute garbage: almost no comments, no insight as to what was going on, just pages of LDA/STA/ADD and alphabet soup, and maybe the occasional label that was meaningful. But otherwise, totally unmaintainable code. For the most part that was okay in the games industry, since almost none of the code in the company was ever re-used or shared (the exception being well-debugged subroutines in the Atari Coin-Op division for doing math and operating the coin mechanisms of the arcade machines).

I think that DK is one of the best-commented consumer games that Atari shipped (Super Pac-man is better, but it arguably didn’t ship). Customers don’t see comments, but other engineers do, and it’s worthwhile for them to learn from what you’ve done. For instance, Mario’s jump moves are derived from basic physics of motion, and the calculus-based equations are in the source, nicely formatted so you can see where the magic equates just below came from. After DK shipped, a cow-orker of mine got a copy of the source listing, spent a week reading it and said that he was blown away (“I don’t know how you could have typed all that, certainly not in just five months, and when I saw the motion stuff my jaw hit the floor.”) Blush. Code should both entertain and educate.

Donkey Kong shipped in mid-March of 1983. I vaguely recall a small party at work, but mostly I was glad it was all over.

– – – –

Technical details. Kong is in graphics mode $E (192 scanlines by 160 color clocks wide). When a level is started up, the background is stamped once. Barrels and other creatures are XOR’d onto the screen (I had some mask-and-repaint code at one point, but it was way too slow). Mario is a few player objects (three, I think). The “prize” objects (umbrellas, etc.) are the remaining players. The XOR graphics are pretty annoying to me, but most other people didn’t seem to mind and some people even thought it was cool.
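
If you haven't met XOR graphics before: the trick is that XOR-ing a sprite onto the screen a second time restores the background exactly, so you can erase an object without saving what was under it. A minimal sketch in Python (illustrative only; Dyer's actual code is 6502 assembly):

[code]
framebuffer = bytearray([0b10110010, 0b01001101])   # toy two-byte "screen"
sprite      = bytes([0b11110000, 0b00001111])

def xor_blit(fb: bytearray, spr: bytes, offset: int = 0) -> None:
    for i, b in enumerate(spr):
        fb[offset + i] ^= b

before = bytes(framebuffer)
xor_blit(framebuffer, sprite)   # draw: pixels where sprite and background
                                # overlap come out inverted (the artifact
                                # mentioned above)
xor_blit(framebuffer, sprite)   # draw again: the background is back with no
                                # mask-and-repaint pass needed
assert bytes(framebuffer) == before
[/code]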

All of the sound was done by Brad Fuller. Mona Lundstrom did a lot of the graphics design (but I wound up replacing most of it). The ‘cartoon’ sequences were given to another engineer, whose code I had to entirely replace (he originally wanted to do the job in FORTH, and didn’t understand that the game couldn’t afford to devote half the cartridge space to a FORTH interpreter just to make his life easier).

At its peak DK was about 20K of code, and it had to go on a diet to fit in the 16K cartridge; a lot of the images were compressed (notice that Kong himself is symmetrical). Towards the end I was crunching out only a few bytes a day, and it shipped with maybe a dozen bytes free.
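
That aside about Kong being symmetrical hints at the compression: a left-right symmetrical sprite only needs half of each row stored, with the bits mirrored at draw time. A toy illustration (the cartridge's actual compression scheme isn't documented in the post):

[code]
def mirror_byte(b: int) -> int:
    """Reverse the 8 bits of a byte."""
    return int(f"{b:08b}"[::-1], 2)

def expand_row(left_half: bytes) -> bytes:
    """Rebuild a full sprite row from its stored left half."""
    return left_half + bytes(mirror_byte(b) for b in reversed(left_half))

row = expand_row(bytes([0b11000001, 0b00111100]))
print(" ".join(f"{b:08b}" for b in row))
# 11000001 00111100 00111100 10000011  <- a 4-byte row from 2 stored bytes
[/code]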

There’s an easter egg, but it’s totally not worth it, and I don’t remember how to bring it up anyway (something like: Die on the ‘sandpile’ level with 3 lives and the score over 7,000).

For tuning difficulty, I slowed the game way down and simply made sure that it was possible to play. Some of the object movement is random, but should be within beatable constraints, assuming you are fast enough.

– – – –

The first division meeting I went to strongly hinted at the future of Atari. It was greek to me, but the basic message from management was that sales were slowing, margins were plummeting, and that the company was going to have to restructure to stay profitable.

The building next to mine was the first to go; Atari used it to manufacture the 2600 game console. They moved the building’s manufacturing overseas and laid off most of the people who worked in it.

There were some distant purges in marketing. The little “conversion” group of 8 programmers I was in had been moved to a satellite location far away from any of Atari’s major buildings, so we were pretty isolated from what was going on, but even from a distance it was clear that things weren’t going well. The game industry had essentially crashed, and Atari was putting millions of unsold cartridges into landfills. All of the mistakes that wild success had covered up were coming around to bite hard.

My office-mate had finally finished Robotron. By request, she made three versions of the ROM image, located at different ROM addresses. Unfortunately, the Q/A staff was only able to test two of the images. Guess which image Atari sent to be manufactured? Guess which image had a fatal bug? I saw a hardware engineer struggle to come up with a cheap gate-or-two fix that would make the game work; only a few bytes of it were wrong. In the end, Atari threw $200,000 worth of ROMs away.

I have the impression that mistakes like that were being made all over. This was compounded by the fact that games were just not selling; fueled by time-to-market, Atari marketing had forced its engineers to write games that lacked polish and fun, and that practice had come back around. People were bored with playing the same old junk.

There were layoffs and reorgs every few months. Our little group moved to one corner of the Coin-Op division’s building; a consolidation to save money. I was working on Super Pac-Man and nobody seemed to care, so I took my time on it and did a good job.

Eventually Jack Tramiel bought the parts of Atari that he wanted, and I wound up working on the Atari ST, but that’s another story.


----------

DK source code link
Posted on August 30, 2008 by landon

https://www.dadhacker.com/blog/?p=1047

Curt Vendel (who’s been groveling through a bunch of old Atari backup tapes for a number of years) has found and posted the source code to the Atari 800 version of Donkey Kong.

Here’s a pointer to the forum thread: https://www.atariage.com/forums/index.php?showtopic=130904

Update: Mirrored here: https://www.dadhacker.com/Downloads/Donkey_Kong.zip

Certainly brings back memories. There’s less code than I remember, and some bits of it are really a mess, but it’s fun to see this again. I should add that you’ll just see a bunch of assembly code, with not a whole lot of insight into how the game was actually developed (when I finally got to see the source for Quake, I remember feeling somewhat disappointed — “That’s it? My God, that’s a stupid hack.” If you take the trouble to look, I’d not expect revelation, just a bunch of code written by a young guy in a hurry).
 
Iron Curtain era PC:

ROMANIA’S 1980S ILLICIT DIY COMPUTER MOVEMENT

https://hackaday.com/2017/11/07/romanias-1980s-illicit-diy-computer-movement/

https://hackaday.io/project/1411-xor-hobby-a-vintage-z80-computer-prototype

I started this project when I was 18, back in 1987, and developed it for a few years.

Hypertext (HTML) and the WWW were not invented yet, so all the schematics (hardware) were designed starting from a few (paper) books and some data sheets. I was living in Bucharest, Romania, under the communist dictator Ceausescu. In 1987 the Berlin Wall was still in place, so integrated circuits (ICs) produced in western countries were not available in shops. Even ICs fabricated in Romania or Russia were not available in shops, so I bought almost every piece of hardware on the black market.

The PCB was a universal test board. Soldered wrapping wire, not wrapped, was used for all the electrical connections. To be able to run CP/M applications I wrote a CP/M BIOS in Z80 assembler.

At that moment in time, the "Hobby" computer was cutting-edge technology:

- compatible with both of the major Z80 operating systems of the time: CP/M and ZX Spectrum
- single (256 x 192) and double (512 x 192) screen resolution
- hardware-switchable Z80 clock: 3.5 MHz (standard) or 7 MHz (double)
- software memory paging (see the sketch after this list): the Z80 can address a maximum of 64 KB of memory, but this system had a total of 114 KB (64 + 16 KB of RAM and 32 + 2 KB of EPROM)
- the first overclock I knew of at the time (back then the maximum rated clock frequency for a Z80 microprocessor was 6 MHz; mine was working at 7 MHz)
- the first "overburn" on floppy disks that I knew of (similar to CD overburning, but on floppy disks): the standard file system normally had 80 tracks on a floppy; mine had up to 85 tracks.

More than two decades later I found the "Hobby" prototype full of dust, and gave it a try.

Guess what, it's still working!


5382331402262596379.jpg


9262891402262605908.jpg


5348811402262613159.jpg
 
We used to have a computer for our pilot project in Melbourne (Australia) tracking buses back in the early 80's, before a major project to fit out the whole fleet was instigated. It was wired the same way. Interestingly, whoever built it used cheap IC sockets (not the nice machined-pin sort in the pics above), so it was very unreliable. About twice a week someone (usually me, as a trainee technical officer) was sent in there to gently tap each IC with the back of a screwdriver and hit the reset button. I got sick of it very quickly, so over a few days I scheduled some off-peak downtime and swapped the cheap Molex sockets for gold Augat ones. It only needed rebooting about every six months after that :)

I have used the same technique myself for a few projects. A computer controlled CB radio scanner was probably the most complex. Again, back in the 80's.

Ahh, the good old days. Geez I am glad they are gone. I have just recently designed a processor system that would have filled the memory in my first computer (built in '78) in less than a microsecond. The remarkable thing is that the FR4 is the same, the copper is the same, rules of physics are the same (mostly!). Just the silicon and design skills are allowing the higher speeds.
 
The remarkable thing is that the FR4 is the same, the copper is the same, rules of physics are the same (mostly!). Just the silicon and design skills are allowing the higher speeds.
And a much lower parts count to accomplish much more.
 
4 Strange New Ways to Compute

https://spectrum.ieee.org/nanoclast/computing/hardware/4-strange-new-ways-to-make-a-computer

With Moore’s Law slowing, engineers have been taking a cold hard look at what will keep computing going when it’s gone. Certainly artificial intelligence will play a role. So might quantum computing. But there are stranger things in the computing universe, and some of them got an airing at the IEEE International Conference on Rebooting Computing in November.

There were also some cool variations on classics such as reversible computing and neuromorphic chips. But some less-familiar ones got their time in the sun too, such as photonics chips that accelerate AI, nano-mechanical comb-shaped logic, and a “hyperdimensional” speech recognition system. What follows includes a taste of both the strange and the potentially impactful.

Cold Quantum Neurons

Engineers are often envious of the brain’s marvelous energy efficiency. A single neuron only expends about 10 femtojoules (10⁻¹⁵ joules) with each spiking event. Michael L. Schneider and colleagues at the U.S. National Institute of Standards and Technology think they can get close to that figure using artificial neurons made up of two different types of Josephson junctions. These are superconducting devices that depend on the tunneling of pairs of electrons across a barrier, and they’re the basis of the most advanced quantum computers coming out of industrial labs today. A variant of these, the magnetic Josephson junction, has properties that can be tuned on the fly by varying currents and magnetic fields. Both can be operated in such a way that they produce spikes of voltage using only zeptojoules of energy—on the order of a 100,000th of a femtojoule.

The NIST scientists saw a way to link these devices together to form a neural network. In a simulation, they trained the network to recognize three letters (z, v, and n—a basic neural network test). Ideally, the network could recognize each letter using a mere 2 attojoules, or 2 femtojoules if you include the energy cost of refrigerating such a system to the needed 4 degrees Kelvin. There are a few spots where things are quite a bit less than ideal, of course. But assuming those can be engineered away, you could have a neural network with power consumption needs comparable to those of human neurons.
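
A quick sanity check on those units, since the SI prefixes fly by fast (just conversions, no new data):

[code]
fJ, aJ, zJ = 1e-15, 1e-18, 1e-21   # femto-, atto-, zeptojoules

spike = 10 * zJ                    # "zeptojoules" per spike, order of magnitude
print(spike / fJ)                  # 1e-05 -> a 100,000th of a femtojoule

print((2 * fJ) / (2 * aJ))         # -> ~1000: cooling the system to 4 K costs
                                   # about 1000x the raw computing energy
[/code]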

Computing with Wires

With transistors packed so tightly in advanced processors, the interconnects that link them up to form circuits are closer together than ever before. That causes crosstalk, where the signal on one line impinges on a neighbor via a parasitic capacitive connection. Rather than trying to engineer the crosstalk away, Naveen Kumar Macha and colleagues at the University of Missouri Kansas City decided to embrace it. In today’s logic the interfering “signal propagates as a glitch,” Macha told the engineers. “Now we want to use it for logic.”

They found that certain arrangements of interconnects could go a long way toward mimicking the actions of fundamental logic gates and circuits. Imagine three interconnect lines running parallel. Applying a voltage to either or both of the lines on the sides causes a crosstalk voltage to appear on the center line. Thus you have the makings of an OR gate with two inputs. By judiciously adding in a transistor here and there, the Kansas City crew constructed AND, OR, and XOR gates as well as a circuit that performs the carry function. The real advantage comes when you compare the transistor count and area to CMOS logic. For example, crosstalk logic needs just three transistors to carry out XOR while CMOS uses 14 and takes up one-third more space.
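
The OR gate they describe is easy to picture as a toy model: each energized neighbor couples some voltage onto the victim line, and the summed glitch is thresholded. The coupling and threshold numbers below are made up purely for illustration:

[code]
COUPLING, THRESHOLD, VDD = 0.4, 0.3, 1.0    # illustrative values only

def crosstalk_or(a: int, b: int) -> int:
    induced = COUPLING * (a + b) * VDD      # both aggressors add their glitch
    return int(induced >= THRESHOLD)        # a sensing/threshold stage

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", crosstalk_or(a, b))   # prints the OR truth table
[/code]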

Attack of the Nano-Blob!

Scientists and engineers at Durham University in England have taught a dollop of chemicals to solve classification problems, such as spotting a cancerous lesion in a mammogram. Using evolutionary algorithms and a custom circuit board, they sent voltage pulses through an array of electrodes into a dilute mix of carbon nanotubes dispersed in a liquid crystal. Over time, the carbon nanotubes—a mix of conducting and semiconducting varieties—arranged themselves into a complex network that spanned the electrodes.

This network was capable of carrying out the key part of an optimization problem. What’s more, the blob could then learn to solve a second problem, so long as that problem was less complex than the first.

Did it solve these problems well? In one case, the results were comparable to a human’s; in the other, they were a bit worse. Still, it’s amazing that it works at all. “What you have to remember is that we’re training a blob of carbon nanotubes in liquid crystals,” said Elèonore Vissol-Gaudin, who helped develop the system at Durham.

Silicon Circuit Boards

Computer designers have long bemoaned the mismatch between how quickly and efficiently data moves within a processor and how much more slowly and wastefully it moves between processors. The problem, according to engineers at the University of California Los Angeles, lies in the nature of chip packages and the printed circuit boards they connect with. Both chip packages and circuit boards are poor conductors of heat, so they limit how much power you can expend, they increase the energy needed to move a bit from one chip to another, and they slow computers down by adding latency. To be sure, industry has recognized a lot of these disadvantages and increasingly focuses on putting multiple chips together in the same package.

Puneet Gupta and his UCLA collaborators think computers would be much better if we got rid of both packages and circuit boards. They propose replacing the printed circuit board with a portion of silicon wafer. On such a “silicon integrated fabric,” unpackaged bare silicon chips could snuggle up within 100 micrometers of each other connected by the same type of fine, dense interconnects found on ICs—limiting latency and energy consumption and making for more compact systems.

If industry really did go in this direction, it would likely lead to a change in what kinds of ICs are made, Gupta contends. Silicon integrated fabric would favor breaking up systems-on-a-chip into small “chiplets” that do the functions of the various cores of the SoC. That’s because the SoC’s close integration would no longer give much of an advantage in terms of latency and efficiency, and it’s cheaper to make smaller chips. What’s more, because silicon is better than printed circuit boards at conducting heat, you could run those processor cores at higher clock speeds without having to worry about the heat.
 
For designing new and maintaining existing nukes without actual testing:

[video=youtube;z9eZs2GBn9c]https://www.youtube.com/watch?v=z9eZs2GBn9c[/video]

[video=youtube;OoajYVQuIhA]https://www.youtube.com/watch?v=OoajYVQuIhA[/video]
 
The wire wrapped Amiga prototype. Gawd! Unsurprisingly, it gave them some connection reliability headaches.

CK29JtQUsAEP8xT.jpg
 
"The LGP-30 was used by Edward Lorenz in his attempt to model changing weather patterns. His discovery that massive differences in forecast could derive from tiny differences in initial data led to him coining the terms strange attractor and butterfly effect, core concepts in chaos theory."

LittleGP-30: An FPGA-based LGP-30 Replica

https://www.e-basteln.de/lgp30/lgp30_intro.html

The LGP-30 was a commercial computer, released in 1956. Due to its simple design and relatively low cost, it may be seen as the first “personal computer” – to be used by a single user as their “desk computer”. (It could sit by your desk, and was the size of a desk too.) Designed in the age of vacuum tubes, it needed only 113 tubes in total, of which only 24 were used in the CPU itself! This simplicity was achieved by a bit-serial CPU design, which was tightly integrated with the magnetic drum storage unit. The magnetic drum contained not only the main memory, but also the CPU’s three 32-bit registers, and several tracks with timing signals to control the instruction decoding and execution.

This page describes a replica that is true to the bit-serial implementation and its timing, but uses modern components. The CPU and the magnetic drum storage are recreated in an FPGA – by implementing the complete logic equations published by the LGP-30’s inventor (in a scientific paper and in the computer’s service manual). The magnetic drum, while implemented in on-board memory inside the FPGA, is made tangible via an optional video display of its contents.

This way, one can play with all the quirks of the LGP-30, including the timing behavior of programs, which depends critically on the position of instructions and data on the magnetic drum. For those who want to really dive into the details of the bit-serial design, the clock rate can be slowed down (all the way to single step, bit-by-bit clocking), and the contents of the CPU’s tube-based flip-flops can be inspected on an LCD (all 15 bits of them!).
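
The heart of a bit-serial machine like this is startlingly small: operands stream past one bit at a time, least-significant bit first, straight off the drum, and a single one-bit adder with a carry flip-flop does all the arithmetic. A minimal sketch of the idea (not taken from the LGP-30's actual logic equations):

[code]
def serial_add(a_bits, b_bits):
    """One-bit-at-a-time addition, LSB first, like a drum machine's adder."""
    carry = 0
    for a, b in zip(a_bits, b_bits):            # one drum bit-time per step
        yield a ^ b ^ carry                     # sum bit of a full adder
        carry = (a & b) | (carry & (a ^ b))     # the lone carry flip-flop

a = [1, 0, 1, 1, 0]                  # 13, least-significant bit first
b = [0, 1, 0, 1, 0]                  # 10, least-significant bit first
print(list(serial_add(a, b)))        # [1, 1, 1, 0, 1] -> 23
[/code]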

LGP30.png


LittleGP-30%20handheld%20small.JPG
 
Not Your Father’s Analog Computer
Scientists and engineers may benefit from a long-abandoned approach to computing

https://spectrum.ieee.org/computing/hardware/not-your-fathers-analog-computer

When Neil Armstrong and Buzz Aldrin landed on the moon in 1969 as part of the Apollo 11 mission, it was perhaps the greatest achievement in the history of engineering. Many people don’t realize, though, that an important ingredient in the success of the Apollo missions and their predecessors was analog and hybrid (analog-digital) computers, which NASA used for simulations and in some cases even flight control. Indeed, many people today have never even heard of analog computers, believing that a computer is, by definition, a digital device.

If analog and hybrid computers were so valuable half a century ago, why did they disappear, leaving almost no trace? The reasons had to do with the limitations of 1970s technology: Essentially, they were too hard to design, build, operate, and maintain. But analog computers and digital-analog hybrids built with today’s technology wouldn’t suffer the same shortcomings, which is why significant work is now going on in analog computing in the context of machine learning, machine intelligence, and biomimetic circuits.

This electronic analog computer, the PACE 16-31R, manufactured by Electronic Associates Inc., was installed at NASA’s Lewis Flight Propulsion Laboratory (now called the Glenn Research Center), in Cleveland, in the mid-1950s. Such analog computers were used, among other things, for NASA’s Mercury, Gemini, and Apollo programs:

Mjk4NjcxNg.jpeg


Energy-Efficient Hybrid Analog/Digital Approximate Computation in Continuous Time

https://ieeexplore.ieee.org/document/7463004/

Abstract:

We present a unit that performs continuous-time hybrid approximate computation, in which both analog and digital signals are functions of continuous time. Our 65 nm CMOS prototype system is capable of solving nonlinear differential equations up to 4th order, and is scalable to higher orders. Nonlinear functions are generated by a programmable, clockless, continuous-time 8-bit hybrid architecture (ADC + SRAM + DAC). Digitally assisted calibration is used in all analog/mixed-signal blocks. Compared to the prior art, our chip makes possible arbitrary nonlinearities and achieves 16× lower power dissipation, thanks to technology scaling and extensive use of class-AB analog blocks. Typically, the unit achieves a computational accuracy of about 0.5% to 5% RMS, solution times from a fraction of 1 µs to several hundred µs, and total computational energy from a fraction of 1 nJ to hundreds of nJ, depending on equation details. Very significant advantages are observed in computational speed and energy (over two orders of magnitude and over one order of magnitude, respectively) compared to those obtained with a modern microcontroller for the same RMS error.


Mjk4NjcxOA.jpeg
 
Homebrew Cray-1A
Tiny Cray Courtesy of an FPGA

fpga_cray.jpg


The actual design was implemented in a Xilinx Spartan-3E 1600 development board. This is basically the biggest FPGA you can buy that doesn’t cost thousands of dollars for a devkit. The Cray occupies about 75% of the logic resources, and all of the block RAM.

spartan3_1600.jpg


This gives us a spiffy Cray-1A running at about 33 MHz, with about 4 kilowords of RAM. The only features currently missing are:

-Interrupts

-Exchange Packages (this is how the Cray does ‘context-switching’ – it was intended as a batch-processing machine)

-I/O Channels (I just memory-mapped the UART I added to it).

If I ever find some software for this thing (or just get bored), I’ll probably go ahead and add the missing features. For now, though, everything else works sufficiently well to execute small test programs and such.

When I started building this, I thought “Oh, I’ll just swing by the ol’ Internet and find some groovy 70’s-era software to run on it.” It turns out I was wrong. One of the sad things about pre-internet machines (especially ones that were primarily purchased by 3-letter Government agencies) is that practically no software exists for them.

After searching the internet exhaustively, I contacted the Computer History Museum and they didn’t have any either. They also informed me that apparently SGI destroyed Cray’s old software archives before spinning them off again in the late 90’s. I filed a couple of FOIA requests with scary government agencies that also came up dry.
I wound up e-mailing back and forth with a bunch of former Cray employees and also came up *mostly* dry. My current best hope is a guy I was able to track down who happened to own an 80 MB ‘disk pack’ from a Cray-1 Maintenance Control Unit (the Cray-1 was so complicated, it required a dedicated mini-computer just to boot it!), although it still remains to be seen if I’ll actually get a chance to try to recover it.

Without a real software stack (compilers, operating systems, etc.), the machine isn’t terribly useful (not that it would be all that useful if I did have software for it). All of the opcodes and registers for the Cray-1 are described in base-8 (octal), so I did at least write a little script to translate octal machine code into the hexadecimal format that Xilinx’s tools require. All of my programming so far has just been in straight octal machine code, assembled in my head. I have started work on re-writing the CAL Assembler, but that may take a while, as it employs some tricky parsing that I’m having to teach myself.
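
The octal-to-hex helper he mentions would look something like this (my guess at the idea, not his script; the exact Xilinx memory-file format isn't shown, so this just emits one 16-bit hex word per line of octal):

[code]
def octal_to_hex(lines):
    """Translate lines of octal machine code into 16-bit hex words."""
    for line in lines:
        word = int(line.strip(), 8)     # Cray-1 docs give everything in octal
        yield f"{word:04X}"

program_octal = ["040123", "051234"]    # made-up instruction parcels
print("\n".join(octal_to_hex(program_octal)))
# 4053
# 529C
[/code]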

What’s the point of owning a Cray-1 if it doesn’t *look* like a Cray-1?? Unfortunately, the square-shaped FPGA board isn’t conducive to actually making it the traditional “C” shape, but I think it turned out pretty cool anyway. My friend Pat was nice enough to let me use his CNC milling machine to cut out the base pieces (and help with assembly). It’s a combination of MDF, balsa wood and pine. There was also a healthy dose of blood, sweat and tears (and gorilla glue) involved.

img_2311.jpg
 
Fascinating potential:

[video=youtube;F7REp0Y9edA]https://www.youtube.com/watch?v=F7REp0Y9edA[/video]
 
The High School Student Who’s Building His Own Integrated Circuits
Sam Zeloof has turned his parents' garage into a 1970s-era fab
22 Dec 2017

https://spectrum.ieee.org/semicondu...ent-whos-building-his-own-integrated-circuits

Zeloof says he has been working on his garage fab, located in his home near Flemington, N.J., for about a year. He began thinking about how to make chips as his “way of trying to learn what’s going on inside semiconductors and transistors. I started reading old books and old patents because the newer books explain processes that require very expensive equipment.”

A key moment came when Zeloof found Jeri Ellsworth’s YouTube channel, where she demonstrated how she had made some home-brew silicon transistors a few years ago. “After I saw [Ellsworth’s] videos I started to make a plan of how I could actually start to do this.”

It took Zeloof about three months to replicate Ellsworth’s transistors. “That was getting my feet wet and learning the processes and everything, and acquiring all the equipment,” he says. “My goals from there were to build on what she did and make actual ICs.” So far, he has made only simple integrated circuits with a handful of components, but he is aiming to build a clone of the ur-microprocessor, the Intel 4004, released in 1971. “It’s got about 2,000 transistors at 10 micrometers.... I think that’s very attainable,” says Zeloof.

He obtained much of his raw materials and equipment from online sellers, in various states of repair. “Acquiring all the equipment and building and fixing all the stuff I take off eBay is half of the whole journey,” he says. His equipment includes a high-temperature furnace, a vacuum chamber built from surplus parts, and a scanning electron microscope. The electron microscope was “a broken one from a university that just needed some electrical repairs,” says Zeloof. He estimates that the microscope originally cost about $300,000 back in 1996. It was listed for sale at $2,500, but Zeloof persuaded the seller to take “well below that” and ended up spending more on shipping than it cost to buy the microscope.

To pattern the circuits on his chips, Zeloof uses a trick not available in the 1970s: He’s modified a digital video projector by adding a miniaturizing optical stage. He can then create a mask as a digital image and project it onto a wafer to expose a photoresist. With his current setup Zeloof could create doped features with a resolution of about 1 µm, without the time and expense of creating physical masks (however, without a clean-room setup to prevent contamination, he says 10 µm is the limit for obtaining a reasonable yield of working devices). The scanning electron microscope then comes in handy as a diagnostic tool: “I can tell instantly, ‘Oh, it’s overdeveloped. It’s underdeveloped. I have an undercut. I have this. I have that. I have particles that are going to short out the gate area.’ ”

Since he started blogging about his project in 2017 ( https://sam.zeloof.xyz/ ), Zeloof has received a lot of positive feedback, including helpful tips from veteran engineers who remember the kind of processes used in the early 1970s. Zeloof hopes that if he can develop a relatively straightforward process for making his 4004 clone, it will open the door for other chips of his own design. “If all goes well, maybe I could make chips for people in the [maker] community—in small batches.”


Mjk5NjExNA.jpeg


Mjk5NjE2NA.jpeg
 
Although I have no interest in the end product created by this laser cutter, it's a perfect example of my favorite combination: cheap electronic modules combined on a DIY motherboard and 3D printed technical parts instead of novelty prints (aka future landfill). This is beautifully done:

Coasty Version 1.2

https://www.buildlog.net/blog/2017/11/coasty-version-1-1/
 
The Shocking Truth Behind Arnold Nordsieck’s Differential Analyzer
To program this electromechanical computer, you plugged in the cords in specific patterns—and with extreme caution

https://spectrum.ieee.org/tech-hist...ehind-arnold-nordsiecks-differential-analyzer

In 1950, the physicist Arnold Nordsieck built himself this analog computer. Nordsieck, then at the University of Illinois, had earned his Ph.D. at the University of California, Berkeley, under Robert Oppenheimer. To make his analog computer for calculating differential equations, the inventive and budget-conscious Nordsieck relied on US $700 worth of military surplus parts, particularly synchros—specialized motors that translate the position of the shaft into an electrical signal, and vice versa.

Nordsieck’s “Synchro Operated Differential Analyzer” was one of many electro-mechanical and electronic analog computers built in the 1950s. Compared with their digital counterparts, they were generally far faster at solving things like differential equations. In Nordsieck’s machine, the synchro units provided mathematical functions like integration and addition. The units were wired to a plugboard, where they could be interconnected in various ways using patch cords.

As with other analog computers, each calculation required its own setup. You plugged in the tangle of patch cords to the left in a particular pattern. The cords served as the computer’s control program, with other parts of the program embodied and executed by the spinning disks, gears, rotating shafts, cranks, and the like. (You can read Nordsieck’s early description of the computer here [PDF] and his written instructions here [PDF].)

The first step of setup was to remove all of the patch cords; you then plugged in the power cord and turned on the computer. Then, you replugged the patch cords in the proper pattern for the desired equation.

Programming the differential analyzer could be painful. “The plugging operation is unfortunately accompanied by an electric shock hazard (though hardly a dangerous one),” Nordsieck warned, “since once one end of a cord is plugged in, the prongs of the free plug may have up to 105 volts of potential difference between them. Hence the operator should hold the live plug in such a way that the prongs do not touch him or anyone else or any metal.” Risk and reward were thus connected, even for a home-brew analog computer.


Mjk4NTUzNQ.jpeg
 
I learned a little about those types of computers and how they work while in the Navy in '81. The fire control computers for the 16" guns on the battleships were still analog computers with the synchros, servos, electrical adders, etc. They solved the fire control problem for the main guns and had been in use since well before WWII. I don't know if they solved differential equations, but they took a lot of inputs from dials and switches and instantly aimed the guns.
 