On the Disquiet Junto Slack, one of our members posted that they’d had a dream:
I had a dream about a piece of gear last night. I wouldn’t say that it was “dream gear,” though it was still cool. It was a small black metal box, about the size of three DVD cases stacked on top of each other. There were a few knobs and sliders, a small 3-inch speaker, headphone out, and a telescoping antenna, so it kinda looked like a little radio at first. The antenna was there for radio reception but there was other stuff going on. It was intended to be used as a meditation/sleep aid/ASMR machine. There were sliders for a four-band EQ and a tuning knob for the radio. The tuning knob had a secondary function that tuned a drone sound (kinda sounded like a triangle wave fed through a wavefolder/resonance thinger). The other feature of this box was something like a numbers stations generator. Another slider was for the mix between the drone and a woman’s voice speaking random numbers and letters from the NATO alphabet in a Google Assistant-/Alexa-/Siri-type voice but with far less inflection. The four-band EQ was to be used like a mixer as well in that it was how a person could adjust how much of the radio signal was audible over the drone/numbers by using the output gain of the EQ. There was also a switch that fed the drone/numbers signal into the EQ as well. The EQ was intentionally low-quality so that when you took it above 0dB, it would distort.
The Disquiet Junto Slack, #gear channel
Now, what was weird was that I’d been doing something like this in AUM; I had a quiet ambient Dorian sequence driven by ZOA on several instances of KQ Dixie (a DX7 emulator), and was using Radio Unit (a radio streaming AU) to layer in some birdsong. I realized I could mostly emulate the dream box if I added another Radio Unit to pull in some random stations, but generating the “numbers station” audio was more of a challenge – until I remembered that OS X has the say command, which lets you use the built-in speech synthesizers to pronounce text from the command line.
I sat down, and after some fiddling (and looking up “how to add arbitrary pauses” so the rhythm was right), I wrote NATO::Synth to generate the strings I wanted and pass them to say. It has a few nice little tweaks, like caching the strings it creates so it can decide to repeat itself, and properly inflecting the start and end of each “sentence”.
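I won’t reproduce NATO::Synth here, but the core idea is simple enough to sketch. Here’s a rough Python approximation – not the actual code, and the voice name and pause lengths are just assumptions – of generating the strings and handing them to say:

```python
#!/usr/bin/env python3
"""Rough sketch of the NATO::Synth idea (not the original code): build random
letter/digit strings, spell the letters with the NATO alphabet, and hand the
result to the macOS say command."""
import random
import subprocess

NATO = {
    "A": "alfa", "B": "bravo", "C": "charlie", "D": "delta", "E": "echo",
    "F": "foxtrot", "G": "golf", "H": "hotel", "I": "india", "J": "juliett",
    "K": "kilo", "L": "lima", "M": "mike", "N": "november", "O": "oscar",
    "P": "papa", "Q": "quebec", "R": "romeo", "S": "sierra", "T": "tango",
    "U": "uniform", "V": "victor", "W": "whiskey", "X": "x-ray",
    "Y": "yankee", "Z": "zulu",
}

def sentence(length=6):
    """One "numbers station" sentence: a mix of digits and NATO words,
    separated by [[slnc N]] embedded commands so say pauses between items."""
    items = []
    for _ in range(length):
        if random.random() < 0.5:
            items.append(str(random.randint(0, 9)))
        else:
            items.append(NATO[random.choice(list(NATO))])
    # [[slnc 400]] is Apple's embedded "silence" command, in milliseconds.
    return " [[slnc 400]] ".join(items)

if __name__ == "__main__":
    for _ in range(4):
        text = sentence()
        print(text)
        # "Samantha" is just an example voice; say -v '?' lists what's installed.
        subprocess.run(["say", "-v", "Samantha", text], check=True)
```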
I saved the generated audio (recorded with Audio Hijack) to iCloud, loaded it into AUM, and then recorded the results. Very pleased with it!
I needed a little mental relaxation this weekend, so I spent a while playing mainframe model trains by bringing up Hercules.
I initially tried the MVS Turnkey system, but ran into some issues – mainly that there are no working (free) 3270 emulators for Big Sur. I couldn’t set up any consoles: Homebrew’s x3270 didn’t seem to work under XQuartz, and I had no intention of spending $29 just to fool around with MVS for a bit, or of spending multiple hours trying to get X11 builds working, so I dropped back to Jay Maynard’s MVT installation instructions.
They’re a bit out of date at the moment, and the Right Thing would probably be to make the fixes in both the instructions and the files, and move the corrected instructions over to a wiki somewhere. For now I’m leaving my notes here so I don’t forget what I did, and so I can do that later if I get the time. More fooling with the console and running jobs, less file twiddling.
dlibs/DN554.XMI was recovered. The srclibs/dn554 and related files in srclibs/TAPEFILE.ZIP have not been modified (recovered) to match, and should be at some point.
srclib/fo520/IEYUNF.txt has been renamed to original.IEYUNF.txt and a version recovered from the MTS distribution has been added as IEYUNF.txt. The related file in srclibs/TAPEFILE.ZIP needs to be fixed as well.
Maynard’s instructions are definitely of their time, when cutting and pasting commands was not a thing. The relevant commands are mentioned once, and one is expected to remember them. If I update these, I’ll inline the relevant command and note which console they get typed into. Switching back and forth between the Hercules “hardware” and the MVT console was a tad confusing at times. A significant omission: the devinit 00c foo.jcl command works okay for the MFT starter system and MVT without HASP, but once HASP is running the devinit must include eof at the end of the command or the reader hangs and the job never starts. Also, one of the HASP job filenames is called out by its right name in the section header, but by the wrong one in the text. Looks like a cut and paste error.
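For reference, the two forms look like this on the Hercules console, with somejob.jcl standing in for whatever deck you’re feeding the reader – the first is fine on the starter system and on MVT without HASP, the second is what HASP needs:

```
devinit 00c somejob.jcl
devinit 00c somejob.jcl eof
```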
You should comment out or remove the 3270 definition at 0C0 in the mvt.cnf Hercules config file; if it’s there when you boot MVT but no 3270 emulator is connected to it, the machine hangs with a wait state 21. It took some googling to find that. TCAM will grumble about the missing device when it starts up, but that doesn’t hurt anything.
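In my copy the device statement looked roughly like this (yours may differ slightly); a # at the start of the line comments it out:

```
# 0C0    3270
```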
You will need to install telnet on your Mac, since Big Sur removes it; telnet localhost 3270 connects the virtual 3215 console. It might be worth trying to configure some 3215s for TSO and seeing if that works – we don’t really need to emulate real serial terminals.
Issuing mn jobnames,t and mn status at the MVT console will save you a lot of wondering about whether anything is going on, even after HASP is up.
Be sure to copy the prt00e.txt virtual printer text file when:
You’ve finished doing both stages of the sysgen on the MFT system.
You’ve finished installing HASP.
You’ve finished installing TCAM and TSO.
Otherwise you lose all those useful assembler outputs of the HASP hooks and the TSO interfaces to it. The sysgen and TCAM build output isn’t critical, but it’s nice to have.
Other things – http://hercstudio.sourceforge.net/ is supposed to be a hardware console emulator (lights and dials and stuff) for OS X and Linux. It’s written in Qt, which is fine, and the Makefile that qmake generates does build it, but it wants to install into /usr/bin, and Big Sur will not let anything go there even with sudo. I might consider just writing a native iOS one instead.
I’m up a couple hours later than I intended, but I have my notes, and as Adam Savage says, “the difference between science and screwing around is writing it down!”
I happened on a demonstration of a mental math trick on Reddit for squaring numbers in your head and was immediately reminded of a technique I learned in 1972 from a great book on speed arithmetic that I have unfortunately forgotten the name of.
The video’s formulation uses the identity n^2 = (n - a)(n + a) + a^2 to make the multiplication simpler, but the book had an extremely elegant way to notate a different identity that works nicely for doing the squares of two-digit numbers in one’s head, and for rapidly doing multi-digit squares on paper.
The Reddit example squared 32 by changing it to 32 * 32 = 30 * 34 + 4 = 1024, which is clever, but check this out!
Start with the identity (a + b)^2 = a^2 + 2ab + b^2 and treat 32 * 32 as (30 + 2)^2.
Visualize this in your head:
0904
 12
That’s a^2 and b^2 side by side on the first line (two digits each), and 2ab on the second, shifted in one column so its last digit sits in the tens place – the 12 is really 120. Now just add it up normally, with blank spaces equal to zeroes, and reading left to right you get 0, then 10, then 102, then 1024.
The left-to-right add means you never have to remember the carry value, just the changed result. Let’s try 47.
1649
 56
1, 21, 220, 2209. Simple.
The Wikipedia page on mental arithmetic is a great resource that has this technique, but it lacks the notation visualization shown here, which honestly is what makes it easy. The same technique works for larger numbers too. There’s more to remember, which may make it too hard to do in your head, but it makes squaring large numbers on paper trivial.
Let’s say we want to square 123:
010409
 0412
  06
1, 14, 151, 1512, 15129 – this time using (a + b + c)^2 = a^2 + b^2 + c^2 + 2ab + 2bc + 2ac. The squares go on the top row, 2ab on the left of the middle row, 2bc on the right of the middle row, and 2ac on the bottom.
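If you want to check the bookkeeping, here’s a small Python sketch (mine, not from the book) that adds up the same terms the rows represent – each digit squared in its own two-column slot, plus the doubled cross-products shifted into place:

```python
def square_by_rows(n: int) -> int:
    """Square n the way the paper notation does: squared digits in their
    own two-column slots, plus doubled cross-products shifted into place."""
    digits = [int(d) for d in str(n)]
    k = len(digits)
    total = 0
    # Top row: each digit squared, landing at twice its place value.
    for i, d in enumerate(digits):
        place = k - 1 - i                  # 10**place is this digit's place value
        total += d * d * 10 ** (2 * place)
    # Cross-product rows: 2 * d_i * d_j, landing at the sum of the two place values.
    for i in range(k):
        for j in range(i + 1, k):
            total += 2 * digits[i] * digits[j] * 10 ** ((k - 1 - i) + (k - 1 - j))
    return total

for n in (32, 47, 123):
    print(n, "->", square_by_rows(n))      # 1024, 2209, 15129
```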
I will admit that I didn’t properly get the multi-digit notation right 45 years ago; I hadn’t really understood the mapping of the identity to the positions on the page and was doing it by rote. The notation is the slickest part of this, as it automatically handles the proper number of multiplications by ten for you.
The left-to-right addition, and a trick for mental addition – repeating the current total to oneself while adding the next number so you don’t lose your place (e.g. 45 + 37 + 62: 45, 75, 75…82, 142, 144, then cast out 9s as a check: 0, 1, and 8 sum to 9, matching 1 + 4 + 4 = 9) – were also in that same book. I really wish I could remember what it was!
[This was originally asked on Quora, but the result of figuring this out was interesting enough that I thought I’d make it a blog post.]
It’s a very interesting question, because there are so many different kinds of computing capability in that one device: the parallel processing power of the GPU (slinging bits to the display) and the straight-ahead FLOPS of the ARM processor.
Let’s try some back of the envelope calculations and comparisons.
The iPhone 5’s A6 processor is a dual-core, triple-GPU device. The first multiprocessor computer was the Burroughs D825 (defense-only, of course).
A D825 had 1 to 4 processors, running at roughly 0.070 s per operation (about 14 FLOPS) for divide, the slowest operation, about 166 FLOPS for add, the fastest, and about 25 FLOPS for multiply. Let’s assume adds are 10x more frequent than multiply and divide to come up with an average speed of 35 FLOPS per processor, so 70 FLOPS for a 2-processor D825, handwaving CPU synchronization, etc.
Let’s take the worst number from the Geekbench stats via AnandTech for the iPhone 5’s processor: 322 MFLOPS doing a dot product, a pure-math operation reasonably similar to the calculations being done in 1962. Note that’s MFLOPS. Millions. To meet the worst performance of the iPhone 5 with the most optimistic estimate of a 2-processor Burroughs D825’s performance, you’d need 4.6 million of them.
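Spelling out that back-of-the-envelope division (using the very hand-wavy 35 FLOPS-per-processor figure from above):

```python
# Back-of-the-envelope, using the figures above.
d825_flops = 35 * 2                 # ~35 FLOPS per processor, 2 processors
iphone5_flops = 322e6               # Geekbench dot-product number, worst case
print(iphone5_flops / d825_flops)   # ~4.6 million D825s per iPhone 5
```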
I can state confidently that there were not that many Burroughs D825s available in 1962, so there’s a hard lower bound at 1962. The top-end supercomputer at that point was probably the IBM 7090, at 0.1 MFLOPS.
We’d still have needed 3,200 of those. In 1960, there were in total about 6,000 computers (per IBM statistics – 4,000 of those were IBM machines), and very few in the 7090 range. Throwing in all other computers worldwide, let’s say we double that number for 1962 – we’re still way behind the iPhone.
Let’s move forward. The CDC 7600, in 1969, averaged 10 MFLOPS with hand-compiled code, and could peak at 35 MFLOPS.
Let’s go with the 10 MFLOPS: to equal a single iPhone 5, you’d need 32 of them. Putting aside the once-a-day (and sometimes 4-5x a day) multi-hour breakdowns, it’s within the realm of possibility that the CDCs in existence at that time could, on their own, equal or beat an iPhone 5 (assuming they were actually running). So all the computing in the world probably equalled or surpassed an iPhone 5 in straight compute ability by that point, making 1969 the top end of our range.
So without a lot of complicated research, we can narrow it down to somewhere in the seven-ish years between 1962 and 1969, closer to the end than the start. (As a note, the Cray-1 didn’t make the scene till 1975, with a performance of 80 MFLOPS, a quarter of an iPhone; in 1982, the Cray X-MP hit 800 MFLOPS, or 2.5 iPhones.)
And we haven’t talked about the GPUs, which are massively parallel processors of a kind that was uncommon until the 1980’s. Even the top-end graphics machines of the 1962-1969 era couldn’t equal the performance of the iPhone’s GPU given weeks or months to work on rendering, never mind that no output device of the era had the pixel density of the iPhone’s display or could respond in real time. But on the basis of raw compute power, somewhere after the Beatles and before the moon landing. Making a finer estimate, I’d guess somewhere in late 1966, so let’s call it somewhere around the last Gemini mission, or Doctor Who’s first regeneration.
On rereading the question, I saw that the asker wanted the numbers for an iPhone 4 instead of a 5. Given the amount of handwaving I’m doing anyway, I’d say we’re still talking about close to the same period, just a bit earlier. Without actual numbers for the computers in use at the time, which I don’t think I can dig up without much more work than I’m willing to do for free, it’s difficult to be any closer than a couple of years plus or minus. Definitely before Laugh-In (1968), definitely after the miniskirt (1964).
iPhone 5s update: the 5s is about 1.75 times faster than the 5, so that puts us at a rough 530 MFLOPS. The computing power estimate becomes much harder at this point, because minicomputers start showing up around 1969 (the PDP-11 and the Data General Nova). The Nova sold 50,000 units, equivalencing out to about 130 MFLOPS; total PDP-11s sold “during the 1970’s” was 170,000, for a total of 11 GFLOPS (based on the 11/40 as my guess as to the most-often-sold machine); divide that by ten and then take half of that for a rough estimate, and the PDP-11s by themselves equivalence to one 5s. So I’ll say that the moon landing was probably about the equivalence point for the 5s, but the numbers are much shakier than they are for the 4 or 5, so call it around the first message sent over ARPANet at the end of October 1969. (Side note: this means that the average small startup in Silicon Valley today – 20 or so people – is carrying roughly the equivalent power of all the PDP-11s sold during the 1970’s in their pockets and purses.)
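For what it’s worth, here is the PDP-11 chain of guesses spelled out as a quick sanity check (every input is an estimate from the paragraph above):

```python
# PDP-11 back-of-the-envelope from the paragraph above.
pdp11_40_flops = 65e3                  # rough per-machine figure implied by the 11 GFLOPS total
units_sold = 170_000                   # PDP-11s sold during the 1970s
fleet = pdp11_40_flops * units_sold    # ~11 GFLOPS for the whole fleet
estimate = fleet / 10 / 2              # knock off a factor of ten, then halve it
print(fleet, estimate)                 # ~1.1e10 and ~5.5e8 -- about one ~530 MFLOPS iPhone 5s
```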
Past this, world computing power is too hard to track without a whole lot of research, so take this as the likely last point where I can feel comfortable making an estimate.