Life in the fast lane ...
Submarine Tracking (early 80s)
Oh yes... what an exciting start to a working career: working in Halifax, Nova Scotia with the Department of National Defence, tracking submarines with a mainframe. Not any old mainframe, but the seriously impressive Honeywell Multics, designed from the ground up by the computing pioneer Fernando Corbató, who later wrote a series of books and papers, one looking back over the first seven years of Multics' success. It had so many modern-day features for a 1980s computing platform.
There was just one issue... this beast cost millions ($$$), and the equipment to run it consumed rooms and rooms of space, not to mention electricity and cooling. At its peak there were probably 50-60 instances worldwide, the most famous being at the Pentagon (the urban myth being that it was dropped off in the car park, sshhhhh). I, amongst others, am listed in the Hall of Fame as a "Multician" for my contribution to Multics history. During that time, I tended to the mainframe, cast an eye over hundreds of register lights, flipped rows of switches, pored over crash dumps, and went aboard the Oberon-class submarines (and the handbook is available if you've bought one!). What did I learn? PL/1 as a programming language, LISP for Emacs, working with the military, and that submarines need constant demagnetizing.
The last release of Honeywell Multics was MR12.8, running on Honeywell's (later Bull's) Level 68/80s and DPS8/Ms. There were CPUs, IOMs, FEPs and SCUs. It had those wonderful dual-spindle MSU501 mass-storage drives (1 megabyte! Oh, what fun head crashes were) and magnetic tape reels you could spin, behind vacuum doors. Another installation, the Royal Aircraft Establishment (in Farnborough, Hampshire, UK), interfaced a Multics with a 'green' (the colour chosen for the seats and panels) Cray 1-S/2000, cooled using Freon. These machines, designed by Seymour Cray, were known as supercomputers. That very Cray still exists today at the National Museum of Computing (and no, it doesn't work).
Multics ultimately saw its demise in October 2000, when the very last instance (the very machine I had worked on in Canada) was switched off. What's exciting is that in those early days, two other individuals who had worked on the Multics project wanted a cheaper and more accessible version. Ken Thompson and Dennis Ritchie created something called Unix.
The original name for Unix was UNICS, which stood for "Uniplexed Information and Computing Service". The urban legend is that the pronunciation wasn't so great, so "Unix" came about. Multics was "MULTiplexed Information and Computing Service".
Although variants of Unix are out there, including macOS, the other milestone occurred in the early 90s, when Linus Torvalds worked on an idea that became a collaborative effort to build an open-source version called Linux. It is now so prolific that all of the top 500 supercomputers are Linux-based, and some 76% of all servers run Linux.
The #1 supercomputer has a total of 8,699,904 combined CPU and GPU cores, an HPE Cray EX architecture that combines 3rd Gen AMD EPYC CPUs optimized for HPC and AI with AMD Instinct MI250X accelerators, and it relies on Cray's Slingshot-11 network for data transfer. On top of that, this machine has an impressive power-efficiency rating of 52.93 GFlops/Watt!
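To put that efficiency figure in perspective, GFlops/Watt is simply sustained floating-point throughput divided by power draw. A minimal sketch with illustrative round numbers (the Rmax and power values below are assumptions for the sake of the arithmetic, not official TOP500 figures):

```python
# Power efficiency = sustained throughput / power draw.
# Illustrative numbers only: roughly an exaflop-class machine
# drawing on the order of 22-23 megawatts (assumed values).
rmax_gflops = 1.2e9   # ~1.2 EFlops, expressed in GFlops (assumed)
power_watts = 22.7e6  # ~22.7 MW (assumed)

efficiency = rmax_gflops / power_watts  # GFlops per Watt
print(f"{efficiency:.1f} GFlops/Watt")  # in the ballpark of the quoted 52.93
```

Scaling a machine up only keeps this number flat if every added core brings its own proportional share of flops without a disproportionate share of power, which is why the metric is tracked separately from raw performance.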
Just to close out the Multics story... it spurred others to create Unix, which in turn inspired Linux, which then pioneered containerisation. And guess what: Multics is available for download as a Docker image, so you can spin it up and have the full experience I had. Enjoy.
Back to Basics
Shortly after my significant contribution to the Cold War, and after working across some additional Multics instances, it was time to move on... and get back to the metal. Yes, I had employment that involved batch jobs, the content of which was assembly language. The paradox of it all was that these were mainframes built on the same Honeywell technology, but instead of running Multics they were running GCOS. If ever there was a task that was character building, this was it. Having gone from high-level, interactive, early DevOps-pioneering PL/1 programming to writing raw CPU instructions and having them run overnight... Oh my goodness. If you want to share the experience, feel free to consult the manual.
Networks are where it's at (late 80s)
I probably spent all of a year with assembly language before I joined a start-up of 12 individuals. The entrepreneur behind it decided that networking was where it was at, the next revolution. Yes, there were instances of ARPANET and JANET, but they were a closed shop, largely for government and education. But hey, imagine the possibilities of moving data (funny cat videos) between computers! From a technical point of view, our visionary decided that in building products on top of network topologies, we had the following choices: token ring (IBM), token bus, and Ethernet (CSMA/CD). The idea of Token Ring seemed ridiculous (not-invented-here syndrome), and Ethernet... all those collisions! No, we were firmly backing Token Bus, the protocol with the deterministic outcome.
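The collision worry wasn't irrational. Classic Ethernet resolves collisions with truncated binary exponential backoff, so the delay before a retry is random and, in the worst case, unbounded, whereas a circulating token gives each station a predictable turn. A minimal sketch of the standard CSMA/CD backoff rule (textbook behaviour, not our product's code):

```python
import random

SLOT_BIT_TIMES = 512  # one contention slot on classic 10 Mb/s Ethernet

def backoff_slots(collision_count: int) -> int:
    """After the nth collision on a frame, wait a random number of
    contention slots in [0, 2**min(n, 10) - 1] -- truncated binary
    exponential backoff. After 16 attempts the frame is dropped."""
    if collision_count > 16:
        raise RuntimeError("frame dropped after 16 attempts")
    k = min(collision_count, 10)
    return random.randrange(2 ** k)
```

That randomness is exactly what made a token-passing bus, with its bounded access delay, look so attractive for deterministic applications.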
What I didn't realise until later was that we were in fact following the same thought process as Betamax vs VHS. Betamax was technically superior, but complicated; VHS was simple and lower quality. VHS won that battle through mass adoption. Ethernet had the same success.
We designed the hardware (Motorola 6809-based), we designed all the software (operating system, interfaces, network layer), and we had to squeeze everything into 32K bytes. A few years earlier I had been working with a computer the size of a building; now I was working with something the size of your hand. We named our operating system "DSApac", and mighty proud we were of its capabilities. I should give my dear friend Dave Hangartner a shoutout for his valuable contribution to all this. We layered everything on top of the "triple X" (X.3, X.25, X.29) protocols, on top of Token Bus. We wrote in C, cross-compiled to the 6809, and we wrote in assembler. We installed network cables way before CAT-this, -that and the other came about. The cabling was great thick 'yellow' cable (10BASE5 'ThickNet'), using 'vampire taps' (so cool) for connections.
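For a flavour of what that X.25 layering looks like on the wire, here is a hedged sketch of packing the basic X.25 packet-layer header: a 4-bit General Format Identifier, a 12-bit logical channel number, and a packet-type octet (0x0B is a Call Request under modulo-8 sequencing). This illustrates the standard header layout, not the actual DSApac source:

```python
def x25_header(gfi: int, lcn: int, packet_type: int) -> bytes:
    """Pack the 3-byte X.25 packet-layer header:
    byte 0: GFI in the high nibble + top 4 bits of the logical channel number
    byte 1: low 8 bits of the logical channel number
    byte 2: packet type identifier
    """
    assert 0 <= gfi <= 0xF and 0 <= lcn <= 0xFFF and 0 <= packet_type <= 0xFF
    return bytes([(gfi << 4) | (lcn >> 8), lcn & 0xFF, packet_type])

# A Call Request on logical channel 0x123, modulo-8 sequencing (GFI = 1):
header = x25_header(0x1, 0x123, 0x0B)
```

Fitting code that packed and unpacked structures like this, plus an operating system, into 32K is a good reminder of how tight those budgets were.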
The agile, scrum, DevOps approach involved: compiling the code in Bracknell, burning it onto PROM chips, driving the chips to our customer (thank you, Customs and Excise Department) in Shoeburyness, getting back in the car, driving to Dover, and driving back to Bracknell. CI/CD took 300 miles and 12 hours! Next day, the support ticket came in... bug... compile the code, get in the car... The irony was that what we needed was a network!
While all this was going on, someone had to spoil the party and bring Ethernet (and VHS) to the masses. After about three years, we downed tools, folded, and moved on. Market data, here I come!
Reflection
What was great about these experiences was having to understand how everything was put together. You couldn't use it until you had built it, programmed it, assembled it. In doing that, you became familiar with what was going on under the surface (back to submarines!). I still build electronics, I still code; I disassemble, optionally repair, and reassemble. There was lots of problem solving and solution finding. There were many lessons learnt along the way (the Betamax vs VHS lesson among them), i.e., just because it's technically superior doesn't mean people will buy it.
Next episode
In the next episode I'll be talking about how missile simulators contributed to market data systems.