Blog

  • WebWebXNG: history and goals

    [It seems like a good idea to lay out some history of this project and its goals, so I’m posting this before today’s progress update.]

    WebWebXNG Overview

    We’ve been concentrating on the nuts and bolts of the conversion, but maybe we should step back and look at the project as a whole, so we can get some perspective on where it started, where we are, and where we might go from here.

    “Ed’s Whiteboard” and Wikis

    Around 1997, we had just recently converted to Unix machines from IBM mainframes at the Goddard Space Flight Center, where I was then working as a contractor on a relatively large Perl project. Collaboration tooling was severely lacking. Bug trackers were very difficult to install or very expensive.

    Our process was mostly email, and one actual physical whiteboard in our project lead’s office had become the definitive source of truth where everything was recorded. There were numerous DO NOT ERASE signs on the door, on the whiteboard, next to it…it was definitely a single point of failure for the project, and it literally required you to go walk over to Ed’s office to get the project status, or to take notes on paper and transfer them to one’s own whiteboard/files/etc. If you needed to know something about the project, “Ed’s whiteboard” was where you found it.

    Our process was a weekly status meeting: we’d agree on who was doing what, Ed – and only Ed! – would write it on his whiteboard, and as the week went on we’d mail him our updates, which he’d then transfer to the board. It did give us a running status, a list of bugs, and assignments, but it was clear that we needed something better, more flexible, and less fragile than something that could be destroyed by an accidental brush against it or a well-meaning maintenance person.

    Bettering the whiteboard

    It was about this time that I stumbled on the original c2.com WikiWiki. The idea that a website could be implemented in such a way that it could extend itself was a revelation. (Most websites were pretty simple-minded in 1997; you’d code up a bunch of HTML by hand and deploy it to your web server, and that was it.) It took a few days for the penny to drop, but I realized that, hey, we could move Ed’s whiteboard to a website! Instead of writing things on a physical whiteboard and worrying that it might get erased, we could record everything on a virtual one, share it among all the team members, and have historical backups of the project state.

    We could discuss things in sort-of real time and have a record of the conversation to refer to later, and link the discussion to a bug, or a feature request, or…

    We could track bugs on one page, assignments on another, have room to discuss implementation, record the minutes of our status meetings, and just generally document things we needed to share amongst ourselves. It was Ed’s whiteboard plus, and best of all, we could do it for free!

    We did have a few extra requirements. The biggest one was that we needed to be able to provide different levels of access for reading and editing the site, depending on who you were. After some searching around, I found David McNicol’s WebWeb.

    WebWeb and its evolution to WebWebX

    WebWebX started off as an extended version of WebWeb, a derivative of Ward Cunningham’s original WikiWiki, written by David McNicol at the University of Strathclyde. WebWeb was a monolithic CGI script that added a few extra features to the original WikiWiki codebase, most notably access levels, which we needed more than anything else. We wanted to have some pages publicly readable, but only writable by project members, and some project tracking available for read by a subset of users and editable only by an even smaller subset — the dev team.

    WebWeb did have some limitations that made it not quite good enough out of the box. The biggest issue was the original data storage, which used Storable and Perl DBM files; pages were “frozen” with Storable, making them into strings. They were then stored in the DBM file under a key composed of the page name and a version number; this made operations like history, removing old versions, searching, etc. all relatively easy, since a DBM file looked like a hash to the Perl code.
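
    Roughly, the storage scheme looked something like the sketch below; the DBM flavor and the exact key format here are my reconstruction, not the original code.

    use strict;
    use warnings;
    use Fcntl;
    use SDBM_File;                      # whichever DBM flavor was local
    use Storable qw(freeze thaw);

    # The page archive is just a tied hash backed by a DBM file.
    tie my %archive, 'SDBM_File', 'pages', O_RDWR | O_CREAT, 0644
        or die "can't tie page archive: $!";

    # Save version 3 of a page: freeze the page data into a string and
    # store it under a "name,version" key.
    my $page = { title => 'HomePage', body => 'Welcome to the project.' };
    $archive{'HomePage,3'} = freeze($page);

    # Fetch it back.
    my $restored = thaw($archive{'HomePage,3'});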

    The biggest problem with this was that the size of a page was limited by the per-item storage capacity of the underlying DBM implementation, meaning that a page that got “too big” suddenly couldn’t be saved any more. This was a real usability issue, as it was very difficult to predict when you’d exceed the allowable page size — and worse, different Perl implementations on different machines might have radically different limitations on the allowable page size.

    I undertook a rewrite of WebWeb to make it more modular, easier to maintain, and more performant, most specifically focusing on the page size issue. It was clear that we’d need to fix that problem, but most of the rest of WebWeb was fine as it was.

    RCS and PageArchive

    I started out by “factoring out” (not really, because there was no test suite!) the DBM code into a separate class, which I dubbed PageArchive, creating a proper interface to the page management code. It was a reasonable choice to allow me to change the underlying implementation; I’d learned enough OO programming to have the idea of Liskov substitution, but none of us had really internalized the idea that writing a test suite for a module was a good idea yet.

    This complexified the mainline code a bit, as accessing the pages needed to use function calls instead of hash accesses, but it wasn’t too bad — the overall size of the project was fairly small, and the vast majority of the lines of code were inline HTML heredocs.
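
    The interface was roughly along these lines; the method names are illustrative, not necessarily the ones the real PageArchive ended up with.

    package PageArchive;
    use strict;
    use warnings;

    sub new {
        my ($class, %args) = @_;
        return bless { dir => $args{dir} }, $class;
    }

    # The mainline calls methods instead of poking at a tied hash, so the
    # storage backend can change without touching the callers.
    sub get      { my ($self, $name, $version) = @_; ... }
    sub put      { my ($self, $name, $version, $text) = @_; ... }
    sub versions { my ($self, $name) = @_; ... }
    sub purge    { my ($self, $name, $version) = @_; ... }

    1;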

    With the page storage code isolated in PageArchive, I could start replacing the mechanism it used for storage. One of the other tools we’d recently started using was RCS to do source code management, mostly because it was easy to start using and it was free with the operating system. We might have been able to use CVS, but it would have required a lot more coordination with our system administrator; with RCS, we just had to decide to start using it.

    From 25 years later, RCS looks hideously primitive. Each file is individually versioned, with a hidden file next to it containing the deltas for each version. For our code, even though it helped us a lot on a day-to-day basis, it made release management very difficult — a code freeze had to be imposed, and a tar file built up out of the frozen codebase. This then had to be deployed to a separate machine and manually tested against a list of features and bugs to ensure that we had a clean release. This all fell on the shoulders of our release manager, who was, fortunately for us, very meticulous! Not something that would work nowadays, but for 1997, it was a huge improvement.

    For storing pages in a wiki, on the other hand, RCS was great. We really were more interested in maintaining the history of each individual page rather than of the wiki as a whole anyway, so RCS was perfectly reasonable for this. I updated PageArchive to use RCS versioning to store stringified versions of the pages instead of storing them in DBM. Because I’d already abstracted the operations on the pages, it was easy to swap out one implementation for another. Pages could now be whatever size we wanted!
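
    Under the hood, storing and fetching a page revision with RCS boils down to a couple of shell-outs, something like the sketch below (a simplification that glosses over RCS’s locking, not the actual PageArchive code).

    # Store a page: write the text to its working file, then check it in.
    # "ci -u" records a new revision and leaves a read-only copy behind.
    sub store_page {
        my ($dir, $name, $text, $log) = @_;
        open my $fh, '>', "$dir/$name" or die "can't write $name: $!";
        print {$fh} $text;
        close $fh or die "close failed for $name: $!";
        system('ci', '-u', "-m$log", '-t-webwebx page', "$dir/$name") == 0
            or die "RCS check-in of $name failed";
    }

    # Fetch a specific revision: "co -p -rREV" prints it to stdout.
    sub fetch_page {
        my ($dir, $name, $rev) = @_;
        open my $fh, '-|', 'co', '-p', "-r$rev", "$dir/$name"
            or die "can't check out $name: $!";
        local $/;               # slurp the whole revision
        return <$fh>;
    }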

    Edit races and locking

    The wiki was a huge success. We were able to move everything from the physical whiteboard to “Ed’s Whiteboard” the website. Unfortunately, success came with a problem: we were updating the pages a lot, and very often we’d have an edit race:

    1. Alice starts editing a page.
    2. Bob starts editing the same page.
    3. Bob saves their changes.
    4. Alice saves their changes.
    5. Bob is confused because their edits aren’t there, but something else is. Did Alice remove their edits?

    This was recoverable, as the “previous” version of the page had Bob’s changes, but there had to be a phone call or an email: “Alice, did you change my edits to X? You didn’t? OK, thanks!”, and then Bob had to look back through the page archive for his edits, copy them out, and then re-edit the page. In the meantime, Carol has started yet another set of edits, and after Bob makes his changes, Carol saves…lather, rinse, repeat.

    On busy days, our “productivity tool” could end up being pretty unproductive for slow typists. We needed a way to serialize edits, and I came up with the idea of the edit lock. When a page was actively being edited, it showed up as locked (and by whom) to anyone else who accessed it, even for reading. This made it clear that editing was in progress and prevented edit races by simply not allowing simultaneous edits at all. Because the person editing the page was flagged, it was possible for a second person to call or email them to ask them to save and release the lock. This did have the problem that if someone started an edit and went to lunch or went home for the day, the page would stay locked until they came back. This was fixed by adding a “break edit lock” feature that turned off Alice’s lock and allowed Bob to edit the page. The wiki emailed Alice to let her know that the edit lock had been broken.
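
    Conceptually, the lock was nothing more than a marker recording who was editing. A sketch of the idea (not the actual File::LockDir code, and ignoring the check-then-create race a real implementation has to deal with):

    # Try to take the edit lock for a page; returns the current holder's
    # name if someone else already has it, or nothing on success.
    sub lock_page {
        my ($lockdir, $page, $who) = @_;
        my $lockfile = "$lockdir/$page.lock";
        if (-e $lockfile) {
            open my $fh, '<', $lockfile or die "can't read lock: $!";
            chomp(my $owner = <$fh>);
            return $owner;             # locked by someone else
        }
        open my $fh, '>', $lockfile or die "can't take lock: $!";
        print {$fh} "$who\n";
        close $fh;
        return;                        # lock acquired
    }

    # Breaking the lock is just removing the marker (and emailing the
    # previous holder).
    sub break_lock {
        my ($lockdir, $page) = @_;
        unlink "$lockdir/$page.lock";
    }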

    This only worked because we had a limited number of users editing the wiki; it wouldn’t have worked for something the size of Wikipedia, for instance, but our site only had about a half-dozen active users who all knew each other’s phone numbers and emails. If someone had a page busy for an extended time, we generally called to ask them to save so someone else could edit — lock breaking was infrequent, and mostly only used when someone had had a machine crash while they were editing.

    This was our primary project tracking tool up until around 2005, and it served us pretty well.

    Fast-forward 25 years…

    Tooling has improved mightily since 1997, or even 2005. GitHub, GitLab, JIRA…lots of options that integrate source control, bug tracking and even wikis for documentation. Every once in a while, though, a standalone wiki is handy to have. There are services that provide just wikis, such as Notion, but a wiki that provides both public and private access, for free, is hard to find.

    I’m one of the DJs and maintainers at RadioSpiral (radiospiral.net), and we have a lot of station management stuff to track: artists who have signed our agreement (we are completely non-profit, so artists who have released their music under copyright have to waive their ASCAP/BMI/etc. rates so we don’t personally go broke having to pay licensing fees); URLs and ports for the listening streams; configurations to allow us to broadcast on the site; and lots more.

    Some of this info is public, some very private — for instance, we don’t want to publish the credentials needed to stream audio to the station for just anybody, but having them at hand for our DJs is very useful. Putting all this in a wiki to make it easy to update and have it centrally located is a big plus, but that wiki needs what the old one at Goddard had: delineated access levels.

    High-level project goals

    • Get WebWebX working as a standalone application that doesn’t require extensive CGI configuration to work. The original WebWebX was tightly integrated with Apache, and required Apache configuration, adding of ScriptAlias, and a lot of other tedious work. Ideally, the new version should be as close to “install and run” as possible.
    • Modernize the codebase; most importantly, add tests. WebWebX worked incredibly well for code without tests, but I am no longer so sure of myself! In addition, the HTML “templating” is all inline print() statements, and I’d really prefer to do this better.
    • Convert the code to a contemporary stack with a minimum of requirements to install it. I’ve chosen Mojolicious because it’s quite self-contained. I did not choose Catalyst or Dancer; both of those are great, but they definitely require a lot more prerequisites to install.
    • Make this project something that’s generally useful for folks who just want a controlled-access wiki that’s easy to install, easy to deploy, and easy to manage.

    Ideally, I want something that can be deployed to Heroku or Digital Ocean by checking out the code, setting some environment variables, and running it. We’ll see how close I can come to this ideal with a Perl stack.

  • More codebase cleanup, perlcritic, and POD coverage

    Time to take a look at all the other stuff lying around in the codebase.

    • INSTALL is pretty tied up with the details of hooking a non-mod_perl CGI script into Apache. Some of it will be salvageable, but mostly not. The section on first-run may be useful, but there’s definitely going to need to be a way to set up a privileged account before the first run.
    • LICENSE will need to be updated with the original release date. I’ll go look at the ibiblio site for that.
    • Making a note to move THANKS into README.
    • USING is mostly okay. It’s a very quick intro to using a wiki. The section on logging in will need some editing, as it assumes the WikiWiki model of “anyone can add an account”.
    • The bin/insert-mail script was a hack specifically for our use as a bug tracker. We probably don’t need it, and there are significant security issues to address if we decide we do. Deleting this; we can always get it out of Git if we change our minds.
    • The cgi-bin directory can go away; the script there really just calls the code we moved to App::WebWebXNG.pm.
    • The docs directory contains a set of fixed HTML documents. They probably want a reformatting and possibly a rewriting, but we can leave them as they are right now.
    • Everything in old-lib has been moved elsewhere; that can go away.

    Back to the code!

    I revamped the dist.ini file to use [@Basic] and removed the duplicate actions. It now looks like this:

    [AutoPrereqs]
    [@Basic]
    [PruneCruft]
    [ExtraTests]
    [Test::Perl::Critic]
    [PodCoverageTests]
    [PodSyntaxTests]
    [@Git]

    The next step is to get everything tidied up and passing the perlcritic tests. To that end, I moved the start of main() in App::WebWebXNG to only encompass the actual old main program and added a leading underscore to _setup_kludge. That keeps us from having to document something we’ll be removing anyway, and un-nests the rest of the methods in that module to get rid of a huge number of perlcritic errors.

    I’ve also moved the old PasswordManager code to App::WebWebX::AuthManager; the old code manages an Apache htpasswd basic auth file, but the structure will do as an interface to some kind of more modern authentication management. (Notable in there: no password requirements! It was a simpler time.)
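
    For reference, checking a user against a classic crypt()-style htpasswd file only takes a few lines; this is a sketch of the general idea, not the actual AuthManager code (and Apache’s MD5 "$apr1$" entries would need extra handling).

    # Verify a user against an htpasswd-style file, where each line is
    # "user:crypted-password" and crypt() with the stored value as salt
    # reproduces the stored value on a match.
    sub check_password {
        my ($htpasswd_file, $user, $password) = @_;
        open my $fh, '<', $htpasswd_file or die "can't read $htpasswd_file: $!";
        while (my $line = <$fh>) {
            chomp $line;
            my ($name, $crypted) = split /:/, $line, 2;
            next unless defined $crypted and $name eq $user;
            return crypt($password, $crypted) eq $crypted;
        }
        return 0;    # no such user
    }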

    Next is to remove the code we definitely don’t need or want: the insert-mail script, the CGI wrapper, everything in old-lib, and the license file from GitHub (Dist::Zilla will generate one).

    File::LockDir fiddles with the symbol table, and I don’t want to do that any more. I’ll restructure it as a class. It’ll also need some tests, and I’ll have to start writing those and fixing up the code to pass.

    The perlcritic tests and POD coverage tests are running, and failing, so I’ll need to start fixing those. I started on this and actually realized that I hadn’t committed the tidied code before starting to work on it, so I created a branch, wound the history back with git reset, committed the tidied code, and then cherry-picked back to the current state. This let me keep the critic and POD changes separate.

    For the modules failing the POD tests, I’d actually added block comments that would work perfectly fine as POD when I originally wrote the code, so I just needed to do the mechanical task of converting them. There were a lot of them, but it was very easy editing, so I just went ahead and cleaned that up by hand.

    Critic fixes were primarily making all of the loop variables lexical and removing bareword filehandles. There are two methods that store and reload the global config using a string eval(); they’re marked ## no critic for now, but I want to think about a better setup for that. My current reflexes say “database”, but I’m trying to minimize dependencies. Let’s see how that goes and defer the decision.
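
    The markers are scoped to just the stringy-eval policy so they don’t silence anything else. Roughly like this (the surrounding sub is illustrative):

    sub load_config {
        my ($config_string) = @_;
        ## no critic (BuiltinFunctions::ProhibitStringyEval)
        my $config = eval $config_string;
        ## use critic
        die "bad config: $@" if $@;
        return $config;
    }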

    I started adding some tests: if PageArchive gets no directory, it should die.
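
    That first test is about as simple as they come; a sketch, with the exact constructor call being a guess at the eventual API:

    use strict;
    use warnings;
    use Test::More;
    use Test::Exception;

    use PageArchive::RCS;

    # A PageArchive with no directory has nowhere to keep pages, so
    # construction should die rather than limp along.
    dies_ok { PageArchive::RCS->new() } 'new() with no directory dies';

    done_testing();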

    At this point, we pass all of the tests that we have; the code is barely tested, but the POD and critic tests, which had a lot of errors, are all fixed, and the couple of validation tests I added are passing.

    That will do for this pass.

  • Clearing the decks: removing ancient Perlisms and stripping down

    The next task is getting App::WebWebXNG to build, let alone pass any tests.

    First up: I’ve changed the name of the page archive library, so I need to change the use statement, and fix up the new() call (making it direct invocation syntax while I’m at it).
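
    That’s just trading the old indirect-object notation for arrow syntax; for example (the constructor argument is illustrative):

    use PageArchive::RCS;

    my $data_dir = '/path/to/pages';    # illustrative

    # Old, indirect-object notation (easy for the parser to misread):
    #   my $archive = new PageArchive::RCS($data_dir);
    # New, direct method invocation:
    my $archive = PageArchive::RCS->new($data_dir);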

    The defined %hash syntax is no longer valid, so we need to fix that. The usages we have in this script are really “is there anything in this hash” checks, so keys will work to fix these.
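
    The mechanical fix looks like this (the hash name is a stand-in):

    my %page_index;

    # Older code could write this; current Perls reject it outright:
    #   if (defined %page_index) { ... }
    # "Is there anything in this hash?" is what was actually meant:
    if (keys %page_index) {
        # ...
    }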

    It uses a lot of globals. This results from repackaging a Perl 4 script and making as few changes as possible to get it running. The vast majority are defined in webwebx.pl, but there are a couple – no, sorry, a bunch – that come from the CGI script. We need to add a use vars for these. Found two on the first run, then after the defined %hash issues were fixed, there were a bunch more. Adding them as we go.
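
    The stopgap is just declaring them as package globals, something like this (the variable names here are placeholders, not the real list):

    # Globals the CGI wrapper used to create implicitly; declare them
    # so strict stops complaining about them.
    use vars qw($ScriptName $DataDir $StaticUrl %GlobalConfig);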

    “Replacement list is longer than search list”. Now there’s an interesting one! This is a tr/// that should have been an s///g.
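
    The warning shows up because tr/// works on character lists, not strings; if a string replacement was the intent, s///g is the right tool. Roughly (the actual pattern in the code is different):

    my $text = 'draft draft draft';

    # tr/// replaces characters, not strings, so this warns
    # ("Replacement list is longer than search list") and doesn't do
    # what was intended:
    #   $text =~ tr/draft/revision/;
    # What was actually meant:
    $text =~ s/draft/revision/g;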

    Okay, load test passes! It doesn’t actually do anything, but that much is working. Good.

    Let’s go look at the CGI script and see what it’s doing to initialize the globals we had to add; those are going to have to be set up somehow (for now, I think I’ll just add a setup_kludge function to do it). The variables we’re setting up here are mostly related to knowing where the script is hosted so that the internal link URLs are right, the location of the static files, and the location that stores all the data. Mojolicious should allow us to dispense with a lot of this and build the URLs as relative rather than absolute.

    Now for some serious cleaning up. Let’s set up Perl::Tidy and Perl::Critic. Perl::Tidy is pretty critical, because the indentation is all over the place, and it’s hard to read the code. And Perl::Critic is just good insurance. I’m using policies similar to those we used at Zip.

    Running those found a lot of things that needed neatening up…and several outright bugs!

    1. App::WebWebXNG had one perlcritic issue, a my with a trailing conditional. Not too bad for 25-year-old code.
    2. However, PageArchive::RCS had a lot of things to fix up.
      1. No use warnings. Okay, that one’s pretty easy.
      2. Tried to set the Rewound attribute for a directory; the code was after a return so it couldn’t be reached. When it was moved to be reachable, it was using a variable that didn’t exist! Needed to be using the instance variable for the object.
      3. All of the open() calls used the old two-argument syntax. It’s still supported, but it’s lousy practice, so I edited all of the open() calls in App::WebWebXNG and in PageArchive::RCS (see the sketch after this list).
      4. There were several places where an if (my $foo... referenced $foo outside of the block. This changed sometime between Perl 5.6 and 5.38 (which I’m testing this with), so those declarations had to be moved outside of the block.
      5. Finally, one method in PageArchive::RCS tried to use $self without creating it in scope. This would result in never getting error messages back, and may have hidden other bugs. We’ll see.
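
    A quick sketch of the open() and scoping fixes from items 3 and 4, with illustrative file and variable names:

    my $file = 'page.txt';    # illustrative

    # Two-argument open with a bareword filehandle...
    #   open FH, ">$file" or die "can't write $file: $!";
    # ...becomes three-argument open with a lexical handle:
    open my $out, '>', $file or die "can't write $file: $!";
    print {$out} "scratch page\n";
    close $out;

    # A lexical declared inside an if() condition is confined to that
    # block; declare it beforehand if it's needed afterwards:
    my $page = read_page($file);
    if ($page) {
        print $page;
    }

    sub read_page {
        my ($path) = @_;
        open my $in, '<', $path or return;
        local $/;             # slurp
        return <$in>;
    }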

    We’re back to all tests passing, perlcritic happy, and perltidy happy.  Created the repo on GitHub, pushed the work to date. Hang on, need to add a WIP marker…okay, got it.

    A good morning’s work!

  • Just barely not Perl 4: diving into the old WebWebX codebase

    Hoo boy.

    I’ve put the basics in place now: there’s an App::WebWebXNG.pm module, and I’ve moved the page management and file locking modules into /lib. The load tests for the existing library modules pass, but there aren’t any functional tests yet.

    Now, on to the old core script, webwebx.pl.

    I’ve imported it as close to as-is as possible into App::WebWebX.pm, and added a main if not caller() to run the old script as the main program.
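
    The modulino trick is a single line at the bottom of the module: run the old mainline only when the file is executed directly, and stay quiet when a test loads it.

    # Modulino: behave as a program only when run directly, so the test
    # suite can use() the module without kicking off the whole wiki.
    main() unless caller();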

    This script was just barely converted from Perl 4. There’s a giant pile of globals, and the majority of the “database” stuff it does is in DBM (if anyone still remembers that). I don’t even know if DBM still exists in more modern Perls!

    All of the HTML generation is from interpolated print statements. There’s no CSS (browsers didn’t even support such a thing at the time; it was Mosaic or nothing. Okay, maybe IE, but the number of Windows machines on base at GSFC that were being used by our user community was probably countable on one hand).

    This should be convertible to Mojo::Template relatively easily, which is good. And the command dispatch is driven off a hash of code references, so that should work fairly well too.
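
    The dispatch table is just a hash from command names to code references; the commands and handlers below are stand-ins for the real ones.

    # Command dispatch: look the handler up by name, default to viewing.
    my %dispatch = (
        view   => \&show_page,
        edit   => \&edit_page,
        search => \&search_pages,
    );

    my $action  = 'view';                     # would come from the request
    my $handler = $dispatch{$action} || $dispatch{view};
    $handler->('HomePage');

    sub show_page    { print "showing $_[0]\n" }
    sub edit_page    { print "editing $_[0]\n" }
    sub search_pages { print "searching for $_[0]\n" }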

    It’s not terrible, it’s just old. Well, off to see how much will work!

  • WebWebXNG: revisiting a 25-year-old project

    The past

    Back in 1998 or so, not long after I’d switched from system administrator to web developer, I stumbled across Ward Cunningham’s original WikiWiki. It was, at the time, a mind-blowing idea: a website that people could edit and extend themselves, without any programming at all. Simply sign in to the wiki, and start editing. Adding a specially-formatted word automatically generated a link to another page, either an existing one…or a brand new one, that you could start expanding on yourself.

    I can’t say that I conceived of Wikipedia when I saw this, but I absolutely zeroed in on how we could use it for several problems we had:

    • We didn’t have a bug tracker/project tracker for our project. With a wiki, we could just have a page that linked to all of the features we were working on and the bugs we were fixing.
    • We didn’t have a formal release process at all, or much in the way of source control. We started using RCS and noting the version number(s) of files that fixed bugs. We still had to build up a canonical checkout of everything, but we at least had some tracking that way.
    • We really wanted (and needed) an easy way to build a reference manual for our users that was easy for them to browse and search, and easy for us to keep up to date.

    We (okay, I) decided to try a wiki. The original WikiWiki didn’t have a number of features we really felt we needed for this to work: no authorized users and no access control being the big issues. I found WebWeb, originally written by (I will have to look at the WebWebX source!), which had part of, but not all of, what I needed, and with their permission, I created an extended version, rather unimaginatively called WebWebX.

     

    The present

    RadioSpiral has a lot of stuff that we need to have documented: how to connect to the streams, configs, where Spud lives and how to reboot him, policies, etc., and it’d be nice to have all that in a wiki instead of in documents (our last update of our docs was 5 years ago!). I remembered that we’d had a private Notion instance at ZipRecruiter — it wasn’t great, but it was usable, and private. So I signed up for Notion…and discovered that for a mere $720 a year, I could have the level of support that included a private wiki.

    Given that RadioSpiral’s income is in the red at all times — it’s 100% a labor of love, and a place for us to have fun while playing good music — that was just not a tenable solution. I didn’t want to run the old Zip wiki either — it was written in Haskell, and I didn’t feel like learning a whole new programming paradigm just to get a private wiki.

    Then I remembered: well, I have the old WebWebX source out there, and it did have access control. Maybe I could get it running again, and modernize it in the process. I’ve pulled the source from ibiblio and started working on the conversion. First things first, I’ve installed Dist::Zilla so I can build it out in some kind of reasonable fashion, and I’ve decided to base the whole thing on Mojolicious to try to make it as self-contained as possible.

    My goal is a private wiki that can be deployed with a dead minimum of effort. Which will probably entail a lot of effort to write and fix up, but that’s time better spent than trying to find a free alternative somewhere that I’ll have to accept compromises in, or yet another paid service that I’ll have to pay for myself.

    So far, I’ve created the initial README.md, initialized Dist::Zilla in the new App::WebWebXNG repo, and imported the old code into the repo to start work. I’m choosing to implement the main program as a modulino, to make it easy to test (did I mention that the old code has exactly zero tests?).

    Updates to follow!

  • Useful shortcut for cleaning up files

    The situation

    I’m in the process of moving from one computer to another. My old 2010 MacBook Pro is still running very well with a replacement SSD for its internal disk, but it’s stuck at Catalina and won’t be going any further, mostly because the firmware has a password which I’ve lost, and Apple can no longer unlock machines that old.

    So if I want to do development in a recent Xcode, and I very much do, I need to upgrade. One side-effect of my recent layoff from ZipRecruiter was that they let me keep my machine, so I now have a 2021 M1 Pro that will run Ventura. (It’s possible that I’ll never need another machine, given that Apple machines stay supported for ~7 years; in seven years I’ll be 73, and either dead or unlikely to be programming on a daily basis.)

    The problem here, though, is that the internal disks are considerably different sizes. The old machine’s internal disk was 2TB, because that was the biggest affordable SSD I could get at the time. The new machine’s disk is 0.5 TB, and a straight copy from the old machine to the new is not an option — the immutable law of storage is that if you have it, it fills up — so I need to clean up the stuff I’ve got on the internal and move it elsewhere.

    I’m using a mixed strategy for this:

    • Anything on the internal disk will be there because it has to be.
    • Anything I want to keep and be able to access, but that doesn’t need to be available right now is going on Dropbox. (I will have to back this up separately; I’m going to work out a script to back it up with Backblaze.)
    • Anything that I need quick access to will go on an external 2TB SSD, which I will back up with Backblaze.

    So far, I’ve done the following:

    1. Gotten a copy of my most recent backup of the 2 TB internal disk from Backblaze on a 4TB spinny disk. (Costs me the price of the spinny disk, but worth it.)
    2. Copied the failing spinny disk copy of my old backups to an external SSD. (In hindsight, it should have gone to the empty space on the spinny external; I may do that later).
    3. Started walking through the SSD copy of the old files to clear space on the SSD for the files I want from the Backblaze spinny disk.

    The actual meat of this post

    So fine, I’m cleaning up the SSD. The actual thing I want to note here is that I have a collection of ebooks on that external that I want to file into a folder on Dropbox. The problem is that a lot of them are probably already there, and the drag-it-over, get-the-duplicate-dialog, dismiss-it, trash-the-file process is tiresome on the hands. I discovered a significantly faster way, and I’m noting it here for anyone else who might be doing something similar.

    1. Open the source folder (for me, that’s the “books” folder on the SSD) and the destination (that’s a categorized and subfoldered “Books” folder on Dropbox).
    2. For each file in the source folder, use the Finder search field in the Dropbox window, limiting the search to just the “Books” folder on Dropbox, and start entering the name of the source book.
    3. If the book is there on Dropbox, you’ll find it — and if there are duplicates, you can clean them up right from the search results.
    4. If it’s not there, it can be dragged over to the appropriate folder in “Books” on Dropbox after clearing the search field.
    5. In either case, the book is now either found or filed, and can be removed from the source folder.

    This is way faster and easier on the hands than dragging and dropping the books one at a time.

  • Via Medium: A step-by-step intro to Go concurrency

    I recently wrote a blog post on the Zip tech blog about Go concurrency; it’s mostly an intro to how channels and select both work, and how to use them effectively.

  • Considering the Cloud

    After the LastPass revelations and reading Jason Scott’s FUCK THE CLOUD essay today, I started considering what I should be looking at in terms of data security this year.

    Not as “can this data be stolen”, but as “can this data be lost irretrievably — and how bad would it be if it was?”.

    I have already lost access to my Twitter account, but I don’t think there’s much there that I’d care about if I never saw it again.

    I still have the EMUSIC-L archives, even though the ibiblio site has been broken for years. They are incomplete; we lost some of the really good stuff, including Mike’s hot-off-the-experience posts about the first Team Metlay gathering. Still, okay.

    My VFXsd sequences and patches are backed up on slowly-deteriorating diskettes, and it’s only a matter of time before those go. I think I have sysex dumps of all of them; I can replace the diskette drive with a USB one, but the SD-1 is getting long in the tooth, and I’m not sure I really mind if the various didn’t-quite-ever-amount-to-anything sequences are lost before I record them.

    Photos. I have several dozen photo libraries in various states of cleaned-upness, and that is a project I should devote some time to actually catching up on, even if it’s simply to pull out the good ones and let whatever happens to the rest, happen.

    Facebook does allow you to dump everything off, and it’s probably time to grab another archive.

    Most of my music is up on the Internet Archive, which is likely to outlast me, and that’s OK. Should consider packaging more of the tracks on Soundcloud into albums.

    I’ve lost all of my archived data from the mainframe era, and I’m a bit sad about that; there was some really elegant stuff in there — elegant for OS/360 and MVS, I guess…

    I’ve shrunk my physical memorabilia footprint a lot; I have a few things I’d hate to lose, like my board from the 360/95 (did lose my mass store carts and my original FE manual somewhere along the way) and my pocket trumpet, but not as much as I thought before.

    So I think my work for this year will start with finishing up the cleanup of both of our LastPass vaults — that’s mostly done at this point, but making sure we both have a clean copy is a chore — and then finding a way to compile and then deduplicate all those photo libraries (and separate my photos from Shymala’s — we did and still do tend to take shots with each other’s equipment and then forget to split them up).

    I anticipate that job will take quite some time.

    Once that’s done, I’ll come back to the various places my music is stored and get everything out on a release on Bandcamp and the Archive, which will make it available and as safe as I can make it.

    I’m backing up my personal laptop with BackBlaze, which is probably safety enough for most of my data. Will need to review though and make sure it’s all getting backed up. Possibly spending a little to save the various backup disks in BackBlaze is a good idea as well…

    I’ll revisit this over the year, but writing about it helps clarify my thinking some. Back to the passwords.

  • Too long since I contributed to Perl

    I’ve put in two documentation PRs; funnily enough, I’ve changed email addresses, so now the infrastructure has forgotten that I wrote all the internal comments in the debugger, and I have to wait for someone to trigger the acceptance process.

    Should have done them earlier in the month…

  • I Only Wanted to Use My Time Capsule…

    A while back, I disconnected my Ethernet-connected Time Capsule because it was no longer working at all well for Time Machine backups. Somewhere in the March of Progress of updates, Time Machine became very sensitive to network drops. It may have been that way all the time, but we now have a lot more people with networks (I count 25 right now, as opposed to maybe 10 when I first moved here), and I think there’s simply more interference than Time Machine is able to handle.

    I have found that regular mass-storage seems to work okay — I have an AirPort Extreme with an external 2 TB disk attached, and that seems to work fine as an external backup and organization disk.

    So I figured, why not switch the Time Capsule over to just being a big dumb network filestore, and not try to use it for Time Machine anymore? And it was kind of in the way when it was hard-wired, so setting it up like the Extreme should be fine.

    It was not fine.

    I was able to hard-reset it okay, but the current AirPort Utility (both on the Mac and the iPad) would not attach it to a non-Apple network. It was simply no go. After a lot of thrashing around, I found that AirPort Utility 5.6.1 should be able to fix this, but I couldn’t get it to run on my Catalina machine (I didn’t even bother to try on Big Sur). I did dig out my 2008 MacBook Air running El Capitan; surely this would do it!

    No, it didn’t. El Cap did not want to run it. I finally found BristleConeIT’s launcher utility for 5.6, and was able to get it to run on El Cap. Unfortunately, the straightforward “extend the network” (“join the network” was oddly not there) wasn’t available. I gave up and tried configuring it with no network, figuring I’d try later to fix it.

    This was the key to success: AirPort Utility diagnosed the settings as bad, and then led me through fixing them — and the fix process allowed me to join whatever network I wanted! I pointed it to my (non-Apple) Xfinity router, and said go. It restarted, and when I went to “Network” in the Finder, there it was!

    I launched the current AirPort Utility, which allowed me to access it and erase the disk. I chose to zero it out, and I’m waiting for that to finish, but so far, it seems like it worked.