Programming


The next task is getting App::WebWebXNG to build at all, let alone pass any tests.

First up: I’ve changed the name of the page archive library, so I need to change the use statement, and fix up the new() call (making it direct invocation syntax while I’m at it).
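For the record, that change is from Perl’s old indirect-object notation to a plain method call; the constructor argument here is made up:

# Indirect-object syntax (what the Perl 4-era code did) is ambiguous
# to the parser:
#   my $archive = new PageArchive::RCS($dir);
# Direct invocation is unambiguous:
my $archive = PageArchive::RCS->new($dir);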

The defined %hash syntax is no longer valid, so we need to fix that. The usages we have in this script are really “is there anything in this hash” checks, so keys will work to fix these.
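That is, the checks were of this shape (the hash name is invented):

# 'defined %hash' is a fatal error in modern Perls:
#   do_something() if defined %pages;
# keys() in boolean context asks the same question safely:
do_something() if keys %pages;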

It uses a lot of globals, a result of repackaging a Perl 4 script and making as few changes as possible to get it running. The vast majority are defined in webwebx.pl, but there are a couple – no, sorry, a bunch – that come from the CGI script. We need to add a use vars for these. Found two on the first run, then after the defined %hash issues were fixed, there were a bunch more. Adding them as we go.
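The mechanics are simple enough; the variable names below are placeholders for the real globals from the CGI script:

# Declare the globals the CGI wrapper used to own, so strict is happy:
use vars qw($ScriptUrl $StaticPath $DataDir);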

“Replacement list is longer than search list”. There’s an interesting one! This is a tr that should be an s//g.
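A hypothetical example of the shape of that bug: tr/// maps characters one-for-one, so a multi-character replacement needs s///g instead.

# Wrong: warns 'Replacement list is longer than search list'.
#   $path =~ tr/ /%20/;
# Right: replace every space with the three-character sequence.
$path =~ s/ /%20/g;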

Okay, the load test passes! It doesn’t actually do anything, but that much is working. Good.

Let’s go look at the CGI script and see what it’s doing to initialize the globals we had to add; those are going to have to be set up somehow (for now, I think I’ll just add a setup_kludge function to do it). The variables we’re setting up here are mostly related to knowing where the script is hosted so that the internal link URLs are right, the location of the static files, and the location that stores all the data. Mojolicious should allow us to dispense with a lot of this and build the URLs as relative rather than absolute.
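A sketch of what setup_kludge might look like; the variable names and values are stand-ins, since the real ones come from the CGI wrapper:

# Temporary shim: hardcode what the CGI script used to compute,
# until Mojolicious takes over URL and path handling.
sub setup_kludge {
    $ScriptUrl  = 'http://localhost/webwebx';   # base for internal links
    $StaticPath = '/var/www/webwebx/static';    # static file location
    $DataDir    = '/var/www/webwebx/data';      # page archive storage
    return;
}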

Now for some serious cleaning up. Let’s set up Perl::Tidy and Perl::Critic. Perl::Tidy is pretty critical, because the indentation is all over the place, and it’s hard to read the code. And Perl::Critic is just good insurance. I’m using policies similar to those we used at Zip.
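Both tools read a config file from the project root; the settings below are examples, not the actual Zip policies:

# .perltidyrc
# 100-character lines, 4-space indents and continuation indents
-l=100
-i=4
-ci=4

# .perlcriticrc
# report violations of severity 3 ('harsh') and up
severity = 3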

Running those found a lot of things that needed neatening up…and several outright bugs!

  1. App::WebWebXNG had one perlcritic issue, a my with a trailing conditional (see the sketch after this list). Not too bad for 25-year-old code.
  2. However, PageArchive::RCS had a lot of things to fix up.
    1. No use warnings. Okay, that one’s pretty easy.
    2. Tried to set the Rewound attribute for a directory; the code was after a return so it couldn’t be reached. When it was moved to be reachable, it was using a variable that didn’t exist! Needed to be using the instance variable for the object.
    3. All of the open() calls used the old two-argument syntax. It’s still supported, but it’s lousy practice, so I edited all of the open() calls in App::WebWebXNG and in PageArchive::RCS.
    4. There were several places where an if(my $foo... referenced $foo outside of the block. This behavior changed sometime between Perl 5.6 and 5.38 (which I’m testing with), so those declarations had to be moved out of the condition.
    5. Finally, one method in PageArchive::RCS tried to use $self without creating it in scope. This would result in never getting error messages back, and may have hidden other bugs. We’ll see.
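For concreteness, here’s the shape of a couple of those fixes; the variable and file names are invented:

# The 'my with a trailing conditional': whether $line is declared at
# all depends on the condition, which is officially undefined behavior.
#   my $line = <$fh> if $want_line;
# Fix: declare unconditionally, assign conditionally.
my $line;
$line = <$fh> if $want_line;

# Two-argument open lets characters in $file change the open mode;
# three-argument open pins the mode down and uses a lexical handle.
#   open(FH, ">$file") or die "Can't write $file: $!";
open my $out, '>', $file or die "Can't write $file: $!";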

We’re back to all tests passing, perlcritic happy, and perltidy happy.  Created the repo on GitHub, pushed the work to date. Hang on, need to add a WIP marker…okay, got it.

A good morning’s work!

Hoo boy.

I’ve put the basics in place now: there’s an App::WebWebXNG.pm module, and I’ve moved the page management and file locking modules into lib/. The load tests for the existing library modules pass, but there aren’t any functional tests yet.

Now, on to the old core script, webwebx.pl.

I’ve imported it as close to as-is as possible into App::WebWebX.pm, and added a main if not caller() to run the old script as the main program.
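The modulino trick is just a guard at the bottom of the module: when the file is run directly, caller() returns false and the mainline executes; when it’s loaded with use, it stays quiet. Roughly:

package App::WebWebX;

sub main {
    # ... the old webwebx.pl mainline lives here ...
}

# Run as a program when invoked directly; stay quiet when use'd.
main() if not caller();

1;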

This script was just barely converted from Perl 4. There’s a giant pile of globals, and the majority of the “database” stuff it does is in DBM (if anyone still remembers that). I don’t even know if DBM still exists in more modern Perls!
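(It does: the *DBM_File modules still ship with core Perl, and AnyDBM_File picks whichever flavor is available.) The idiom is a hash tied to a file on disk; a minimal sketch, with the filename invented:

use Fcntl;
use AnyDBM_File;

# Tie a hash to an on-disk DBM database.
tie my %pages, 'AnyDBM_File', 'page_db', O_RDWR | O_CREAT, 0666
    or die "Can't tie page database: $!";

$pages{HomePage} = 'welcome text';    # written through to disk
untie %pages;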

All of the HTML generation is from interpolated print statements. There’s no CSS (browsers didn’t even support such a thing at the time; it was Mosaic or nothing. Okay, maybe IE, but the number of Windows machines on base at GSFC being used by our user community was probably countable on one hand).

This should be convertible to Mojo::Template relatively easily, which is good. And the command dispatch is driven off a hash of code references, so that should work fairly well too.
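The dispatch pattern in question, sketched with made-up command names and handlers:

# Command dispatch via a hash of code references: look up the
# requested action, fall back to a default, call what we found.
my %dispatch = (
    view   => \&show_page,
    edit   => \&edit_page,
    search => \&search_pages,
);

my $handler = $dispatch{$command} || \&show_page;
$handler->(@args);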

It’s not terrible, it’s just old. Well, off to see how much will work!

The past

Back in 1998 or so, not long after I’d switched from system administrator to web developer, I stumbled across Ward Cunningham’s original WikiWiki. It was, at the time, a mind-blowing idea: a website that people could edit and extend themselves, without any programming at all. Simply sign in to the wiki and start editing. Adding a specially-formatted word automatically generated a link to another page, either an existing one…or a brand new one that you could start expanding on yourself.

I can’t say that I conceived of Wikipedia when I saw this, but I absolutely zeroed in on how we could use it for several problems we had:

  • We didn’t have a bug tracker/project tracker for our project. With a wiki, we could just have a page that linked to all of the features we were working on and the bugs we were fixing.
  • We didn’t have a formal release process at all, or much in the way of source control. We started using RCS and noting the version number(s) of files that fixed bugs. We still had to build up a canonical checkout of everything, but we at least had some tracking that way.
  • We really wanted (and needed) a reference manual for our users that was easy for them to browse and search, and easy for us to keep up to date.

We (okay, I) decided to try a wiki. The original WikiWiki didn’t have a number of features we really felt we needed for this to work: no authorized users and no access control being the big issues. I found WebWeb, originally written by (I will have to look at the WebWebX source!), which had part, but not all, of what I needed, and with their permission, I created an extended version, rather unimaginatively called WebWebX.


The present

RadioSpiral has a lot of stuff that we need to have documented: how to connect to the streams, configs, where Spud lives and how to reboot him, policies, etc., and it’d be nice to have all that in a wiki instead of in documents (our last update of our docs was 5 years ago!). I remembered that we’d had a private Notion instance at ZipRecruiter — it wasn’t great, but it was usable, and private. So I signed up for Notion…and discovered that for a mere $720 a year, I could have the level of support that included a private wiki.

Given that RadioSpiral’s income is in the red at all times — it’s 100% a labor of love, and a place for us to have fun while playing good music — that was just not a tenable solution. I didn’t want to run the old Zip wiki either — it was written in Haskell, and I didn’t feel like learning a whole new programming paradigm just to get a private wiki.

Then I remembered: well, I have the old WebWebX source out there, and it did have access control. Maybe I could get it running again, and modernize it in the process. I’ve pulled the source from ibiblio and started working on the conversion. First things first: I’ve installed Dist::Zilla so I can build it out in some kind of reasonable fashion, and I’ve decided to base the whole thing on Mojolicious to try to make it as self-contained as possible.

My goal is a private wiki that can be deployed with a bare minimum of effort. That will probably entail a lot of effort to write and fix up, but it’s time better spent than trying to find a free alternative somewhere that I’d have to accept compromises in, or yet another paid service that I’d have to pay for myself.

So far, I’ve created the initial README.md, initialized Dist::Zilla in the new App::WebWebXNG repo, and imported the old code into the repo to start work. I’m choosing to implement the main program as a modulino, to make it easy to test (did I mention that the old code has exactly zero tests?).
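The Dist::Zilla side is minimal so far: a dist.ini along these lines (the metadata values here are placeholders, not the real ones):

name             = App-WebWebXNG
author           = Author Name <author@example.com>
license          = Perl_5
copyright_holder = Author Name

[@Basic]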

Updates to follow!

The Twenty Seventeen WordPress theme is a beaut. You can set up your home page to scroll any number of fixed pages, each with its own header image, each appearing as you scroll down the page. For an art site, like shymaladasonart.com, this is gorgeous and lets you show off sample images.

Problem is, on the iPad the images look horrendous, because the CSS makes them weirdly zoom in at a huge magnification, and the previously-lovely effect becomes a mess.

After a lot of poking about, I found that this bug has apparently been an issue for quite a while, and obviously still isn’t fixed. Fortunately, there is a workaround. You need to log in to wp-admin, select Appearance > Customize > Additional CSS, and add this:

@media screen and (min-device-width:768px) and (max-device-width: 1024px) {
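    /* iPad-sized screens only: stop pinning the panel image in place */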
    .background-fixed .panel-image { 
        background-attachment: unset;
        width: 100%;
    }
}

This targets iPad-sized screens (768–1024px device width) and turns off the pretty effect that scrolls the page content over the header image. It’s not quite as cool on the iPad, but at least now it doesn’t look bad.

obliquebot returns

Some time back, when beepboop.com was still around, I wrote a little Slack bot that listened for “oblique” or “strategy” in the channels it had been invited to, and popped out one of Eno’s Oblique Strategies when it heard its keywords or was addressed directly.

It worked fine up until the day that BeepBoop announced that they were going away, and eventually obliquebot stopped working.

This month, I decided that I would stop ignoring the “you have a security issue in your code” notifications from GitHub, and try catching obliquebot up with the new version of the SLAPP library that I’d used to get Spud, the RadioSpiral.net “who’s on and what’s playing” robot, back online.

I went through all the package upgrades and then copied the code from Spud over to the obliquebot checkout. The two bots are substantially the same; both listen to channels and respond, without doing any complex interaction. I needed to add code to load the strategies from a YAML file and to select and print one, but otherwise very little changed.

I also needed to update the authentication page to show the obliquebot icon instead of the RadioSpiral one, and to set the OAuth callback link to the one supplied by Slack.

Once I had all that in place, I spent a good two or three hours trying to figure out why I could build the code on Heroku but not get it to run. I finally figured out that I had physically turned off the dyno, and that it wasn’t going to do anything until I turned it back on again.

obliquebot is now running again at RadioSpiral and the Disquiet Junto Slack, and I’ve updated the README at the code’s GitHub page to outline all the steps one needs to take it and build one’s own simple request-response bot.

The Disquiet Junto is doing an alternate tunings prompt for week 0440 (very apropos!).

I’ve done several pieces before using Balinese slendro and pelog tunings, most notably Pemungkah, for which this site is named. I wanted to do something different this time: Terry Riley’s tuning from The Harp of New Albion, applied with Logic Pro’s project tuning option.

The original version was a retuning of a Bösendorfer grand to a modified 5-limit tuning:

However, Logic’s tuning feature needs two things before it can use a tuning:

  • Logic’s tuning needs to be based on C, not C#
  • The tuning has to be expressed as cents of detuning from the equal-tempered equivalent note.

This means doing quite a number of calculations to put the tuning in a format that Logic will like.
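The arithmetic per note is 1200 × log2(ratio) cents for the just interval, minus 100 cents for each equal-tempered semitone above the root; a 5/4 major third, for instance, is 386.31 cents against 400 equal-tempered, for a detuning of about -13.69 cents. A quick Perl helper to do the grunt work:

# Cents of detuning from equal temperament for a just ratio.
# $semitones is the note's chromatic position above the root (0-11).
sub detune_cents {
    my ($ratio, $semitones) = @_;
    my $just_cents = 1200 * log($ratio) / log(2);
    return $just_cents - 100 * $semitones;
}

printf "%+.2f\n", detune_cents( 5/4, 4 );    # major third: -13.69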


I came back to the RadioSpiral iOS app after some time away (we’re trying to dope out what’s going on with metadata from various broadcast setups appearing in the wrong positions on the “now playing” screen, and we need a new beta with the test streams enabled to try things), only to discover that Fastlane had gotten broken in a very unintuitive manner. Whenever I tried to use it, it took a crack at building things, then told me I needed to update the snapshotting Swift file.

Okay, so I do that, and the error persists. Tried a half-dozen suggestions from Stack Overflow. Error persists. I realized I was going to need to do some major surgery and eliminate all the variables if I was going to be able to make this work.

What finally fixed it was cleaning up multiple Ruby installs and getting down to just one known location, and then using Bundler to manage the Fastlane dependencies. The actual steps were:

  1. removing rvm
  2. removing rbenv
  3. brew install ruby to get one known Ruby install
  4. making the Homebrew Ruby my default ( export PATH=/usr/local/Cellar/ruby/2.7.0/bin:$PATH)
  5. rm -rf fastlane to clear out any assumptions
  6. rm Gemfile* to clean up any assumptions by the current, broken Fastlane
  7. creating a fresh Gemfile with fastlane in it and running bundle install (not gem install!) to get a clean copy, limiting the install to just my project
  8. bundle exec fastlane init to get things set up again

After all that, fastlane was back to working, albeit only via bundle exec, which in hindsight is actually smarter.

The actual amount of time spent trying to fix it before giving up and removing every Ruby in existence was ~2 hours, so take my advice: be absolutely sure which Ruby you are running, and don’t install fastlane into your Ruby install; use Bundler. Trying to fix it with things going who knows where…well, there’s always an applicable xkcd.

You are in a maze of Python installations, all different

We had a situation last week where someone had entered a broken <iframe> tag in a job description and our cleanup code didn’t properly remove it. This caused the text after the <iframe> to render as escaped HTML.

We needed to prefilter the HTML and just remove the <iframe>s. The most difficult part of this was figuring out what HTML::TreeBuilder was emitting and what I needed to do with it to do the cleanup. It was obvious that this would have to be recursive, since HTML is recursive (there could be nested, or multiple unclosed, iframes!), and several tries at it failed until I finally dumped out the data structure in the debugger and spotted that HTML::TreeBuilder was adding “implicit” nodes. These essentially help it do bookkeeping, but don’t contain anything that has to be re-examined to properly do the cleanup. Worse, the first node contains all the text for the current level, so recursing on them was leading me off into infinite depths, as I kept looking for iframes in the content of the leftmost node, finding them, and uselessly recursing again on the same HTML.

The other interesting twist is that once I dropped the implicit nodes with a grep, I still needed to handle the HTML in the non-implicit nodes two different ways: if a node had one or more iframe tags, I needed to use the content method to take the node apart and process the pieces. There might be one or more non-iframes in there, which end up getting returned untouched via as_HTML. If there are iframes, the recursion un-nests them and lets us clean up the individual subtrees.

Lastly, any text returned from content comes back as an array of strings, so I needed to check for that case and recurse on all the items in the array to be sure I’ve filtered everything properly. My initial case checks for the trivial “no input so no output”, and “not a reference” to handle the starting string.

We do end up doing multiple invocations of HTML::TreeBuilder on the text as we recurse, but we don’t recurse at all unless there’s an iframe, and it’s unusual to have more than one.

Here’s the code:

sub _filter_iframe_content {
  my($input) = @_;
  return '' unless $input;

  my $root;
  # We've received a string. Build the tree.
  if (!ref $input) {
    # Build a tree to process recursively.
    $root = HTML::TreeBuilder->new_from_content($input);
    # There are no iframe tags, so we're done with this segment of the HTML.
    return $input unless $root->look_down(_tag=>'iframe');
  } elsif (ref $input eq 'ARRAY') {
    # We got multiple strings from a content call; handle each one in order, and
    # return them, concatenated, to finish them up.
    return join '', map { _filter_iframe_content($_) } @$input;
  } else {
    # The input was a node, so make that the root of the (sub)tree we're processing.
    $root = $input;
  }

  # The 'implicit' nodes contain the wrapping HTML created by
  # TreeBuilder. Discard that.
  my @descendants = grep { ! $_->implicit } $root->descendants;

  # If there is not an iframe below the content of the node, return
  # it as HTML. Else recurse on the content to filter it.
  my @results;
  for my $node (@descendants) {
    # Is there an iframe in here?
    my $tree = HTML::TreeBuilder->new_from_content($node->as_HTML);
    if ($tree->look_down(_tag=>'iframe')) {
      # Yes. Recurse on the node, taking it apart.
      push @results, _filter_iframe_content($node->content);
    } else {
      # No, just return the whole thing as HTML, and we're done with this subtree.
      push @results, $node->as_HTML;
    }
  }
  return join '', @results;
}

The first sign of trouble is Google telling me that I’ve got multiple URLs going to the same page. That’s weird. How could that be happening?

So I go to my site. And I get a 500 error. All the links get 500 errors.

Uh oh.

Okay, okay, I know what causes this: radiospiral.net broke this way last week – Jetpack will happily update itself to a revision that isn’t supported by PHP 5.6 without checking (it needs PHP 7 at least once it upgrades itself).

So I go to my HostGator cPanel to upgrade PHP. Cool – I can upgrade it on all my sites with one click! I make PHP 7 the default, and check my site. Yep, all’s okay now. Job well done!

Hang on a second – shymaladasonphotography.com uses a custom plugin too and is hosted under this account, better check – and it’s rendering a PHP error.

AWESOME.

Switch PHP back to 5.6, log in to the photo site. Yeah, that was it. All right, I’ll upgrade the plugin, and no, I won’t, because they’ve dropped support for the version I purchased! All I need to do is upgrade to the Plus version…and at this point I decide that I need to do this myself.

So I go find a new theme, and install it. Now I need to reconstruct all the custom galleries. Go figure out where WordPress put the photos that were uploaded, since they’re not showing up in the media library as they should. Since they’re not there, I’ll have to get them there to be able to use them in standard widgets.

I turn on SSH for my site, download all the photos, edit the gallery pages, delete the old gallery widget, add a new image carousel widget, upload the photos again, rebuild the carousels, and set PHP back to 7.0 to unbreak my other sites again.

Photo site works, my site works, I think I’m done, and this has eaten an afternoon.

Considering strongly just coding everything by hand in HTML at this point.

Allow me to be the Nth person to complain about App Store Connect’s lack of transparency. I’m currently working on an app for radiospiral.net’s net radio station, and I’m doing my due diligence by getting it beta tested by internal testers before pushing it to the App Store. I’m using TestFlight to keep it as simple as possible (and because fastlane seems to work well with that setup).

I managed to get two testers in play, but I was trying to add a third today and I could not get the third person to show up as an internal tester because I kept missing a step. Here’s how it went, with my mental model in brackets:

  • Go to the users and groups page and add the new user. [okay, the new user’s available now].
  • Add them to the same groups as the other tester who I got working. [right, all set up the same…]
  • Added the app explicitly to the tester. […and they’ve got the app]
  • Mail went out to the new tester. [cool, the site thinks they should be a tester] [WRONG]
  • Tester installs TestFlight and taps the link on their device. Nothing appreciable happens. [Did I set them up wrong?]
  • Delete the user, add them again. [I’ll set them up again and double-check…yes, they match]
  • They tap again. Still nothing. [what? but…]
  • Go over to the TestFlight tab and look at the list of testers. Still not there. [I added them. Why are they not there?] [also wrong]

Much Googling and poking about got me nothing at all. Why is the user I added as an internal tester not there? They should be in the list.

I went back to the page and this time I saw the little blue plus in a circle. I have to add them here too! Clicked the +, and the new user was there, waiting to be added to the internal testers.

Sigh.

So now I have blogged this so I can remember the process, and hopefully someone else who’s flailing around trying to figure out why internal testers aren’t showing up on the testers list will find this.
