Blog

  • The Harp of New Albion’s Tuning for Logic

    The Disquiet Junto is doing an alternate tunings prompt for week 0440 (very apropos!).

    I’ve done several pieces before using Balinese slendro and pelog tuning, most notably Pemungkah, for which this site is named. I wanted to do something different this time, using Terry Riley’s tuning from The Harp of New Albion, using Logic Pro’s project tuning option.

    The original version was a retuning of a Bösendorfer grand to a modified 5-limit tuning:

    However, Logic’s tuning feature has two requirements:

    • The tuning needs to be based on C, not C#.
    • The tuning has to be expressed as cents of detuning from the equal-tempered equivalent of each note.

    This means doing quite a few calculations to put the tuning in a format that Logic will accept.
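    The conversion itself is mechanical, so it can be scripted. Here’s a sketch; the ratio table below is the standard 5-limit set and is only a stand-in — substitute the actual ratios from Riley’s chart:

    ```python
    import math

    # Hypothetical ratio table for illustration only -- substitute the
    # ratios from Riley's actual tuning chart. Entry 0 is the tuning's
    # root (C# in the original score).
    RATIOS = [(1, 1), (16, 15), (9, 8), (6, 5), (5, 4), (4, 3),
              (64, 45), (3, 2), (8, 5), (5, 3), (16, 9), (15, 8)]

    def detune_cents(ratios):
        """Cents of detuning of each scale step from its equal-tempered pitch."""
        out = []
        for i, (num, den) in enumerate(ratios):
            cents = 1200 * math.log2(num / den)  # interval above the root
            out.append(cents - 100 * i)          # ET step i sits at i * 100 cents
        return out

    def rotate_to_c(offsets, root_semitone=1):
        """Re-index the offsets so entry 0 describes C; C# sits 1 semitone up."""
        return offsets[-root_semitone:] + offsets[:-root_semitone]

    offsets = rotate_to_c(detune_cents(RATIOS))
    for name, off in zip(["C", "C#", "D", "D#", "E", "F",
                          "F#", "G", "G#", "A", "A#", "B"], offsets):
        print(f"{name:<2} {off:+7.2f} cents")
    ```

    The rotation step is what satisfies Logic’s “based on C” requirement: the root’s offsets stay the same, they’re just re-indexed so entry 0 is C instead of C#.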

    (more…)

  • Life in the fast lane / Surely makes you lose your mind

    I came back to the Radiospiral iOS app after some time away (we’re trying to dope out what’s going on with metadata from various broadcast setups appearing in the wrong positions on the “now playing” screen, and we need a new beta with the test streams enabled to try things), only to discover that Fastlane had gotten broken in a very unintuitive manner. Whenever I tried to use it, it took a crack at building things, then told me I needed to update the snapshotting Swift file.

    Okay, so I do that, and the error persists. Tried a half-dozen suggestions from Stack Overflow. Error persists. I realized I was going to need to do some major surgery and eliminate all the variables if I was going to be able to make this work.

    What finally fixed it was cleaning up multiple Ruby installs and getting down to just one known location, and then using Bundler to manage the Fastlane dependencies. The actual steps were:

    1. removing rvm
    2. removing rbenv
    3. brew install ruby to get one known Ruby install
    4. making the Homebrew Ruby my default ( export PATH=/usr/local/Cellar/ruby/2.7.0/bin:$PATH)
    5. rm -rf fastlane to clear out any assumptions
    6. rm Gemfile* to clean up any assumptions by the current, broken Fastlane
    7. bundle init and bundle add fastlane (not gem install!) to get a clean copy scoped to just my project
    8. bundle exec fastlane init to get things set up again

    After all that, fastlane was back to working, albeit only via bundle exec, which in hindsight is actually smarter.
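    With Bundler in charge, the whole setup boils down to a Gemfile at the project root pinning fastlane — a minimal sketch (fastlane init generates something similar, plus a Pluginfile if you use plugins):

    ```ruby
    # Gemfile -- minimal sketch; run `bundle install`, then invoke
    # everything through `bundle exec fastlane ...`
    source "https://rubygems.org"

    gem "fastlane"
    ```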

    The actual amount of time spent trying to fix it before giving up and removing every Ruby in existence was ~2 hours, so take my advice: be absolutely sure which Ruby you are running, and don’t install fastlane into your Ruby install; use Bundler. Trying to fix it with things going who knows where…well, there’s always an applicable xkcd.

    You are in a maze of Python installations, all different

  • Broken iframes and HTML::TreeBuilder

    We had a situation last week where someone had entered a broken <iframe> tag in a job description and our cleanup code didn’t properly remove it. This caused the text after the <iframe> to render as escaped HTML.

    We needed to prefilter the HTML and just remove the <iframe>s. The most difficult part was figuring out what HTML::TreeBuilder was emitting and what I needed to do with it to do the cleanup. It was obvious that this would have to be recursive, since HTML is recursive (there could be nested, or multiple unclosed, iframes!). Several tries at it failed until I finally dumped out the data structure in the debugger and spotted that HTML::TreeBuilder was adding “implicit” nodes. These essentially help it do bookkeeping, but don’t contain anything that has to be re-examined to do the cleanup properly. Worse, the first node contains all the text for the current level, so recursing on it was leading me off into infinite depths: I kept looking for iframes in the content of the leftmost node, finding them, and uselessly recursing again on the same HTML.

    The other interesting twist is that once I dropped the implicit nodes with a grep, I still needed to handle the HTML in the non-implicit nodes two different ways: if it had one or more iframe tags, then I needed to use the content method to take the node apart and process the pieces. There might be one or more non-iframes there, which end up getting returned untouched via as_HTML. If there are iframes, the recursion un-nests them and lets us clean up individual subtrees.

    Lastly, any text returned from content comes back as an array of strings, so I needed to check for that case and recurse on all the items in the array to be sure I’ve filtered everything properly. My initial case checks for the trivial “no input so no output”, and “not a reference” to handle the starting string.

    We do end up doing multiple invocations of HTML::TreeBuilder on the text as we recurse, but we don’t recurse at all unless there’s an iframe, and it’s unusual to have more than one.

    Here’s the code:

    sub _filter_iframe_content {
      my($input) = @_;
      return '' unless $input;
    
      my $root;
      # We've received a string. Build the tree.
      if (!ref $input) {
        # Build a tree to process recursively.
        $root = HTML::TreeBuilder->new_from_content($input);
        # There are no iframe tags, so we're done with this segment of the HTML.
        return $input unless $root->look_down(_tag=>'iframe');
      } elsif (ref $input eq 'ARRAY') {
        # We got multiple strings from a content call; handle each one in order, and
        # return them, concatenated, to finish them up.
        return join '', map { _filter_iframe_content($_) } @$input;
      } else {
        # The input was a node, so make that the root of the (sub)tree we're processing.
        $root = $input;
      }
    
      # The 'implicit' nodes contain the wrapping HTML created by
      # TreeBuilder. Discard that.
      my @descendants = grep { ! $_->implicit } $root->descendants;
    
      # If there is not an iframe below the content of the node, return
      # it as HTML. Else recurse on the content to filter it.
      my @results;
      for my $node (@descendants) {
        # Is there an iframe in here?
        my $tree = HTML::TreeBuilder->new_from_content($node->as_HTML);
        if ($tree->look_down(_tag=>'iframe')) {
          # Yes. Recurse on the node, taking it apart.
          push @results, _filter_iframe_content($node->content);
        } else {
          # No, just return the whole thing as HTML, and we're done with this subtree.
          push @results, $node->as_HTML;
        }
      }
      return join '', @results;
    }
    
  • Fixing a commit in the middle of a set

    Another tip for those who’ve needed to do this: let’s say you’ve created a feature branch and are adding tests to the code, and several commits further on you realize that one of your earlier test commits is incorrect. What do you do?

    If this is a plain old feature branch, you can just make the fixup commit and have two commits for that test. This is perfectly fine.

    If, however, you’re constructing a series of commits to be cherry-picked later, it’s better to keep all the related changes together.

    You can do this by running git log and capturing the output back to the incomplete commit. Save that output, then git reset --hard oldcommit, where oldcommit is the hash of the incomplete commit.

    The incomplete commit is now the current one, so you can make any fixes you need, git add them, and then git commit --amend to include them in the current (formerly incomplete) commit.

    Now go back to the log output and record two hashes: the commit just after the one you repaired (nextcommit) and the old HEAD commit (oldhead). Because the left endpoint of a revision range is excluded, git cherry-pick nextcommit^..oldhead will reapply nextcommit and all of the commits following it, and your branch will be back where it was, with the one incorrect commit repaired.
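    Here’s the whole dance as a throwaway, self-contained demo; the repo, file names, and commit messages are invented for illustration:

    ```shell
    # Build a scratch repo with four commits; pretend the second one
    # ("add b.t") contains an incorrect test.
    set -e
    repo=$(mktemp -d)
    cd "$repo"
    git init -q
    git config user.email you@example.com
    git config user.name "You"
    for f in a b c d; do
      printf 'test %s\n' "$f" > "$f.t"
      git add "$f.t"
      git commit -qm "add $f.t"
    done

    # Record the hashes we'll need before moving anything.
    oldhead=$(git rev-parse HEAD)
    badcommit=$(git log --reverse --format=%H | sed -n 2p)   # "add b.t"
    nextcommit=$(git log --reverse --format=%H | sed -n 3p)  # "add c.t"

    # Step back to the incomplete commit and repair it in place.
    git reset -q --hard "$badcommit"
    printf 'fixed test b\n' > b.t
    git add b.t
    git commit -q --amend --no-edit

    # Reapply everything that followed; the left endpoint of a range
    # is excluded, hence nextcommit^ (== the repaired commit's old hash).
    git cherry-pick -q "$nextcommit^..$oldhead"
    git log --oneline
    ```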

  • Avoiding gigantic changesets when refactoring

    This may seem obvious to many folks, but I think it’ll be very illuminating to those who haven’t had to do something like this before.

    The starting point and a naive solution

    I’m currently working on a cleanup in our codebase at ZipRecruiter, where we want to remove dependencies shared between two parts of the codebase into a common area. The part I’m working on now is well-defined, but touches a very large number of modules throughout the codebase.

    If I chose a naive way of doing the port, I’d do the following:

    1. Port the functions to be moved to a new module in the common area and ensure they’re all under test.
    2. Go through the rest of the codebase, find all references to the ported functions, and update the modules to use the new module instead of the old.
    3. Remove the functions from the old module now that all the other code has been ported.

    If the functions in the old module aren’t used much, then step 2 is fine as is, but this is a utility module that’s used in a lot of modules. This makes step 2 a gigantic risk point because the changeset for a large number of modules essentially has to be done all at once, whether we do it as one big commit or a lot of small ones that still have to be applied at the same time.

    Doing these all at once is going to be a code review and QA nightmare, so this isn’t going to fly.  Unfortunately, I started on the port and didn’t come to this understanding until I’d gotten pretty far into the changes. I needed to save the work I’d done, but reuse it in a way that was safe and not an unbearable load for my reviewers and QA.

    (more…)

  • It’s WordPress Scramble Week!

    The first sign of trouble is Google telling me that I’ve got multiple URLs going to the same page. That’s weird. How could that be happening?

    So I go to my site. And I get a 500 error. All the links get 500 errors.

    Uh oh.

    Okay, okay, I know what causes this: radiospiral.net broke this way last week – Jetpack will happily update itself to a revision that isn’t supported by PHP 5.6 without checking (it needs PHP 7 at least once it upgrades itself).

    So I go to my Hostgator CPanel to upgrade PHP.  Cool – I can upgrade it on all my sites with one click! I make PHP 7 the default, and check my site. Yep, all’s okay now. Job well done!

    Hang on a second – shymaladasonphotography.com uses a custom plugin too and is hosted under this account, better check – and it’s rendering a PHP error.

    AWESOME.

    Switch PHP back to 5.6, log in to the photo site. Yeah, that was it. All right, I’ll upgrade the plugin, and no, I won’t, because they’ve dropped support for the version I purchased! All I need to do is upgrade to the Plus version…and at this point I decide that I need to do this myself.

    So I go find a new theme, and install it. Now I need to reconstruct all the custom galleries, which means figuring out where WordPress put the uploaded photos, since they’re not showing up in the media library as they should. Since they aren’t there, I’ll have to re-upload them to be able to use them in standard widgets.

    I turn on SSH for my site, download all the photos, edit the gallery pages, delete the old gallery widget, add a new image carousel widget, upload the photos again, rebuild the carousels, and set PHP back to 7.0 to unbreak my other sites again.

    Photo site works, my site works, I think I’m done, and this has eaten an afternoon.

    Considering strongly just coding everything by hand in HTML at this point.

  • App Store Connect usability issues

    Allow me to be the Nth person to complain about App Store Connect’s lack of transparency. I’m currently working on an app for radiospiral.net’s net radio station, and I’m doing my proper diligence by getting it beta tested by internal testers before pushing it to the App Store. I’m using TestFlight to keep it as simple as possible (and because fastlane seems to work well with that setup).

    I managed to get two testers in play, but I was trying to add a third today and I could not get the third person to show up as an internal tester because I kept missing a step. Here’s how it went, with my mental model in brackets:

    • Go to the users and groups page and add the new user. [okay, the new user’s available now].
    • Add them to the same groups as the other tester who I got working. [right, all set up the same…]
    • Added the app explicitly to the tester. […and they’ve got the app]
    • Mail went out to the new tester. [cool, the site thinks they should be a tester] [WRONG]
    • Tester installs TestFlight and taps the link on their device. Nothing appreciable happens. [Did I set them up wrong?]
    • Delete the user, add them again. [I’ll set them up again and double-check…yes, they match]
    • They tap again. Still nothing. [what? but…]
    • Go over to the TestFlight tab and look at the list of testers. Still not there. [I added them. why are they not there?] [also wrong]

    Much Googling and poking about got me nothing at all. Why is the user I added as an internal tester not there? They should be in the list.

    I went back to the page and this time I saw the little blue plus in a circle. I have to add them here too! Clicked the +, and the new user was there, waiting to be added to the internal testers.

    Sigh.

    So now I have blogged this so I can remember the process, and hopefully someone else who’s flailing around trying to figure out why internal testers aren’t showing up on the testers list will find this.

  • Scraping Patchstorage

    I lost an important VCVRack patch a couple days before Mountain Skies 2019. It was based on a patch I’d gotten from patchstorage.com, but I couldn’t remember which patch it was. I tried paging through the patches on the infinite scroll, but it wasn’t helping me much. I knew the patch had Clocked and the Impromptu 16-step sequencer, but I couldn’t remember anything else about it after seriously altering it for my needs.

    I decided the only option was going to have to be automated if I was going to find the base patch again in time to recreate my performance patch. I hammered out the following short Perl script to download the patches:

    use strict;
    use warnings;
    use WWW::Mechanize;
    use WWW::Mechanize::TreeBuilder;
    
    $|++;
    
    use constant SLEEP_TIME => 2;
    
    my $base_url = "https://patchstorage.com/platform/vcv-rack/page/";
    my $mech = WWW::Mechanize->new(autocheck => 0);
    WWW::Mechanize::TreeBuilder->meta->apply($mech);
    
    my $seq = 1;
    my $working = 1;
    while ($working) {
      print "page $seq\n";
      $mech->get($base_url.$seq);
      sleep(SLEEP_TIME);
      my @patch_pages = $mech->look_down('_tag', 'a');
      # Keep only the hrefs that actually point to patch posts.
      my @patch_links = grep {
        defined $_ and
        !m[/upload\-a\-patch\/] and
        !m[/login/] and
        !m[/new\-tutorial/] and
        !m[/explore/] and
        !m[/registration/] and
        !m[/new\-question/] and
        !m[/platform/] and
        !m[/tag/] and
        !m[/author/] and
        !m[/wp\-content/] and
        !m[/category/] and
        !/\#$/ and
        !/\#respond/ and
        !/\#comments/ and
        !/mailto:/ and
        !/\/privacy\-policy/ and
        !/discord/ and
        !/https:\/\/vcvrack/ and
        !/javascript:/ and
        !/action=lostpassword/ and
        !/patchstorage.com\/$/ and
        $_ ne '' } map { $_->attr('href') } @patch_pages;
    
      # Deduplicate the links.
      my %links;
      @links{@patch_links} = ();
      @patch_links = keys %links;
      print scalar @patch_links, " links found\n";
    
      # A page with no links means we've paged past the last patch.
      unless (@patch_links) {
        $working = 0;
        next;
      }
    
      for my $link (@patch_links) {
        next unless $link;
        print $link;
        my @parts = split /\//, $link;
        my $patch_name = $parts[-1];
        if (-f "/Users/jmcmahon/Downloads/$patch_name") {
          print "...skipped\n";
          next;
        }
        print "\n";
        $mech->get($link);
        sleep(SLEEP_TIME);
        my @patches = $mech->look_down('id', "DownloadPatch");
        for my $patch (@patches) {
          my $p_link = $patch->attr('href');
          next unless $p_link;
          print "$patch_name...";
          $mech->get($p_link);
          sleep(SLEEP_TIME);
          open my $fh, ">", "/Users/jmcmahon/Downloads/$patch_name"
            or die "Can't open $patch_name: $!";
          print $fh $mech->content;
          close $fh;
          print "saved\n";
        }
      }
      $seq++;
    }
    

    Notable items here:

    • The infinite scroll is actually a chunk of Javascript wrapped around a standard WordPress page setup, so I can “page” back through the patches for Rack by incrementing the page number and pulling off the links to the actual posts with the patches in them.
    • That giant grep and map cleans up the links I get off the individual pages to just the ones that are actually links to patches.
    • I have a couple checks in there for “have I already downloaded this?” to allow me to restart the script if it dies partway through the process.
    • The script kills itself off once it gets a page with no links on it. I haven’t actually gotten that far yet, but I think it should work.

    Patchstorage folks: I apologize for scraping the site, but this is for my own use only; I’m not republishing. If I weren’t desperate to retrieve the patch for Friday, I would have just left it alone.

  • A Variation on “Clouds of Europa”

    I’m still learning the ins and outs of VCVRack; there are so many interesting modules available, and so many different possible directions to go in!

    I’m starting to lean toward something in the middle ground between Berlin School sequencing and complete wacked-out crazy, searching for an ambient location somewhere in that space. Jim Frye posted a video of his beautiful “Clouds of Europa” patch on the VCVRack forums yesterday, and I transcribed it from the video to see how it works. After some experimentation, I tweaked the settings of the macro oscillators and added a fourth one, put envelopes on them to add some more air, added some LFO action to vary the sound a bit, and lengthened the delay time to add some more texture to the bass.

    I will probably revisit this patch and change over the Caudal to the Turing Machine and see what I can do with that as the source of randomness to feed Riemann, but I’m very happy with the result so far.

  • A HOWTO for Test::Mock::LWP

    I was clearing out my CPAN RT queue today, and found a question in the tickets for Test::Mock::LWP from dcantrell:

    It’s not at all clear how to use this module. I have a module which (partly) wraps around LWP::UserAgent which I use to fetch data which my module then manipulates. Obviously I need to test that my module handles webby errors correctly, for instance that it correctly detects when the remote sites don’t respond; and I need to be able to feed known data to my module so I can test that it does those manipulations correctly.

    Test::Mock::LWP is the obvious candidate for faking up LWP::UserAgent, but I just can’t figure out how to use it. Please can you write a HOWTO and add it to the docs.

    I’m adding the HOWTO tonight, even though the question was asked 12 years ago (I really need to get to my RT queue more often). The module’s description as it stands is pretty opaque; this explanation should, I hope, make it much more clear.

    HOWTO use Test::Mock::LWP

    Test::Mock::LWP is designed to provide you a quick way to mock out LWP calls.

    Exported variables

    Test::Mock::LWP‘s interface is exposed via the variables it exports:

    • $Mock_ua – mocks LWP::UserAgent
    • $Mock_req / $Mock_request – mocks HTTP::Request
    • $Mock_resp / $Mock_response – mocks HTTP::Response
    All of these are actually Test::MockObject objects, so you call mock() on them to change how they operate dynamically. Here’s an example.

      Let’s say you wanted the next response to an LWP call to return the content foo and an HTTP status code of 201. You’d do this:

       
      BEGIN {
        # Load the mock modules *first*.
        use Test::Mock::LWP::UserAgent;
        use Test::Mock::HTTP::Response;
        use Test::Mock::HTTP::Request;
      }
      
      # Load the modules you'll use to actually do LWP operations.
      # These will automatically be mocked for you.
      use LWP::UserAgent;
      use HTTP::Response;
      use HTTP::Request;
      
      # Now set up the response you want to get back.
      $Mock_resp->mock( content => sub { 'foo' });
      $Mock_resp->mock( code    => sub { 201 });
      
      # Pretend we're making a request to a site.
      for (1..2) {
        my $req   = HTTP::Request->new(GET => 'http://nevergoeshere.com');
        my $agent = LWP::UserAgent->new;
        my $res   = $agent->simple_request($req);
        # The values you added to the mock are now there.
        printf("The site returned %d %s\n", $res->code, $res->content);
      }
      

      This will print

      The site returned 201 foo
      The site returned 201 foo
      

      Getting more than one value out of the mocks: repeated re-mocks

      Note that the values are constrained to what you’ve sent to the mocks. The mock here will simply keep returning 201 and foo for as many times as you call it. You’ll need to re-mock the content and code methods each time you want to change them.

      my $req   = HTTP::Request->new(GET => 'http://nevergoeshere.com');
      my $agent = LWP::UserAgent->new;
      
      $Mock_resp->mock( content => sub { 'foo' });
      $Mock_resp->mock( code    => sub { 201 });
      my $res   = $agent->simple_request($req);
      
      printf("The site returned %d %s\n", $res->code, $res->content);
      # 201 foo
      		
      $Mock_resp->mock( content => sub { 'bar' });
      $Mock_resp->mock( code    => sub { 400 });
      $res = $agent->simple_request($req);
      
      printf("The site returned %d %s\n", $res->code, $res->content);
      # 400 bar	
      

      Moving the logic into the mocks

      If you have a fixed sequence of items to return, just add them all to the mocks and have the mocks step through them. Here’s an example where we hand off two lists of values to the mocks:

      use strict;
      BEGIN {
        # Load the mock modules *first*.
        use Test::Mock::LWP::UserAgent;
        use Test::Mock::HTTP::Response;
        use Test::Mock::HTTP::Request;
      }
      
      # Load the modules you'll use to actually do LWP operations.
      # These will automatically be mocked for you.
      use LWP::UserAgent;
      use HTTP::Response;
      use HTTP::Request;
      
      my @contents = qw(foo bar baz);
      my @codes    = qw(404 400 200);
      
      # Start the counters one step before the first entry; the first
      # call wraps them around to index 0.
      my $code_counter = 2;
      my $content_counter = 2;
      
      my $content_sub = sub {
        $content_counter += 1;
        $content_counter %= 3;
        $contents[$content_counter];
      };
      
      my $code_sub = sub {
        $code_counter += 1;
        $code_counter %= 3;
        return $codes[$code_counter];
      };
          
      $Mock_resp->mock(content => $content_sub);
      $Mock_resp->mock(code    => $code_sub);
          
      my $req   = HTTP::Request->new(GET => 'http://nevergoeshere.com');
      my $agent = LWP::UserAgent->new;
          
      for (0..5) {
        my $res   = $agent->simple_request($req);
        printf("The site returned %d %s\n", $res->code, $res->content);
      }
      

      This will print

          The site returned 404 foo
          The site returned 400 bar
          The site returned 200 baz
          The site returned 404 foo
          The site returned 400 bar
          The site returned 200 baz
      

      Remember: the key is to make sure that the mock is ready to return the next item when you make the next request to the user agent.