Category: Programming

  • Scraping Patchstorage

    I lost an important VCVRack patch a couple days before Mountain Skies 2019. It was based on a patch I’d gotten from patchstorage.com, but I couldn’t remember which patch it was. I tried paging through the patches on the infinite scroll, but it wasn’t helping me much. I knew the patch had Clocked and the Impromptu 16-step sequencer, but I couldn’t remember anything else about it after seriously altering it for my needs.

    I decided automation was the only option if I was going to find the base patch again in time to recreate my performance patch. I hammered out the following short Perl script to download the patches:

    use strict;
    use warnings;
    use WWW::Mechanize;
    use WWW::Mechanize::TreeBuilder;

    use constant SLEEP_TIME => 2;

    $|++;

    my $base_url = "https://patchstorage.com/platform/vcv-rack/page/";
    my $mech = WWW::Mechanize->new(autocheck => 0);
    WWW::Mechanize::TreeBuilder->meta->apply($mech);

    my $seq = 1;
    my $working = 1;
    while ($working) {
      print "page $seq\n";
      $mech->get($base_url.$seq);
      sleep(SLEEP_TIME);
      my @patch_pages = $mech->look_down('_tag', 'a');
      my @patch_links = grep {
        defined $_ and
        $_ ne '' and
        !m[/upload\-a\-patch/] and
        !m[/login/] and
        !m[/new\-tutorial/] and
        !m[/explore/] and
        !m[/registration/] and
        !m[/new\-question/] and
        !m[/platform/] and
        !m[/tag/] and
        !m[/author/] and
        !m[/wp\-content/] and
        !m[/category/] and
        !/\#$/ and
        !/\#respond/ and
        !/\#comments/ and
        !/mailto:/ and
        !m[/privacy\-policy] and
        !/discord/ and
        !m[https://vcvrack] and
        !/javascript:/ and
        !/action=lostpassword/ and
        !m[patchstorage\.com/$]
      } map { $_->attr('href') } @patch_pages;

      # Deduplicate the links with a hash slice.
      my %links;
      @links{@patch_links} = ();
      @patch_links = keys %links;
      print scalar @patch_links, " links found\n";

      # A page with no links means we've paged past the last patch.
      $working = 0 unless @patch_links;

      for my $link (@patch_links) {
        next unless $link;
        print $link;
        my @parts = split /\//, $link;
        my $patch_name = $parts[-1];
        if (-f "/Users/jmcmahon/Downloads/$patch_name") {
          print "...skipped\n";
          next;
        }
        print "\n";
        $mech->get($link);
        sleep(SLEEP_TIME);
        my @patches = $mech->look_down('id', "DownloadPatch");
        for my $patch (@patches) {
          my $p_link = $patch->attr('href');
          next unless $p_link;
          print "$patch_name...";
          $mech->get($p_link);
          sleep(SLEEP_TIME);
          open my $fh, ">", "/Users/jmcmahon/Downloads/$patch_name"
            or die "Can't open $patch_name: $!";
          print $fh $mech->content;
          close $fh;
          print "saved\n";
        }
      }
      $seq++;
    }
    

    Notable items here:

    • The infinite scroll is actually a chunk of Javascript wrapped around a standard WordPress page setup, so I can “page” back through the patches for Rack by incrementing the page number and pulling off the links to the actual posts with the patches in them.
    • That giant grep and map cleans up the links I get off the individual pages to just the ones that are actually links to patches.
    • I have a couple checks in there for “have I already downloaded this?” to allow me to restart the script if it dies partway through the process.
    • The script kills itself off once it gets a page with no links on it. I haven’t actually gotten that far yet, but I think it should work.

    Patchstorage folks: I apologize for scraping the site, but this is for my own use only; I’m not republishing. If I weren’t desperate to retrieve the patch for Friday I would have just left it alone.

  • A HOWTO for Test::Mock::LWP

    I was clearing out my CPAN RT queue today, and found a question in the tickets for Test::Mock::LWP from dcantrell:

    It’s not at all clear how to use this module. I have a module which (partly) wraps around LWP::UserAgent which I use to fetch data which my module then manipulates. Obviously I need to test that my module handles webby errors correctly, for instance that it correctly detects when the remote sites don’t respond; and I need to be able to feed known data to my module so I can test that it does those manipulations correctly.

    Test::Mock::LWP is the obvious candidate for faking up LWP::UserAgent, but I just can’t figure out how to use it. Please can you write a HOWTO and add it to the docs.

    I’m adding the HOWTO tonight, even though the question was asked 12 years ago (I really need to get to my RT queue more often). The module’s description as it stands is pretty opaque; this explanation should, I hope, make it much more clear.

    HOWTO use Test::Mock::LWP

    Test::Mock::LWP is designed to provide you a quick way to mock out LWP calls.

    Exported variables

    Test::Mock::LWP’s interface is exposed via the variables it exports:

    • $Mock_ua – mocks LWP::UserAgent
    • $Mock_req / $Mock_request – mocks HTTP::Request
    • $Mock_resp / $Mock_response – mocks HTTP::Response
    All of these are actually Test::MockObject objects, so you call mock() on them to change how they operate dynamically. Here’s an example.

      Let’s say you wanted the next response to an LWP call to return the content foo and an HTTP status code of 201. You’d do this:

       
      BEGIN {
        # Load the mock modules *first*.
        use Test::Mock::LWP::UserAgent;
        use Test::Mock::HTTP::Response;
        use Test::Mock::HTTP::Request;
      }
      
      # Load the modules you'll use to actually do LWP operations.
      # These will automatically be mocked for you.
      use LWP::UserAgent;
      use HTTP::Response;
      use HTTP::Request;
      
      # Now set up the response you want to get back.
      $Mock_resp->mock( content => sub { 'foo' });
      $Mock_resp->mock( code    => sub { 201 });
      
      # Pretend we're making a request to a site.
      for (1..2) {
        my $req   = HTTP::Request->new(GET => 'http://nevergoeshere.com');
        my $agent = LWP::UserAgent->new;
        my $res   = $agent->simple_request($req);
        # The values you added to the mock are now there.
        printf("The site returned %d %s\n", $res->code, $res->content);
      }
      

      This will print

      The site returned 201 foo
      The site returned 201 foo
      

      Getting more than one value out of the mocks: repeated re-mocks

      Note that the values are constrained to what you’ve sent to the mocks. The mock here will simply keep returning 201 and foo for as many times as you call it. You’ll need to re-mock the content and code methods
      each time you want to change them.

      my $req   = HTTP::Request->new(GET => 'http://nevergoeshere.com');
      my $agent = LWP::UserAgent->new;
      
      $Mock_resp->mock( content => sub { 'foo' });
      $Mock_resp->mock( code    => sub { 201 });
      my $res   = $agent->simple_request($req);
      
      printf("The site returned %d %s\n", $res->code, $res->content);
      # The site returned 201 foo

      $Mock_resp->mock( content => sub { 'bar' });
      $Mock_resp->mock( code    => sub { 400 });
      $res = $agent->simple_request($req);

      printf("The site returned %d %s\n", $res->code, $res->content);
      # The site returned 400 bar
      

      Moving the logic into the mocks

      If you have a fixed sequence of items to return, just add them all to the mocks and have the mocks step through them. Here’s an example where we hand off two lists of values to the mocks:

      use strict;
      BEGIN {
        # Load the mock modules *first*.
        use Test::Mock::LWP::UserAgent;
        use Test::Mock::HTTP::Response;
        use Test::Mock::HTTP::Request;
      }
      
      # Load the modules you'll use to actually do LWP operations.
      # These will automatically be mocked for you.
      use LWP::UserAgent;
      use HTTP::Response;
      use HTTP::Request;
      
      my @contents = qw(foo bar baz);
      my @codes    = qw(404 400 200);
      
      # Start the counters one step back so the first call wraps to index 0.
      my $code_counter = 2;
      my $content_counter = 2;
      
      my $content_sub = sub {
        $content_counter += 1;
        $content_counter %= 3;
        $contents[$content_counter];
      };
      
      my $code_sub = sub {
        $code_counter += 1;
        $code_counter %= 3;
        return $codes[$code_counter];
      };
          
      $Mock_resp->mock(content => $content_sub);
      $Mock_resp->mock(code    => $code_sub);
          
      my $req   = HTTP::Request->new(GET => 'http://nevergoeshere.com');
      my $agent = LWP::UserAgent->new;
          
      for (0..5) {
        my $res   = $agent->simple_request($req);
        printf("The site returned %d %s\n", $res->code, $res->content);
      }
      

      This will print

          The site returned 404 foo
          The site returned 400 bar
          The site returned 200 baz
          The site returned 404 foo
          The site returned 400 bar
          The site returned 200 baz
      

      Remember: the key is making sure that the mock is ready to return the next item when you make the next request to the user agent.

  • Recovering my old Scape files

    My original iPad finally bit the dust in August, just before I could get a final good backup of it. Most of the stuff on it was already backed up elsewhere (GMail, Dropbox, iCloud), but Scape was the exception.

    Scape isn’t (at least not yet) able to back up its files to the cloud, so there wasn’t anyplace else to restore from. Fortunately, I had taken advantage of the fact that under iOS 5, the files in the app were still directly readable using Macroplant’s iExplorer, so I had grabbed all the raw Scape files and even the Scape internal resources. Sometime I’ll write up what I’ve figured out about Scape from those files…

    The Scape files themselves are just text files that tell Scape what to put on the screen and play, so they were no problem; they don’t include checksums or anything else that would make them hard to work with.


    Version:0.20
    Mood:7
    Date:20121113025954
    Separation:0.50
    HarmonicComplexity:0.50
    Mystery:0.50
    Title:Scape 117
    Steam Factory,0.50,0.50,1.0000
    Spirit Sine Dry,0.23,0.31,3.1529
    Spirit Sine Dry,0.40,0.36,3.4062
    Spirit Sine Dry,0.64,0.19,3.9375
    Spirit Sine Dry,0.55,0.49,1.0065
    Spirit Sine Dry,0.26,0.67,3.5039
    Spirit Sine Dry,0.76,0.54,3.1211
    Spirit Sine Dry,0.49,0.79,3.8789
    Spirit Sine Dry,0.46,0.17,3.9766
    Spirit Sine Dry,0.85,0.27,2.0732
    Spirit Sine Dry,0.90,0.53,1.5154
    Spirit Sine Dry,0.66,0.72,3.6680
    Spirit Sine Dry,0.15,0.55,2.2527
    Spirit Sine Dry,0.11,0.80,1.9320
    Spirit Sine Dry,0.32,0.88,4.1289
    Spirit Sine Dry,0.18,0.14,3.2779
    Spirit Sine Dry,0.81,0.11,3.0752
    Spirit Sine Dry,0.49,0.56,1.7528
    Spirit Sine Dry,0.82,0.80,3.3783
    Bass Pum,0.53,0.46,1.8761
    Thirds Organ Pulsar Rhythm,0.50,0.50,1.0000
    End

    I wrote to Peter Chilvers, who is a mensch, and asked if there was any way to just import these text files. He replied that there unfortunately wasn’t, but suggested that if I still had access to a device that had the scapes on it, I could use the share feature and mail them one by one to my new iPad, where I could tap them in Mail to open them in Scape and then save them.

    At first I thought I was seriously out of luck, but then I figured, why not share one from the new iPad and see what was in the mail? I did, and found it was just an attachment of the text file, with a few hints to iOS as to what app wanted to consume them:


    Content-Type: application/scape; name="Scape 10";x-apple-part-url=Scape 10ar; name="Scape 10ar.scape"
    Content-Disposition: inline; filename="Scape 10ar.scape"
    Content-Transfer-Encoding: base64

    Fab, so all I have to do is look through five or six folders containing bunches of scape files that may or may not be duplicates, build emails, and…this sounds like work. Time to write some scripts. First, I used this script to ferret through the directories, find the scapes, and bring them together.


    use strict;
    use warnings;
    use File::Find::Rule;

    my $finder = File::Find::Rule->new;
    my $scapes = $finder->or(
      $finder->new
             ->directory
             ->name('Scape.app')
             ->prune
             ->discard,
      $finder->new
             ->name('*_scape.txt')
    );

    my $seq = "a";
    for my $scape ($scapes->in('.')) {
      (my $base = $scape) =~ s/_scape\.txt//;

      my $title;
      open my $fh, "<", $scape or die "can't open $scape: $!";
      while (<$fh>) {
        chomp;
        next unless /Title:(.*)$/;
        $title = $1;
        last;
      }
      $title =~ s[/][\\/]g;
      if (-e "$title.scape") {
        $title = "$title$seq";
        $seq++;
        die if $seq gt "z";
      }
      system qq(mv "$scape" "$title.scape");
      system qq(mv "$base.jpg" "$title.jpg");
    }

    I decided it was easier to do a visual sort using the .jpg thumbnails to spot the duplicates and filter them out; I probably could have more easily done it by checksumming the files and eliminating all the duplicates, but I wanted to cull a bit as well.
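    For the record, the checksum route is only a few lines. Here’s a rough sketch in Python (the `.scape` glob and SHA-1 digest are my assumptions; any digest would do):

    ```python
    import hashlib
    from pathlib import Path

    def dedupe(directory):
        """Group files by SHA-1 digest; keep the first of each group."""
        seen = {}
        duplicates = []
        for path in sorted(Path(directory).glob("*.scape")):
            digest = hashlib.sha1(path.read_bytes()).hexdigest()
            if digest in seen:
                duplicates.append(path)  # same bytes as an earlier file
            else:
                seen[digest] = path
        return duplicates                # candidates for deletion
    ```

    This only catches byte-for-byte duplicates, though, which is exactly why the visual cull was worth doing anyway.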

    So now I’ve got these, and I need to get them to my iPad. Time for another script to build me the mail I need:

    #!/usr/bin/env perl

    =head1 NAME

    bulk_scapes.pl - recover scape files in bulk

    =head1 SYNOPSIS

      MAIL_USER=gmail.sendername@gmail.com \
      MAIL_PASSWORD='seekrit' \
      RECIPIENT='icloud_user@me.com' \
      bulk_scapes.pl

    =head1 DESCRIPTION

    C<bulk_scapes.pl> will collect up all the C<.scape> files in a directory
    and mail them to an iCloud user. That user can then open the mail on their
    iPad and tap the attachments to restore them to Scape.

    This script assumes you'll be using GMail to send the files; create an app
    password in your Google account to use this script to send the mail.

    =cut

    use strict;
    use warnings;
    use Email::Sender::Simple qw(sendmail);
    use Email::Sender::Transport::SMTP;
    use MIME::Entity;

    my $top = MIME::Entity->build(
      Type    => "multipart/mixed",
      From    => $ENV{MAIL_USER},
      To      => $ENV{RECIPIENT},
      Subject => "recovered scapes",
    );

    # Loop over files and attach. MIME type is 'application/scape'.
    my $n = 1;
    for my $file (glob "*.scape *.playlist") {
      my ($part, undef) = split /\./, $file;
      open my $fh, "<", $file or die "Can't open $file: $!\n";
      my $name;
      while (<$fh>) {
        next unless /Title/;
        chomp;
        (undef, $name) = split /:/;
        last;
      }
      close $fh;
      unless ($name) {
        $name = "Untitled $n";
        $n++;
      }
      $top->attach(
        Path => $file,
        Type => "application/scape; name=\"$name\";x-apple-part-url=$part",
      );
    }

    my $transport = Email::Sender::Transport::SMTP->new(
      host          => 'smtp.gmail.com',
      port          => 587,
      ssl           => 'starttls',
      sasl_username => $ENV{MAIL_USER},
      sasl_password => $ENV{MAIL_PASSWORD},
    );

    sendmail($top, { transport => $transport });

    I was able to receive this on my iPad, tap on the attachments, and have them open in Scape. Since there were a lot of these, it took several sessions over a week to get them all loaded, listened to, saved, and renamed using Scape’s edit function (the titles did not transfer, unfortunately).

    So now I have all my Scapes back, and I’m working through the program, trying to get to the point where I have all the objects enabled again. I haven’t played with it in a while, and I’m glad to be rediscovering what a gem this app is.

  • High Sierra Wifi Poor Performance Fix for 2010 MacBook Pro

    I’ve been working remotely at an AirBNB this week and was having a really frustrating time of it. The 2010-vintage MacBook Pro I have would connect to the Wifi, work for a while (sometimes a half-hour, sometimes not more than a minute) and then drop the connection. Shutting off wireless and reinstating it would restart the connection, but it would be unstable and drop again. The length of time it would stay connected was completely unpredictable; whether it would reconnect, and how long that would take, were just as random.

    I was getting speed test results of 0.15 MB/s up and 0.18 down. This was unusable, and I fell back on my hotspot for any sustained connection. Weirdly, I could connect fine with the Amazon Dot I’d brought along – flawlessly, in fact. What was going on?

    Late Friday evening, after a particularly frustrating session attempting to get Netflix to work (I really wanted to see Disenchantment — great show, by the way!), I started doing some research and came across an article that recommended reducing the MTU for the wireless device to 1453 (from the default somewhere in the 1500’s). Really? Okay…

    Magic. It has now been solid for several hours, including streaming video. If you’re having any trouble at all, I’d recommend at least trying it. The article shows you how to set up a separate “location” with the different MTU, so it’s simple to switch it on or off as you choose.

    Update: 12 hours later, I’m getting terrible performance again. A little more searching turned up a tutorial on readjusting the MTU to its optimum with ping. Reset your MTU size to the default, then, starting at 1500, try the following command (replacing mtusize with the actual number!):

    ping -D -s mtusize -c 2 google.com

    If you get “message too long” in the ping output, drop the MTU size a bit and try again. If you have no idea what MTU size is good, start at 1500, which will be too big, and go down by 100s until you start seeing “xxxx bytes from google.com:…” messages, which let you know your ping is getting through. You can then go up by 10s until you get “message too long” again, then back down by 1s until you find the maximum size that doesn’t get “message too long”. (One wrinkle: ping’s -s sets the ICMP payload size, which is 28 bytes smaller than the full packet, so the size that works for ping slightly understates the actual MTU.)
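    That coarse-to-fine search is mechanical enough to script. A sketch in Python, where probe(size) stands in for the ping above and returns True when a packet of that size gets through (probe, the start value, and the floor are all my assumptions):

    ```python
    def find_max_mtu(probe, start=1500, floor=100):
        """Find the largest size for which probe(size) succeeds."""
        size = start
        while size > floor and not probe(size):  # down by 100s until one passes
            size -= 100
        while probe(size + 10):                   # up by 10s while still passing
            size += 10
        while probe(size + 1):                    # up by 1s to the exact limit
            size += 1
        return size
    ```

    In real use, probe would shell out to something like the ping command shown above and check its output for “message too long”.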

    I had to reduce my MTU size further to 1425, and I’m near 10 megabits/second again.

  • Necessary steps to get the GitLab Rails template app running on OS X

    The GitLab template is a great way to get started on a new Rails app – it sets up most of what you need simply by checking out the code, but there are a few things you need to do if you’re starting fresh on a
    new Mac.

    The template app assumes you’ll be using Postgres for your database. I recommend sticking with this; if you grow your app past the proof-of-concept stage, you’re going to want it configured for a robust and responsive database. I love SQLite, but it’s not that much harder to go ahead and get Postgres running, so let’s do that instead of downgrading the template.

    If you have Homebrew installed, skip down to the brew install postgres step below. If you don’t, run the following command:

    /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

    Wait for a while for that to finish, and Homebrew will be installed. Once that’s done you can move on to installing and configuring Postgres:

    brew update
    brew install postgres
    initdb /usr/local/var/postgres -E utf8

    Now you need to configure your databases. The initdb above sets up your user ID as a root user to start, so you can now add the application’s user and databases. (Substitute an appropriate username for yourappdev).

    psql postgres
    create role yourappdev with login password '...some password here...';
    alter user yourappdev createdb;

    Now exit psql (^D will do) and log back in again as the yourappdev user to create the development and test databases for your Rails app. If you set a password, psql will prompt you for it. (If you forgot it, your root-level user created when you installed Postgres can still log in without a password at the moment.)

    psql postgres -U yourappdev
    create database yourapp_development;
    create database yourapp_test;

    Securing the production database

    You now need to create the database that you’ll run in production. Remember this is the one with the important user data in it that you don’t want to lose, so we want to create a whole separate username that will be the creator of the database and give it a good strong password that you record in your password manager. (If you don’t have a password manager, get one! It’s way safer than writing it down on a sticky and putting it in an envelope.)

    psql postgres
    create role yourapp with login password 'a very strong password please';
    alter user yourapp createdb;
    ^D
    psql postgres -U yourapp
    create database yourapp;

    You’re now ready to work on your Rails app. When you want to run the production version, you’ll need to set the DATABASE_URL environment variable like this:

    DATABASE_URL="postgres://yourapp:strongpassword@localhost/yourapp"

    Further deployment issues are beyond the scope of this post.

  • HTTPS upgrade completed

    That was actually pretty painless.

    Hostgator (love y’all!) now provides a per-site HTTPS cert for free, so I didn’t have to use Let’s Encrypt for it; I just needed to install the Really Simple SSL plugin, back up my database, and turn it on to get SSL working.

    Highly recommended if your site isn’t a complicated one.

  • VCVRack NotStraightLines plugin for Rack 0.6.0

    This zip file (NotStraightLines-0.6.0-mac) is a Mac version of the NotStraightLines plugin. To make it super clear, this is all Andrew Belt’s work; I just built a binary!

  • Followup on Go dependency Jenga

    I was finally able to build a working version of the glide.yaml file for my project and convert it to dep. The items of note:

    • To get openzipkin to work, I needed to specify
      [[constraint]]
        branch = "master"
        name = "github.com/opentracing/opentracing-go"
      
      [[override]]
        name = "github.com/openzipkin/zipkin-go-opentracing"
      
      
    • To get logrus to work, I had to change it to github.com/sirupsen/logrus in all my code and specify
      [[constraint]]
        name = "github.com/sirupsen/logrus"
        version = "^1.0.5"
      
      
  • Desperate times, desperate measures: dealing with a Go dependency Jenga tower


    TL;DR

    If you absolutely have to manually update your glide.lock file to add a specific SHA1 for a dependency and can’t do it right with glide update, edit glide.lock as needed, then:

    go get github.com/mattfarina/glide-hash
    glide hash

    This gets the correct checksum for your glide.lock file; update the hash: line at the top. You can now glide install without warnings.

    The detailed explanation

    Our microservices have a number of dependencies, one of which is logrus. Logrus is a great logging package, but was the trigger of a lot of issues last year when the repository was renamed from github.com/Sirupsen/logrus to github.com/sirupsen/logrus.

    That one capitalization change caused havoc in the Go community. If you don’t understand why, let’s talk a little about dependency management in Go. (If you do, skip down to “The detailed fix”).

    Go doesn’t have an official dependency management mechanism; when you build Go code, you pretty much expect to compile all the code it will need at once. Go does have a linker, but generally we really do just build a single static binary from source files, including the source of libraries too. The Go maintainers decided that it’s simpler to store one set of source code to be pulled in and compiled than to store compiled libraries for multiple architectures and figure out which one needs to be pulled in. The Go compiler is pretty fast, and maintaining multiple native binary versions of libraries is hard.

    Originally, all source management was done with go get, which would fetch code from a VCS endpoint and put it in the appropriate place in the GOPATH (essentially the location where “stuff related to but not part of this Go program” lives) so that it could be picked up during a compile. This is super simple, but fails in a number of ways:

    • A set of go get commands is a set of commands, and they have to be run before the program can be built.
    • The result may not be reproducible: if someone makes a new commit to a library, its HEAD changes, and telling go get to fetch a specific version of a library is harder to do.
    • go get is great at pulling a specific isolated library, but not good at managing transitive dependencies: e.g., we’ve installed library foo, but it needs library bar to perform some functions, and bar needs baz to do some of its work. We’d really like all of these figured out and installed at once, without having to remember what all the dependencies are or keep a script around to fetch them.
    • We’re potentially running on multiple architectures, and we don’t want to maintain multiple executable scripts just to fetch our dependencies.

    Go’s first cut at solving this was the vendor directory. This directory lives in the same tree as the Go source and can be committed to the VCS, so one could get the required sources into the vendor directory, then commit the “known-good” versions. This mostly works for the versioning problem, but it makes it easy for many slightly different versions of those libraries to end up spread across multiple source code repositories; keeping them synced up for fixes is difficult, and it doesn’t address the transitive issues at all. To fix this, the Go community built unofficial source management tools to handle versioned access to the vendor directory plus automated detection and resolution of transitive dependencies.

    The problem is that because the Go community is large, inventive, and active, we have a lot of them. We’ve already used two different tools: Godep and, currently, glide, and are probably going to switch to dep, which looks to eventually be the standard dependency management tool blessed by the Go core team. [Update: wrong again. go mod is the current official winner.]

    glide (our current tool, as noted) manages dependencies with two files: glide.yaml, which describes enough of the direct dependencies and their versions that all of the dependencies and their own transitive dependencies can be figured out, and glide.lock, which stores the results of this dependency resolution as specific VCS commits (SHA1 hashes in the case of Git), allowing us to quickly fetch exactly what we want when getting ready to compile the code.

    Like any other piece of software, the glide files have to be kept up to date, especially if there are dependencies on outside libraries (from GitHub and the like). That means periodically running glide update to refresh the dependencies in glide.lock that aren’t locked to a specific version (or range of versions) by glide.yaml. If one falls behind on this, or a change such as the Sirupsen/logrus to sirupsen/logrus rename happens, or you simply need to upgrade something to a new version, these files can end up in a state where glide install still works, because it simply downloads the revisions dictated by glide.lock without attempting dependency resolution again, but glide update doesn’t, because glide.yaml didn’t limit the possibilities enough and attempting to resolve the dependencies again fails.

    To fix this, we can do it one of two ways:

    1. The right way, which entails plodding through all the revisions until we’ve found a new set that works, fixing the glide.yaml file so that it defines that new set, and then using glide update to download them and rewrite glide.lock. This can be excruciatingly difficult, as it’s possible that the updated glide.yaml will no longer resolve, or will resolve the dependencies in ways that won’t actually build, and there will have to be many update/download/compile cycles to actually fix the issue.
    2. The wrong way, which is to muck around with glide.lock directly, adding or changing something without making sure that glide.yaml “compiles” to the updated glide.lock. This gets us back on track with code that builds and runs, but leaves us in the dangerous situation that glide update is now broken.

    The detailed fix

    If you naïvely go the wrong way and just make changes to the glide.lock file, glide tries to be a good citizen and warn you that you’ve done something you ought not to:

    [WARN] Lock file may be out of date. Hash check of YAML failed. You may need to run 'update'

    appears when you glide install.

    As noted, the problem is that if you run glide update, you’ll break everything because you didn’t fix glide.yaml first. And maybe you just don’t have time to find the right incantation to get glide.yaml fixed just now.

    So, you lie to glide, as follows.

    1. Add the dependency to glide.yaml.
      • Edit glide.yaml and add the dependency plus its version if it has one. (Use master if you want to track HEAD or a specific SHA1 if you want to pin it to that commit.)

        - package: github.com/jszwec/csvutil
          version: 1.0.0
    2. Add the dependency to glide.lock.
      • This one must be the SHA1; the easiest way to get this is to go to the repository where it lives and copy it down. I won’t go into detail here, but however it works in your VCS, you’ll need the full SHA1 or revision marker.

        - name: github.com/jszwec/csvutil
          version: a9cea83f97294039c58703c4fe1937e57ea5eefc
    3. If we stopped at this point, we’d get a warning from glide install that would recommend that we use glide update instead to install the required libraries. In our case, with a delicate web of dependencies between local libraries and Echo, openzipkin and Apache Thrift, and the two different versions of logrus, a glide update breaks one or more of these dependencies when we try it. To prevent someone else from spending way too much time trying to resolve the problem by juggling versions in the glide.yaml in the hope of creating a stable glide.lock, we need to fix the computed file hash at the top of the glide.lock file so that the warning is suppressed.

      This is a hack! The best option is probably to import all the SHA1s into the glide.yaml file as versions, ensure glide update works, and then gradually relax the constraints until glide update fails again, then back up one step.

      To calculate the hash, we can go get github.com/mattfarina/glide-hash, which adds a new glide hash subcommand that does exactly that and prints the result on the console.

      We install the subcommand plugin as noted, then cd to the codebase where we need to fix the glide.lock file. Once there, we simply issue glide hash, and the command prints the hash we need. Copy that, edit glide.lock, and replace the old hash on the first line with this new one.

      Warning!

      This is absolutely a stopgap solution. Sooner or later you’re going to need to update one or more of the libraries involved, and you really will want to do a glide update. Yes, you could keep updating this way, but it would be a lot better to solve the problem properly: go through all the dependencies, update the ones you need, and then make the necessary fixes so that your code and the library code are compatible again.

  • Postgres array_to_string() and array_agg() to decouple your interface

    Let’s say you’ve got a collection of job data for a set of users that looks like this, and you want to create a nice summary of it to be displayed, with a count of how many jobs are in each category for each user, all on one line.

     │ id │ user_id │ job │ status    │
     ├────┼─────────┼─────┼───────────┤
     │  1 │ 12      │ 1   │ Completed │
     │  2 │ 12      │ 2   │ Cancelled │
     │  3 │ 14      │ 3   │ Ready     │
     │  5 │ 14      │ 4   │ Completed │
     │  6 │ 14      │ 4   │ Completed │
     │  7 │ 14      │ 4   │ Cancelled │
     ...

    Here’s the report of summarized statuses of jobs for each user that you want.

    │ user_id │ summary                           │
    ├─────────┼───────────────────────────────────┤
    │ 12      │ 1 Cancelled, 1 Completed          │
    │ 14      │ 1 Cancelled, 2 Completed, 1 Ready │

    I’ll show you how it’s possible to provide this solely with a Postgres SELECT.
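    As a preview, here’s a sketch of the shape of the answer (the jobs table and column names are assumed from the sample data): count jobs per user and status in a subquery, then collapse each user’s counts into a single string with array_agg() and array_to_string().

    SELECT user_id,
           array_to_string(
             array_agg(n || ' ' || status ORDER BY status),
             ', '
           ) AS summary
      FROM (SELECT user_id, status, count(*) AS n
              FROM jobs
             GROUP BY user_id, status) AS per_status
     GROUP BY user_id
     ORDER BY user_id;

    The ORDER BY inside array_agg() is what keeps each summary in a stable, alphabetical order.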

    (more…)