Problem: two strings. Are they anagrams of each other?
Software engineer hat again: if these are arbitrary Unicode strings, then the right solution is to count the characters in the first string, binning the counts by character, and then subtract the counts of the characters in the second string. If the characters are ASCII, you can be clever and use the character value as an index into an array and do the same add/subtract algorithm…but someone will use a non-ASCII character sooner or later. And the failure mode of clever is…well.
Score: bottom 15% in time and space, but I’ll take that for the vastly reduced need to worry about weird input. If we remove the conversion to string and use bytes, then it improves to 30%. The real win doesn’t happen until the conversion to arrays. I initially had a delete to remove counts that went to 0 and a check for counts that went negative (chars in t not in s) but there wasn’t a significant speedup.
func isAnagram(s string, t string) bool {
    // easiest way is to map the character counts into a hash:
    // add counts for s, subtract counts for t. If the two strings
    // are anagrams, every count ends up at zero.
    if len(s) != len(t) {
        return false
    }
    charCounts := make(map[byte]int)
    for i := range s {
        charCounts[s[i]]++
    }
    for i := range t {
        charCounts[t[i]]--
    }
    for _, count := range charCounts {
        if count != 0 {
            return false
        }
    }
    return true
}
Giving up and using an array of 256 ints, indexed by the int value of the bytes, it’s much faster: 93% on time, 98% on space. Good enough. I suppose you could use this on Unicode input, since it’s just looking at raw bytes. Depends on whether you have a character that can be represented in more than one way in Unicode.
The syntax for creating arrays continues to be a problem. I’m going to standardize on assigning the type followed by {} to the variable.
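For reference, here's roughly what that array-backed version looks like, using the type-followed-by-{} initializer; this is a minimal sketch rather than my exact submission (isAnagramArray is just a placeholder name).

func isAnagramArray(s string, t string) bool {
    if len(s) != len(t) {
        return false
    }
    // One counter per possible byte value; add for s, subtract for t.
    counts := [256]int{}
    for i := 0; i < len(s); i++ {
        counts[s[i]]++
        counts[t[i]]--
    }
    for _, count := range counts {
        if count != 0 {
            return false
        }
    }
    return true
}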
I chose the “busy software engineer” approach: I know an algorithm that will work and is trivial to implement. Implement that.
To remind everyone what inverting a tree is: if the criterion for insertion is left branch < right, then the output tree should have left branch > right. The dead simple algorithm is top-down: save the left, assign the right to the left, assign the saved node to the right, recurse.
It’s clear, it’s simple, and fast to code. I literally had a working solution in 2 runs (forgot the null root case on the first run).
/**
 * Definition for a binary tree node.
 * type TreeNode struct {
 *     Val int
 *     Left *TreeNode
 *     Right *TreeNode
 * }
 */
func invertTree(root *TreeNode) *TreeNode {
    if root == nil {
        return nil
    }
    invertTree(root.Left)
    invertTree(root.Right)
    t := root.Left
    root.Left = root.Right
    root.Right = t
    return root
}
About as simple as you can possibly get. Scorewise, it’s fast: top 77%. Memorywise, it’s pretty poor: bottom 8%.
I’m betting a pointer crawl would use less memory but wouldn’t necessarily be that much faster. Let’s see.
First note, still writing append as if this were Scala. Sigh.
This try ends up in an infinite loop; I’m mismanaging the stack somehow. (See why letting Go do it is better?) I’m declaring this good enough and moving on. I expect I could write a version where I manage the stack myself, but this is why we invented higher-level languages.
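For reference, here's a minimal sketch of the kind of explicit-stack version I was going for; this is an illustration of the approach, not the code that looped forever.

func invertTreeIterative(root *TreeNode) *TreeNode {
    if root == nil {
        return nil
    }
    // Manage the "recursion" ourselves with a slice used as a stack.
    stack := []*TreeNode{root}
    for len(stack) > 0 {
        // Pop.
        node := stack[len(stack)-1]
        stack = stack[:len(stack)-1]
        // Swap the children.
        node.Left, node.Right = node.Right, node.Left
        // Push any non-nil children for later processing.
        if node.Left != nil {
            stack = append(stack, node.Left)
        }
        if node.Right != nil {
            stack = append(stack, node.Right)
        }
    }
    return root
}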
Unless our trees are always incredibly deep, the recursive version is the right way to do this. Even skipping the recursion for nil subtrees doesn't really make it faster.
On the other hand, I solved the initial recursive version in about 5 minutes, so I’m getting my feet back under me.
I’m perceiving a pattern in my failures here and it is 100% “did you absolutely understand the requirements”.
Detecting a palindromic string was the easy part: just put an index at the end and the beginning, and march them toward the middle. If they meet or pass each other, then we succeeded. If at any point the characters at the indexes don’t match, we fail.
The problematic part was the cleanup of the initial string. Lowercasing wasn’t bad, though I had to look up where the function lived (it’s strings.ToLower()). Filtering the input was harder. I had to go check Stack Overflow, and found the strings.Map-based code (the spaceMap function shown below), which is very nice and will go in my personal toolbox.
The strings.Map function is really nice for “do something for every rune in a string”, especially since the special -1 return value says “drop this character”. Can do just about any transliteration you want with that! The unicode.IsDigit function highlights my “you did not read the instructions” error: I assumed that it would be alpha only, and that was incorrect; it was alphanumeric input.
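As a quick illustration of that pattern (not part of the solution; shoutAndSquash is a made-up example, and it assumes the strings and unicode imports), here's strings.Map doing a different transliteration: drop whitespace and uppercase everything else.

func shoutAndSquash(s string) string {
    // Return -1 to drop a rune; return any other rune to substitute it.
    return strings.Map(func(r rune) rune {
        if unicode.IsSpace(r) {
            return -1
        }
        return unicode.ToUpper(r)
    }, s)
}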
I did spot a useful optimization: if the string is less than two characters long, by definition it’s a palindrome, so we can skip scanning it altogether.
Otherwise the algorithm I coded to scan was right the first time.
How’d I do comparatively? Meh. Runtime in the bottom 50%, memory in the top 60%. I think the lowercase followed by the cleanup is the culprit, and doing it all in one loop would be faster. I don’t think it’s as clear.
I tried a couple variations, and all of them are worse:
Reverse the string and compare it to itself
Inline the cleanup
This just may be something that Go kind of sucks at doing.
func isPalindrome(s string) bool {
    // lowercase, strip out all non-alphanumeric characters
    s = strings.ToLower(s)
    s = spaceMap(s)
    if len(s) < 2 {
        return true
    }
    begin := 0
    end := len(s) - 1
    for {
        if begin > end {
            return true
        }
        if begin == end {
            return true
        }
        if s[begin] != s[end] {
            return false
        }
        begin++
        end--
    }
}

func spaceMap(str string) string {
    return strings.Map(func(r rune) rune {
        if !unicode.IsDigit(r) && !unicode.IsLetter(r) {
            // -1 tells strings.Map to drop this rune.
            return -1
        }
        return r
    }, str)
}
You’re given an ordered array of prices, and are asked to find the maximum inter-day profit. Constraints: the buy must be before the sell (first price array index < second price array index), and profit is 0 if there is no positive delta.
I originally did a brute force solution that essentially came down to a double loop. Worked fine for small examples, but failed on time for long ones.
So I went and took a shower, and got half of it: if I find the minimum value, then no later buy point can possibly produce a bigger delta than buying at that minimum. However, it’s possible that there was a bigger delta preceding that minimum, and I got stuck there for a while, wondering about iterating backward, but couldn’t see a definite, simple algorithm.
I got a hint that I could use a sliding window, and that gave me the right solution.
Start at index 0 as the tentative minimum.
Walk a second index forward.
If the walking index finds a new minimum, move the minimum pointer.
Otherwise, calculate the delta between the current minimum and the walking index price.
If the delta is > the old max delta, save it.
My original insight would eliminate, as possible buy points, whatever part of the array followed the global minimum, but it would have missed possible local maximum deltas (e.g., [10, 100, 1, 5] has a true max delta of 90; scanning only after the global minimum would only have found a delta of 4). The sliding window finds that first delta of 90 and then keeps it, even though we find a new minimum at 1.
func maxProfit(prices []int) int {
    // Sliding window:
    // - track the index of the lowest price seen so far.
    // - for each later price, either it's a new minimum (move the
    //   minimum pointer), or we compute the profit against the current
    //   minimum and keep it if it beats the best so far.
    maxProfit := 0
    tentativeMinIndex := 0
    for i := tentativeMinIndex; i < len(prices); i++ {
        if prices[i] < prices[tentativeMinIndex] {
            tentativeMinIndex = i
            continue
        }
        // haven't moved the minimum, calc profit
        thisProfit := prices[i] - prices[tentativeMinIndex]
        if thisProfit > maxProfit {
            maxProfit = thisProfit
        }
    }
    return maxProfit
}
How’d I do? Bad on the initial O(N^2) iteration, but fab on the sliding window version: top 99% in speed, top 93% in memory.
Taking a shower helped, but I won’t be able to do that in an interview, alas. (Another good argument for working at home.)
Next is palindromes, and we’ll be back to fun with Go strings again.
Today’s challenge is Valid parens. And it reminded me of several of the things that I don’t particularly like in Go.
Problem statement boils down to “here’s a string of brackets: parens, square brackets, braces. Tell us if they match properly (true) or don’t (false)”. Algorithmically this is dead simple:
Start with an empty stack.
for each character in the string:
    if it's an open bracket, push it.
    if it's a close bracket:
        if the stack is empty, return false.
        if the top of the stack isn't the matching open, return false.
        pop the stack.
When we're done, return true if the stack is empty, false otherwise.
In Perl or Python, I’d have a stack pointer, and move it up and down an array as I pushed and popped characters by altering the stack pointer and assigning to the appropriate point in the array. If the stack pointer went below 0, I’ve gotten too many close parens, and I return false.
This approach doesn’t work well at all with Go, because Go arrays and slices Do Not Work Like That.
To push onto an existing array, we need to use a := append(a, new). Just assigning to an index outside of the slice’s current boundaries will panic.
We could still use the stack pointer to move backward, but then we’d have to have more complex code to decide whether we need to append or just move the pointer. (Side note: doing it that way would probably use less memory — the working version of my solution used more memory than 90% of the other solutions.) Instead, we just use slice notation to pop the stack, with a := a[:len(a)-1].
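Put together, the slice-as-stack idioms look something like this (a minimal standalone fragment, not lifted from the solution):

// Slice-as-stack idioms in Go.
stack := make([]string, 0)
stack = append(stack, "(")   // push
top := stack[len(stack)-1]   // peek at the top element
stack = stack[:len(stack)-1] // pop: reslice away the last element
fmt.Println(top, len(stack)) // prints: ( 0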
My original iterations with a real stack pointer failed miserably because I wasn’t properly tracking the length of the array, and was perpetually getting the stack pointer position wrong, causing the code to panic. It was only after completely discarding the stack pointer that I got it to finally work.
Items of note:
I remembered I’d need to use range() to iterate over the string, but forgot that I would have to string() the characters I was getting so they were strings instead of runes. I wasted a good chunk of time trying to mess with runes before I realized that I needed string() instead. Runes are considerably more second-class citizens.
Still running afoul of using make() properly. Finally figured out the syntax to create an empty string array, but it took some Googling to get it.
I decided to make the code less complex by creating a pairs map that mapped left brackets to right ones. This meant I could check pairs[left] == right to see whether I’d matched the left and right brackets. It also meant that if we ever added more bracket pairs in the future, it’d be easier to implement them.
I got fenceposted by a[len(a)-1] accessing the last element, and a[:len(a)-1] dropping the last element. I naively expected that since a[len(a)-1] accessed the last element, I’d want a[:len(a)-2] to drop that last element, but that takes one too many, running me into another fencepost. Yes, it’s good that it’s symmetric, and now I’ll remember, but I definitely had to look it up to figure out both of them.
Forgot the check for “did we close all the opens”. I probably would not have missed it if I was making my own test cases, but it wasn’t in the tests. Note to self: watch out for poor initial test cases. There is an opportunity to add more, I think?
So how’d I do overall? We saw I was in the bottom 10% in memory usage, boo, but runtime? “Beats 100.00% of users with Go”.
I’ll definitely take that as a win, but it’s definitely obvious I need to keep practicing and getting these idioms back under my belt.
func isValid(s string) bool {
    // We'll run through the string, doing the following:
    //  - if we see an open bracket, we stack it.
    //  - if we see a close bracket, we pop the stack. If the
    //    brackets match, we continue, otherwise we fail.
    //  - when we run out of brackets, the stack must be empty or we fail.
    stack := make([]string, 0)
    pairs := map[string]string{
        "(": ")",
        "[": "]",
        "{": "}",
    }
    for i := range s {
        char := string(s[i])
        _, ok := pairs[char]
        if ok {
            // left, push it
            stack = append(stack, char)
        } else {
            // right, stack empty?
            if len(stack) == 0 {
                fmt.Println("pop of empty stack")
                return false
            }
            // Not empty. match?
            if pairs[stack[len(stack)-1]] != char {
                fmt.Println("mismatch")
                // mismatched bracket, fail
                return false
            }
            // Match. Pop stack.
            stack = stack[:len(stack)-1]
        }
    }
    // Check for "did we close all the open brackets".
    return len(stack) == 0
}
That’s one for today, and I need to clean the house, so I’ll call it a day.
Well, these were fun, and definitely showed me I’m not ready to walk into an interview at the moment!
Twosum
Twosum is an easy challenge: you are given an array of numbers and a target value; your job is to find the only two numbers at different indexes in the array that sum to the target value and return those two indexes.
The brute-force strategy just repeatedly walks through the array, trying to find the right pair of numbers. This ends up being O(n^2) because we essentially run over the upper triangle of a square of the elements of the array. If, in the example below, we were looking for numbers that sum to 9, we’d run over the numbers four times, finally finding the pair on the fourth pass, and doing 4+3+2+1 = 10 checks. (We can save operations by precomputing the second term in the sum: subtract the first search value from the target sum, and just compare that against the value in the array at that index instead of summing the two.) A sketch of this brute-force pass follows the example below.
>1 2 3 4 5
1 >2 3 4 5
1 2 >3 4 5
1 2 3 >4 5
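Here’s roughly what that brute-force pass looks like with the precomputed second term; a sketch for comparison, not the code I submitted (twoSumBrute is just a placeholder name):

func twoSumBrute(nums []int, target int) []int {
    for i := 0; i < len(nums)-1; i++ {
        want := target - nums[i] // precompute the second term
        for j := i + 1; j < len(nums); j++ {
            if nums[j] == want {
                return []int{i, j}
            }
        }
    }
    return nil // no pair found
}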
But the old Perl adage “when someone says search, consider a hash” comes in handy here. We can precompute exactly the number we want, so if we can make the values the keys of a hash (and we can, as we know that there is only one pair that sums to the target value), then we can make the indexes the values.
We walk through the array twice:
The first iteration builds the hash by taking each value and inserting it as the key with the index as its value. And since we’re doing one iteration over the whole array anyway at this point, we can check each item as we insert it to see if it hits the target with the first item!
If we don’t luck out during the insert, then we iterate over items 2 to N, calculating the value we’d need to hit the target, and then doing a hash lookup to see if it’s there. The hash lookup is O(1), and each pass over the array (including the load) is O(n), so overall we’re O(n) instead of O(n^2), a pretty good reduction in runtime.
Things I’d forgotten in the last year of pretty much nothing but Scala:
The only loop structure in Go is the for loop.
var, not val. val is Scala, and as one tries for as much immutable data as possible in Scala, my fingers are trained to type val at every turn.
Go array initializer syntax. Square brackets, then the type, then the values in curly braces (see the snippet after this list).
I remembered I needed to make() a map, but used Scala syntax instead of Go syntax trying to specify the types.
Go does not consider [2]int to be the same as []int. Fair.
Gotta have the parens for every function call. Perl and Scala made me sloppy about that.
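A quick cheat sheet for those syntax points, as a fragment (illustrative only, not from the solution):

nums := []int{2, 7, 11, 15}    // slice literal: brackets, type, values in braces
fixed := [4]int{2, 7, 11, 15}  // [4]int is an array, a different type from []int
locations := make(map[int]int) // maps need make() (or a literal) before use
locations[7] = 1
fmt.Println(nums, fixed, locations)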
Things discovered:
Adding the “try to catch it in the load pass” made a significant speed difference when the first term was at index 0.
Putting the first index in the array initializer for the result array instead of assigning it vastly increased the speed of the search loop — I went from the top 75% to the top 95% with that one change.
The hash solution uses more memory; I was at the 15th percentile on memory, but I’ll definitely take being faster than 95% of the other solutions as better.
I missed the “must be different indexes” restriction on my first pass; fortunately the second test case was for 6 with [3, 4, 2], and I got a test fail for [0,0].
Here’s the final version. I do comment even when writing these because I can line out the code with comments and put the actual code itself in later. Writing down the assumptions helps clarify them as well (thanks loads, ADHD).
func twoSum(nums []int, target int) []int {
    // Given that
    // - there is only one solution, all entries must be unique
    // - therefore they can be used as keys to a hash
    // - this means we can make a single O(n) pass over the array for
    //   term 1 and look up term 2 in the hash in O(1), getting its
    //   index. No need for a linear search for it.

    // Fill the hash...and see if we can catch the solution
    // during this pass, since we have to do it anyway.
    locations := make(map[int]int)
    prematch := target - nums[0]
    for i := 0; i < len(nums); i++ {
        if i > 0 && nums[i] == prematch {
            return []int{0, i}
        }
        locations[nums[i]] = i
    }
    // scan the nums, looking for a match.
    // first match wins.
    for i := 0; i < len(nums); i++ {
        result := []int{i, -1}
        term2 := target - nums[i]
        j, ok := locations[term2]
        if ok {
            // Disqualify same-entry match. Missed this first time,
            // thank you test cases.
            if i == j {
                continue
            }
            // Found
            result[1] = j
            return result
        }
    }
    // This is a "should not get here"; meant to show me if I've totally
    // blown the loop.
    panic("did not find a solution")
}
Add two numbers
This one was recommended by leetcode as a “hot” problem, and it looked like fun, so I did it. It’s not in the Grind 75 list, but it’s a nice pointer manipulation problem, and I was a bit stale on pointers.
The function is passed two linked lists; each list node holds a digit from 0-9 and a pointer to the next node. The digits are presented in right-to-left (least-significant-first) order. The function is to take these two lists and return a new list that represents the sum of the two numbers. Of course, the lists can be different lengths; if they are, then the list that runs out first is treated as if it were returning zeroes for the purposes of the sum.
It was instantly clear that the challenge here would be maintaining the scan pointers for the two numbers and the result, and that trying to do bookkeeping for all of it in the loop would suck. So my approach was to construct a function to return a closure that would follow the rules for reading the numbers off the list:
If the scan pointer is null, return 0 and false
If the scan pointer is not null, return the node’s value and true, advancing the pointer to the next node.
This meant that the list management was dead simple:
first := iterator(L1)
second := iterator(L2)
fVal, fState := first()
sVal, sState := second()
Each call to the first and second closures returns the next value to use in the sum; when both are false, the sum is complete.
So all I had to manage in the loop was the scan pointer for the sum, and the carry from the addition operation: adding it in, clearing it, and recalculating it after the current operation.
Things learned/relearned:
Had to remember when I needed * and when I didn’t for variable types.
The first run segfaulted. I realized that I’d mismanaged the scan pointer; I needed to save the root node, and then scan forward while doing the operations.
Once I fixed that, I found out I wasn’t clearing the carry. Whoops. Easy to fix.
The closures worked perfectly.
The “end the loop” and “restart the loop” keywords are break and continue. Trivial, but I had to go look it up.
I did make one major mistake: I missed that the output was supposed to be another list, and started calculating the sum as an integer. This wasn’t too terribly off from the desired solution; I had figured out that I’d need the carry tracking, and I had a power variable that I was multiplying by 10 to switch columns in the output variable, but that probably wasted 5 or 10 minutes and might have made the difference between a pass and a fail in an interview.
It was pretty easy to switch to the list building, but lesson hammered home again: be sure you read the problem statement correctly and know what the output is. Confirming with the interviewer that I got the details right is probably a good idea in a timed problem.
/**
 * Definition for singly-linked list.
 * type ListNode struct {
 *     Val int
 *     Next *ListNode
 * }
 */
func addTwoNumbers(l1 *ListNode, l2 *ListNode) *ListNode {
    // The tricky bit with this one is going to be watching out
    // for the end of the list.
    // The dumbest and easiest way is to have a bool for each
    // list; when one list has no more entries, we set its bool
    // to false and return a zero value for it.
    // When both lists return a false value, we're done.
    first := iterator(l1)
    second := iterator(l2)
    var currentDigit *ListNode
    var root *ListNode
    carry := 0
    for {
        fVal, fState := first()
        sVal, sState := second()
        if !fState && !sState {
            // Run off the end of both. Stop loop, finalize sum.
            fmt.Println("Done")
            break
        }
        // At least one still returning a value. (The other returns 0
        // if there's no value.)
        // Sum the digits and the current carry; if > 9,
        // take mod 10 and set the carry to 1. (No two
        // digits can sum to > 18.)
        digitSum := fVal + sVal + carry
        carry = 0
        if digitSum > 9 {
            carry = 1
            digitSum = digitSum - 10
        }
        // Add space for a new digit, append it, continue.
        if currentDigit != nil {
            currentDigit.Next = &ListNode{digitSum, nil}
            currentDigit = currentDigit.Next
        } else {
            // Create and save root digit
            currentDigit = &ListNode{digitSum, nil}
            root = currentDigit
        }
    }
    if carry != 0 {
        // last addition had a carry we need to keep
        currentDigit.Next = &ListNode{carry, nil}
    }
    return root
}
func iterator(l *ListNode) func() (int, bool) {
    // Close over the pointer.
    p := l
    return func() (int, bool) {
        if p == nil {
            // Reached end. Keep returning 0 and false.
            return 0, false
        } else {
            // Capture next digit, advance pointer,
            // return next digit and true. If the new pointer
            // is nil, we'll just return 0 and the end-of-list signal
            // forever.
            v := p.Val
            p = p.Next
            return v, true
        }
    }
}
So how’d I do overall in comparison? Pretty fuckin’ good. I was in the top 95% on speed, and the top 91% in memory use. I suspect that managing all the bookkeeping in the loop might make it a tad faster (no call overhead for the closures), but at the cost of significantly more complexity. This solution is fast and small, and I’ll take it.
Conclusions
My Go is pretty good. I didn’t spend a huge amount of time chasing bugs, and had it in a couple tries. I have a good grasp of the concepts and know mostly how to do things.
My Go is fairly stale. I don’t remember how to say things!
I had to look up a lot of things that I should have just remembered, like len() for array length (not .len, that’s Scala!). I need this practice!
Most people do a novel for NaNoWriMo, and that’s a great thing.
I am not a great fiction writer; maybe I’d be better if I practiced, but right now, writing code makes more money, so I’m going to spend the month of November working through the Grind 75 leetcode practice set.
I would very much prefer a Perl job, but there just aren’t that many places now that really want someone whose primary is Perl. Python’s fine, but it’s mostly tied up in LLMs and AI at the moment, neither of which I actually find very interesting. (They seem to be more “build a model and hope it does the work you wanted” as opposed to “write the code to do the job you want done”, which I find much more satisfying.)
I haven’t done much with Rust yet, and I think it’d be a fun language to work in, but I don’t see a lot of demand for it yet. Scala is fun to write but I’d rather jump out a window than use the JVM toolchain again. (This also crosses off Java, Dart, and Flutter for me as well.) I have not in general found C++ to be that much fun, and Javascript is what it is. I can write it, I just don’t enjoy it.
So it comes down to two languages, really: Go and Swift. I enjoy coding in both (for different reasons), but at the moment I’m seeing more Go jobs than Swift ones, though that may just be my settings. Certainly when you need a Mac/iOS programmer, you need someone who at least understands Swift.
So I’ll probably end up trying it with both, and seeing how it goes. Onward!
A lot of progress on multiple fronts to get me to the point where the models are tested and working, and the registration controller works when manually tested. Its support functions have been unit tested as well.
Settings model
Working through the code, I found and fixed a couple of typos (SearchPage, not SeatchPage!), and then put all the model functions under test. The settings are properly initialized with sensible values for the fields that can be pre-initialized, and the load and save functions work. Overall I found the Mojo ORM to sometimes work very intuitively, and other times to be opaque enough that I just fell back to creating and running placeholder queries.
To prevent multiple sets of settings from being created, the model always sets the ID of the settings record to 1, guaranteeing that we have only one set of settings.
User model
We have the usual suite of methods for a user: do they exist, are they verified, can they log in. I decided to use Crypt::Passphrase as the hashing function and manage passwords myself instead of using the Mojo plugin. Since this is all inside the model, it’s not terrible if I decide to change it later.
Originally I thought that I should probably block multiple IDs from the same email, but I decided that I would allow it so that people could have IDs with multiple privilege sets. This becomes more necessary if the folks using the wiki start using access lists, especially if there are disjoint groups with people in more than one of them. Again, another decision that’s easy to change if I do change my mind.
The principal problem I had here was that I had set up the user table with an id primary key, but a lot of operations depend on using the username as a key. SQLite can do multiple primary keys, but the Mojo SQLite module doesn’t. It was easier to write a method that does a SELECT * FROM users where username = ? and returns the user than try to work around it.
The initial version didn’t have any password constraints; I added a function that does some very basic ones (> 10 chars, at least one upper, one lower, one digit, one special character). I used the POSIX character classes to try to start toward a more UTF-8-ish future.
The tests were getting crowded, and a bit too long, so I reorganized the tests by creating a subdirectory for each model and controller, and copying the appropriate tests into it. I then went through the tests, making multiple copies, stripping each one down to testing just one thing. I last added a new test to verify that password validation worked, including whether the user was verified or not (not verified == failed login).
Controllers
Next was refining the controllers.
I reworked the login controller to use the User model instead of doing everything itself, and set it aside for the moment.
The rest of this sprint went into the registration controller; I wanted to be able to add users natively before testing the login controller. The only tweak it needed to be able to just run was to add in an import of the Users model so it could indeed create users.
A quick run showed me that I’d need to shuffle around the template and work on the validation code; the fields were fine for the original demo, but there were ones I didn’t need (middle name), ones that were missing (username), and the “flash” error message from validation was at the bottom of the page. Swapped things around and everything looked good.
I then went in and disassembled the logic of the controller into input validation, username creation, and the actual messing about with the User model. I left the model manipulation inline, but thinking again about it, I think I want to isolate that in a separate method and unit test that as well.
Wrote the tests for both of those, and did some cleanup on the error messaging in the validation. It now gathers all the validation errors and constructs a properly-punctuated list (a rough sketch of the joining rules follows the list):
The item alone, capitalized, if there’s one field wrong.
"Item_one and item_two" (no serial comma) if there are two wrong.
"Item_one, Item_two,...Item_n_minus_one, and item_n" if there are more than two.
I decided to just grab the fields and drop them into a hash, then pass the hash to the utility functions; this actually led to a lot of impedance matching problems, and it might have been better to build a tiny class to carry the data instead. (If I were doing this in Go, I’d use a struct and be sure that I couldn’t assign to or use the wrong field names because the compiler would catch me.)
The username construction adds a very nice module, Text::Unidecode, which does a pretty decent job of translating Unicode characters into an ASCII-coded equivalent. I decided to do this to preserve the simplicity of the link checking code; later on, when we get to the page decoding, that code will look for words that match the wiki link syntax and automatically transform them into links to a page of the same name. Making the link syntax more complex would mean that it would be easier to accidentally create links; it’s possible to use `` before a linkname to prevent this, but having to do that a lot makes using the wiki less pleasurable.
The decoded strings sometimes contain spaces, so a s/\s(.)/uc($1)/eg was needed to collapse the space and capitalize the letter after it.
I tested this a little bit manually as well, and the controller seems to work fine. The page is a little stark, but can probably be made nicer with some CSS later.
Registration controller’s going to need integration tests to cover the rest, and the login controller has exactly one line that isn’t directly concerned with logging the user in, so it’ll need integration tests as well. Next up is those integration tests and then I’ll start on the actual page display code. Most of the rest of the functionality is inside special processing attached to pages.
Last month, while working in the Perl debugger, I came across this strange behavior:
DB<4> l 1.2
1.2: print "this is a placeholder";
Well, that doesn’t look right.
DB<5> l
2.2
3.2 sub lorem {
4.2: for(my $i = 1; $i < 100; $i++) {
5.2: print <<EOF;
6.2 Lorem
7.2 ipsum
8.2 dolor
9.2 sit
10.2 amet,
11.2 EOF
Okay, that really doesn’t look right.
DB<5> l 10.0.0.5
10.0.0.5 amet,
…I think there’s definitely something wrong here.
l $17$
Scalar found where operator expected (Missing operator before "$;"?) at (eval 153)[/Users/joemcmahon/perl5/perlbrew/perls/perl-5.38.0/lib/5.38.0/perl5db.pl:742] line 1, near "$17$;"
Oh, no. No no no.
So I dug in to the debugger code, and found the line with the range-matching regex:

/^(?:(-?[\d\$\.]+)(?:[-,]([\d\$\.]+))?)?/

This is the code that matches the arguments for the debugger’s l command for either l start,stop or l start-stop. The problem is that there are a lot of things that this matches that absolutely do not make any sense, like
l ......
l -.
l $
l -.$.
l -12
l 3.14
l 127.0.0.1
l 17$
Worse, as we saw above, some of them kind of work. Others…just don’t do anything, but are accepted.
What causes this?
First question: why does this work at all? The basic reason is that Perl is all about implicit conversions. Several of those expressions evaluate to 0 in a numeric context, so the debugger effectively executes a l 0. This doesn’t list anything, because the program source in the debugger’s magic source code array starts at line 1. But Perl arrays start at 0, and therefore index 0 in the magic source array exists; it’s just empty. So the debugger happily lists nothing.
For the floating point numbers and IP addresses, Perl captures the list command argument as a string, and then uses it in a loop that uses the “line number” as an index into that same magic source code array, incrementing it by 1 each time. The floating-point-looking strings convert to an actual float, which can be incremented by 1, and the index into the array is automatically truncated to an int when we try to use it. So the debugger finds the lines in the array, and prints the current value of the index it’s using as the line number, resulting in floating-point line numbers.
So why is this complicated regex here at all? Why aren’t we being a lot more strict about this?
Digging into the commit history
I sat down with the perl5 repo from Github and went spelunking. Going back through the commits shows that exact pattern’s been there for over 30 years – since Perl 3. It’s just been cargo-culted forward since then.
Let’s establish some history before we do anything else.
(Aside: this was an interesting exercise, and I learned a bit more about navigating a very long and complex commit history with git. The big thing to keep in mind is that you can navigate much more easily through time with git reset --hard than with git checkout. I still ended up hitting a breaking commit of the OS/2 Makefile in there that prevented me from jumping all the way back to HEAD, and…actually kind of breaks the repo. There’s probably an easy way to repair it, but I ended up deleting the repo altogether and recloning to fix it.)
Here’s a summary of the evolution of the debugger and its l command over time, all the way back to the original creation of the debugger. I got this by repeatedly using git blame to find the commit that added a change to the line with the regex, and then git reset --hard commit^ to jump back in time, repeating until we get so far back that there’s no debugger anymore.
Change history for the debugger l command:

a559c25918b (perl 1.0 patch 8)
This is the original perldb commit. Parse loop is a for(;;) starting at line 99.
l min-max lists a range of lines (lines 131-135); l line lists a single line (lines 137-140); l lists the whole program (lines 141-146).
Pattern is very simple, but doesn’t outlaw non-digits in the range.
Range-match regex: /^l (.*)[-,](.*)/

13281fa4f8 (perl 2.0 patch 1)
Still perldb at the top level of the source directory; parse loop is still for(;;), starting at line 100.
No changes in the l command support; lines have moved down 1. Pattern for the range is the same.
Range-match regex: same.

a687059cbaf (perl 3.0)
Moved to lib/perldb.pl. Command parse while() loop at line 64. Command parsing depends heavily on fallthrough to work.
Lines 145-157 handle l subroutine_name with @sub, falling through to the range code to do the command. Lines 166-168 handle bare l: set up l start-end and fall through. Lines 169-173 handle l start+number: set up the corresponding l start-end and fall through. Lines 174-168 actually do the l start-end.
First appearance of the complex match including $ and .. Only . gets special handling (current line). No leading - accepted yet, and no handling of $, though the match accepts it.

Command parse is now a while loop with a CMD label at line 90; commands use next to restart the REPL.
No changes to the supported commands from the 3.0 debugger. Patterns now use \b to delimit the command from its arguments, but the actual commands remain the same; processing still uses fallthrough as before.
Lines 201-219 handle l subroutine: find it in @sub, stack l start-end, fall through. Lines 231-235 handle l nnnn+mmm: compute the end for l start-end, fall through. Lines 236-248 handle l range.
The complex match is extended to match all of the arguments. $ and . accepted.
Range-match regex: /^l\b\s*(([\d\$.]+)([-,]([\d\$\.]+))?)?/

d338d6fe1df (perl 5.0, “Ilya’s new debugger”, lib/perl5db.pl)
l min+incr lists incr+1 lines starting at min; l min-max lists lines min through max; l line lists a single line; l subname lists the first window of lines from a subroutine; l lists the next window of lines.
Lines 377-398 handle l subname by looking up the subroutine in the debugger’s magical structures, stacking an l start-end and falling through. The w and - commands are implemented as special cases of l, stacking an l range command and falling through. The l start+n command is also implemented with a stacked l range fallthrough. Lines 415-432 actually implement the l range command. . is still specially processed as the current line number, but $ is still ignored.
The complex match is unchanged.
Range-match regex: same.

54d04a52ebe (perl 5.0 beta 3)
Same commands as the last change. The l command code does not change in any significant way. The handling of the end of the range changes slightly, but there’s no new special-case support.
First appearance of the leading - in the complex pattern. Nothing added to specifically support it.
Range-match regex: /^l\b\s*((-?[\d\$\.]+)([-,]([\d\$\.]+)?)?/

492652be590 (perl 5.7.2)
Major reorg of commands. Fallthrough code in the mainline CMD loop moves to cmd_l.
Line 1687: l - lists the whole file (first occurrence). Line 1688: l $name for the subref list command. Line 1696 implements l function_name. Line 1724 implements the bare l command. Line 1728 implements l line+nnnn. Line 1734 retains the complex match for l start-end and l start,end. . is still special-cased. No change to the regex.
Range-match regex: /^((-?[\d\$\.]+)([-,]([\d\$\.]+))?)?/

eda6e075b0c (perl 5.9.x)
Big documentation patch, no functional changes. Code moves a bit, but doesn’t change at all.
Range-match regex: same.

d12a485175b (perl 5.9.x)
Retracts the erroneous doc patch. No functional changes.
Range-match regex: same.

69893cffeb6 (perl 5.9.x)
Rearranges things a bit to minimize the delta between the 5.8 and 5.9 debuggers, but no functional changes.
Range-match regex: same.

e22ea7cc8db (perl 5.9.x)
Reverts the previous patch.
Range-match regex: same.

613bf352ae2 (perl 5.10.0)
“Extract _cmd_l_range”. Beginning of the modern debugger architecture. The regex is unchanged, but the code is reorganized: 1) the regexes have been updated to remove the leading command character; 2) cmd_l parses the different commands, and each is handed off to a specific implementation routine, with the complex regex routing to _cmd_l_range; 3) . is processed as the current line in _cmd_l_range. $ is still unprocessed.
Range-match regex: /^(?:(-?[\d\$\.]+)(?:[-,]([\d\$\.]+))?)?/

be43a6d374e
Minor changes to remove extra comments and streamline names. No other changes.
Range-match regex: same.

401da5225b7
More mostly-cosmetic changes. No significant logic changes.
Range-match regex: same.

b7a96fc9f53
More code rearrangement and streamlining of cmd_l. No logic changes.
Range-match regex: same.
So there are essentially four eras in the evolution of the debugger:
Era 1, the initial version in Perl 1 and Perl 2, which used an explicit simple range match.
Era 2, the appearance of the complex match in Perl 3.0.
Era 2.5, the transition to Perl 5.
Era 3, the extraction of commands in Perl 5.7.2 to separate subs.
Era 4, the modern (and testable) command architecture in Perl 5.10.0.
From looking over all these changes, it seems to me that what happened was that the pattern was intended to be a catchall for all the range-ish commands, but it wasn’t explicitly broken up so that it was clear exactly what those commands were supposed to be. My best guess is that Larry intended to implement l $var somehow in Perl 3; I believe assigning globs sort of worked like references at that point, and it would probably have been possible to expose the sub slot of the glob to the debugger…but this never happened.
Since the handling was all in one block with fallthroughs, creating the “match all the possible things” regex makes more sense in that context; I’m again guessing that the code was just going to ignore nonsense the way it does now, and that no one was expected to be silly enough to type floating point numbers or IP addresses at it.
The implicit conversion to integer when these were used as array indexes into the magic line array meant that it worked “well enough”. The increment of the variable containing the current line number cleaned it up enough that really dumb stuff like IP addresses and strings of periods were handled by implicit conversion to integers when looking for lines and didn’t cause enough of a problem to require that it be fixed.
I suppose you could be lazy and call it an “Easter egg”, but having all the weird things accepted feels sloppy.
What’s the “right thing”?
My best guess at what’s actually supposed to have been implemented sometime between Perl 3 and now is
l - : list the whole file. This does work, but probably should be called out as a specific case.
It’s likely that l -nnnn (list from here to line nnnn) was intended to work, since the match is there for it, but doesn’t.
It would seem logical that l nnnn- (list from line nnnn to EOF) should also be wanted and work, but also doesn’t.
Variations on using . as one or the other of the arguments: l nnnn,. (list from some previous line to the current one) and l .,nnnn (from here to some later line) probably ought to work. l .+5 should also work. Either these don’t work at all, or don’t work as one would expect.
Technically, l .,. should be legal and work, and do the same thing as l ., but admittedly it’s silly and I think we’d be forgiven if it didn’t. (It currently doesn’t.)
Bare $ is a mystery. You can enter it, and it matches, but it does nothing. There’s nothing in the code or comments that points to an intended use. Perhaps it was meant to be a magic “last line” substitute, or maybe it’s an artifact of “we should be able to match stuff in a scalar at some point” which ended up implemented as a completely separate case. Since bare $ has never worked, I think we’re OK dropping it.
Several other things match and the debugger tries to process them, but they’re nonsensical. l $$$$$$ is effectively l. l $fred$ matches but causes the debugger to throw an error; it should probably be rejected altogether.
There is no “I don’t like what you entered” clause. Either something weird happens, nothing happens, or there’s an error.
The problem is that complex regex right now matches far more than it ought to. We really need to establish what the acceptable forms are, and break that regex up so that it doesn’t match nonsense and cmd_l explicitly rejects nonsense. We’re only getting along because of Perl’s forgiving nature — but that means that peculiar things are accepted and (sort of) work.
Next steps
I’d suggest that we decide what the valid inputs are, and what they should do, and that the giant catchall regex should go away in favor of something much more like Larry’s original Perl 1 version.
I’m going to put a patch together in the short term that will simply not match most of the weird options, and add a “list range specification invalid” catchall else in _cmd_l_range or _cmd_l. We can decide if we want to add support for some of the more esoteric options (like l .,nnnn and so on) later; just making sure that we don’t assign nonsense to the start of the range that keeps propagating through further list commands seems like enough of a fix for now.
I flailed around for a while trying to get a good handle on how to properly start moving the old wiki functions from the script in hold/ to someplace proper in the Mojolicious tree. After a good bit of reading I finally stumbled across Ashutosh’s great intro article on assembling a model and controller. I used his registration and login controller articles to model the following:
A model for users, WebWebXNG::Model::Users. I used his basic model and changed it up a bit, adding a validated field and a field for the wiki username.
A model for settings, WebWebXNG::Model::Settings; this was a little tricky because I wanted only one possible settings record, and coding an SQLite UPSERT operation led to complicated code tracking which fields were updated and which weren’t. I got around this by using a fixed record ID and always either inserting or updating that record by simply checking if it’s there first.
A registration controller, WebWebXNG::Controller::RegistrationController, that displays the registration form on a GET and tries to use the form contents to register the user on POST. The registration code builds a wiki username in four steps, checking each time whether the name conforms to wiki syntax and trying the next step if not:
If the user supplied their own username, use it.
If they did not supply a username, try first_name . last_name without changing capitalization.
If that doesn’t work, try ucfirst(lc(first_name)) . ucfirst(lc(last_name)).
If that doesn’t work, it’s likely that the name contains non-ASCII characters, and we use Text::Unidecode to try to decode it into close-as-we-can ASCII.
A login controller, WebWebXNG::Controller::LoginController, that checks if the user’s been validated, and if so, then checks if the password is good. This most closely resembles the old code (though notably, that code did not do any user validation. Any bozo could sign up, but fortunately the NASA firewall was tight enough that the wiki wasn’t visible!)
A display controller is sketched in but not ready yet.
Templates for the login and register pages, and a start on the display page.
I added helpers (created with $self->helper) for sqlite (the database), users (a model instance that we pass the sqlite helper to), and settings (same as for users). I moved the routes back into the mainline code of WebWebXNG.pm because factoring them out seemed to just create more complexity for no real benefit.
WebWebXNG now also has some 12-factorish support in that we can specify a list of environment variables that are to be reflected into the config; the only one added so far is SQLITE_FILE, which sets the filename of the SQLite database. There’s also code in place to ensure that the config meets the minimum standards (again, SQLITE_FILE is the only hard requirement so far, but PAGEARCHIVE_DIR and LOCKDIR are probably next).
All this is now in place, but sadly under-tested. The “did we set up the database” test is in and passes, but little else. I did run the service and verify that I can display the login and register forms, but the new code added means that I can’t just throw in some data and test everything.
It was necessary to add a good bit of POD as well to get dzil test to pass, but it does, and all the code is committed and pushed up to GitHub.
Next sprint will be devoted only to getting all this under test.