Summary: all for naught, back to the original implementation, but with some guardrails
Where we last left off, I was trying to get the LDSwiftEventSource library to play nice with iOS, and it just would not. Every way I tried to convince iOS to please let this thing run failed. Even the “cancel and restart” version was a failure.
So I started looking at the option of a central server that would push the updates using notifications, and being completely honest, it seemed like an awful lot of work that I wasn’t all that interested in doing, and which would push the release date even further out.
On reflection, I seemed to remember that although the websocket implementation was fragile about staying connected, it was rock-solid while the connection was up. I went back to that version (thank heavens for git!) and relaunched…yeah, it's fine. It's fine in the background. All right, how can I make this work?
Thinking about it for a while, I also remembered that there was a ping parameter in the connect message from Azuracast, which gives the maximum interval between messages (I've found in practice that this is what it means; the messages usually arrive every 15 seconds or so with a ping of 25). Since I'd already written the timer code once to force restarts of the SSE code, it seemed reasonable to leverage it like this:
- When the server connects, we get the initial ping value from the first successfully processed message.
- I double that value, and set a Timer that will call a method that just executes connect() again if it pops.
- In the message processing, as soon as I get a new message, I therefore have evidence that I’m connected, so I kill the extant timer, process the message, and then set a new one.
This loops, so each time I get a message, I tell the timer I’m fine, and then set a new one; if I ever do lose connectivity, then the timer goes off and I try reconnecting.
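Here's a rough sketch of that watchdog dance. The class and the `connect()`/`process(_:)` names are illustrative stand-ins, not the app's real code; only the timer handling is the point:

```swift
import Foundation

// Sketch of the watchdog: every message resets the timer; if the
// timer ever fires, we assume the connection died and reconnect.
final class MetadataWatchdog {
    private var timer: Timer?
    private var pingInterval: TimeInterval = 25   // updated from the connect message

    // Called whenever a message arrives: proof of life, so kill the
    // extant timer, process the message, and arm a fresh one.
    func didReceiveMessage(_ message: Data, ping: TimeInterval) {
        pingInterval = ping
        timer?.invalidate()
        process(message)
        armTimer()
    }

    // Double the ping interval gives comfortable slack before we
    // declare the connection dead.
    private func armTimer() {
        timer = Timer.scheduledTimer(withTimeInterval: pingInterval * 2,
                                     repeats: false) { [weak self] _ in
            self?.connect()
        }
    }

    private func process(_ message: Data) { /* parse the metadata … */ }
    private func connect() { /* reopen the websocket … */ }
}
```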
This still needs a couple things:
- The retries should be limited and should do an exponential backoff (sketched after this list).
- I'm of two minds about whether to throw up an indicator that I can't reconnect to the metadata server. On one hand, the metadata going out of sync is exactly what I'm going to all these lengths to avoid, so if I'm absolutely forced to do without it, I should probably say that it's no longer in sync. On the other hand, if we've completely lost connectivity, the music will stop, and that's a pretty significant signal in itself. It strikes me as unlikely that I'll be able to stream from the server but not contact Azuracast, so for now I'll just say nothing.
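For the first item, I'm picturing something like this capped exponential backoff. Again, the names and the specific cap and base-delay values are placeholders, not the app's actual code:

```swift
import Foundation

// Sketch of capped exponential backoff for the reconnect path.
final class Reconnector {
    private var retryCount = 0
    private let maxRetries = 6
    private let baseDelay: TimeInterval = 2
    var connect: () -> Void = {}

    // Call when the watchdog fires: retries after 2s, 4s, 8s, …
    // then gives up quietly once the cap is hit.
    func scheduleReconnect() {
        guard retryCount < maxRetries else { return }
        let delay = baseDelay * pow(2.0, Double(retryCount))
        retryCount += 1
        DispatchQueue.main.asyncAfter(deadline: .now() + delay) { [weak self] in
            self?.connect()
        }
    }

    // Call on any successfully received message, so a healthy
    // connection starts the next outage from a clean slate.
    func reset() { retryCount = 0 }
}
```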
I’m running it longer-term to see how well it performs. Last night I got 4 hours without a drop on the no-timer version; I think this means that drops will be relatively infrequent, and we’ll mostly just schedule Timers and cancel them.
Lockscreen follies
I have also been trying to get the lock screen filled out so it looks nicer. Before I started, I had a generic lockscreen that showed the station logo, name, and slug line, with a play/pause button and two empty “–:–” timestamps. I now have an empty image (boo) but have managed to set the track name, the artist name, and the play time. So some progress, some regress.
The lockscreen setup is peculiar: you set as many of the pieces of data as you know in a dictionary and hand the whole thing to the system.
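On iOS that dictionary is MPNowPlayingInfoCenter's nowPlayingInfo. Here's a sketch of its shape; the keys are the real MediaPlayer constants, but the helper function and its parameters are mine, for illustration:

```swift
import MediaPlayer
import UIKit

// Sketch of filling in the lock screen's Now Playing info.
// The keys are real MediaPlayer constants; the function and its
// parameters are illustrative, not the app's actual code.
func updateLockScreen(title: String, artist: String, artwork: UIImage?) {
    var info: [String: Any] = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyArtist: artist,
        // Mark it as a live stream so iOS doesn't expect a fixed duration.
        MPNowPlayingInfoPropertyIsLiveStream: true,
        MPNowPlayingInfoPropertyPlaybackRate: 1.0,
    ]
    if let artwork = artwork {
        // Artwork is wrapped in MPMediaItemArtwork rather than passed directly.
        info[MPMediaItemPropertyArtwork] =
            MPMediaItemArtwork(boundsSize: artwork.size) { _ in artwork }
    }
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}
```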