Summary: all for naught, back to the original implementation, but with some guardrails
Where we last left off, I was trying to get the LDSwiftEventSource library to play nice with iOS, and it just would not. Every way I tried to convince iOS to please let this thing run failed. Even the “cancel and restart” version was a failure.
So I started looking at the option of a central server that would push the updates using notifications, and being completely honest, it seemed like an awful lot of work that I wasn’t all that interested in doing, and which would push the release date even further out.
On reflection, I seemed to remember that despite it being fragile as far as staying connected, the websocket implementation was rock-solid (when it was connected). I went back to that version (thank heavens for git!) and relaunched…yeah, it’s fine. It’s fine in the background. All right, how can I make this work?
Thinking about it for a while, I also remembered that there was a ping parameter in the connect message from Azuracast, which gave the maximum interval between messages (I’ve found in practice that this is what it means; the messages usually arrive every 15 seconds or so with a ping of 25). Since I’d already written the timer code once to force restarts of the SSE code, it seemed reasonable to leverage it like this:
When the server connects, we get the initial ping value when we process the first message successfully.
I double that value, and set a Timer that will call a method that just executes connect() again if it pops.
In the message processing, as soon as I get a new message, I therefore have evidence that I’m connected, so I kill the extant timer, process the message, and then set a new one.
This loops, so each time I get a message, I tell the timer I’m fine, and then set a new one; if I ever do lose connectivity, then the timer goes off and I try reconnecting.
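A sketch of the watchdog idea, with invented names (the real code lives inside the player class and calls its own connect()):

import Foundation

final class MetadataWatchdog {
    private var timer: Timer?
    private var interval: TimeInterval = 25   // replaced by the ping value from the connect message
    private let reconnect: () -> Void         // whatever re-runs connect()

    init(reconnect: @escaping () -> Void) {
        self.reconnect = reconnect
    }

    // First message processed: now we know the server's ping interval.
    func setPing(_ ping: TimeInterval) {
        interval = ping
        reset()
    }

    // Every message is evidence we're still connected: kill the extant timer, arm a new one.
    func reset() {
        timer?.invalidate()
        timer = Timer.scheduledTimer(withTimeInterval: interval * 2, repeats: false) { [weak self] _ in
            self?.reconnect()   // nothing heard within twice the ping interval; assume we dropped
        }
    }

    func cancel() {
        timer?.invalidate()
        timer = nil
    }
}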
This still needs a couple things:
The retries should be limited, and do an exponential backoff.
I’m of two minds as to whether I throw up an indicator that I can’t reconnect to the metadata server. On one hand, the metadata going out of sync is something I am going to all these lengths to avoid, so if I’m absolutely forced to do without it, I should probably mention that it’s no longer in sync. On the other hand, if we’ve completely lost connectivity, the music will stop, and that’s a pretty significant signal in itself. It strikes me as unlikely that I’ll be able to stream from the server but not contact Azuracast, so for now I’ll just say nothing.
I’m running it longer-term to see how well it performs. Last night I got 4 hours without a drop on the no-timer version; I think this means that drops will be relatively infrequent, and we’ll mostly just schedule Timers and cancel them.
Lockscreen follies
I have also been trying to get the lock screen filled out so it looks nicer. Before I started, I had a generic lockscreen that had the station logo, name and slug line with a play/pause button and two empty “–:–” timestamps. I now have an empty image (boo) but have managed to set the track name and artist name and the play time. So some progress, some regress.
The lockscreen setup is peculiar: you set as many of the pieces of data as you know in a dictionary, hand the whole thing to the system, and it displays whatever it finds there.
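Under the hood it's an MPNowPlayingInfoCenter dictionary; a minimal sketch of the idea (not the actual app code) looks like this:

import MediaPlayer
import UIKit

// Fill in whatever you know; keys you leave out just show up empty on the lock screen.
func updateLockScreen(track: String, artist: String, artwork: UIImage?) {
    var info: [String: Any] = [
        MPMediaItemPropertyTitle: track,
        MPMediaItemPropertyArtist: artist,
        MPNowPlayingInfoPropertyIsLiveStream: true   // a radio stream, not a file with a duration
    ]
    if let artwork = artwork {
        info[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: artwork.size) { _ in artwork }
    }
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}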
TL;DR: Flutter builds are as much fun as Java and Scala ones, and you spend more time screwing with the tools than you do getting anything done. I don’t think I’m going to switch, at least not now.
As I’ve mentioned before on the blog, I maintain an iOS application for RadioSpiral’s online radio station. The app has worked well and successfully; the original codebase was Swift-Radio-Pro, which works as an iOS app and a MacOS one as well. (I have been doing some infrastructure changes to support Azuracast, as previously documented on the blog.)
We do have several, very polite, Android users who inquire from time to time if I’ve ported the radio station app to Android yet, and I have had to keep saying no, as the work to duplicate the app on Android looked daunting, and nobody is paying me for this. So I’ve been putting it off, knowing that I would have to learn something that runs on Android sooner or later if I wanted to do it at all.
Randal Schwartz has been telling me for more than a year that I really should look at Dart and Flutter if I want to maintain something that works the same on both platforms, and I just didn’t have the spare time to learn it.
Come the end of May 2023, I found myself laid off, so I really had nothing but time. And I was going to need to update the app for iOS 16 anyway at that point (the last time I recompiled it, Xcode still accepted iOS 8 as a target!), and I figured now was as good a time as any to see if I could get it working multi-platform.
I started looking around for a sample Flutter radio app, and found RadioSai. From the README, it basically does what I want, but has a bunch of other features that I don’t need. I figured an app I could strip down was at least a reasonable place to start, so I checked it out of GitHub and started to work.
Gearing up
Setting up the infrastructure
Installing Dart and Flutter was pretty easy: good old Homebrew let me brew install flutter to get those in place, and per instructions, I ran flutter doctor to check my installation. It let me know that I was missing the Android toolchain (no surprise there, since I hadn’t installed anything there yet). I downloaded the current Android Studio (Flamingo in my case), opened the .dmg, and copied it into /Applications as directed.
Rerunning flutter doctor, it now told me that I didn’t have the most recent version of the command-line tools. I then fell into a bit of a rabbit hole. Some quick Googling told me that the command-line tools should live inside Android Studio. I ferreted around in the application bundle and they were just Not There. I went back to the Android Studio site and downloaded them, and spent a fair amount of time trying to get sdkmanager into my PATH correctly. When I finally did, it cheerfully informed me that I had no Java SDK. So off to the OpenJDK site, and downloaded JDK 20. (I tried a direct install via brew install, but strangely Java was still /usr/bin/java, and I decided rather than tracking down where the Homebrew Java went, I’d install my own where I could keep an eye on it.)
I downloaded the bin.tar.gz file and followed the installation instructions, adding the specified path to my PATH… and still didn’t have a working Java. Hm. Looking in the OpenJDK directory, the path was Contents, not jdk-18.0.1.jdk/Contents. I created the jdk-18.0.1 directory, moved Contents into it and had a working Java! Hurray! But even with dorking around further with the PATH, I still couldn’t get sdkmanager to update the command-line tools properly.
Not that way, this way
A little more Googling turned up this Stack Overflow post that told me to forget about installing the command-line tools myself, and to get Android Studio to do it. Following those instructions and checking all the right boxes, flutter doctor told me I had the command-line tools, but that I needed to accept some licenses. I ran the command to do that, and finally I had a working Flutter install!
Almost.
When I launched Android Studio and loaded my project, it failed with flutter.sdk not defined. This turned out to mean that I needed to add a flutter.sdk entry, pointing at the location that Homebrew had used to unpack Flutter (thank you, find), to local.properties. After that, Gradle twiddled its fingers a while, and declared that the app was ready. (It did want to upgrade the build, and I let it do that.)
Build, and…
The option 'android.enableR8' is deprecated.
It was removed in version 7.0 of the Android Gradle plugin.
Please remove it from 'gradle.properties'.
Okay, I remove it.
/Users/joemcmahon/Code/radiosai/.dart_tool/ does not exist.
More Googling, Stack Overflow says Run Tools > Flutter > Pub Get. Doesn’t exist. Okaaaaaay.
There’s a command line version:
flutter clean; flutter pub get
Deleted .dart_tool, then recreated it with package_config.json there. Right!
Back to Android Studio, still confused about the missing menu entry, and build again. Gradle runs, downloads a ton of POMs and
Couldn't resolve the package 'radiosai' in 'package:radiosai/audio_service/service_locator.dart'.
Looking one level up, in :app:compileFlutterBuildDebug, Invalid depfile: /Users/joemcmahon/Code/radiosai/.dart_tool/flutter_build/bff84666834b820d28a58a702f2c8321/kernel_snapshot.d.
Let’s delete those and see if that helps…yes, but still can’t resolve radiosai. Okay, time for a break.
Finally, a build!
Another Google: I wasn’t able to resolve the package because I needed to pub get again.
Module was compiled with an incompatible version of Kotlin. The binary version of its metadata is 1.8.0, expected version is 1.6.0.
Another Google. One of the build Gradle files is specifying Kotlin 1.6…it’s in android/build.gradle. Update that to 1.8.10, build…Kotlin plugin is being loaded, good. Couple warnings, still going, good.
BUILD SUCCESSFUL
Nice! Now, how do I test this thing? Well, there’s Device Manager over on the right, that looks promising. There’s a “Pixel 3a” entry and a “run” button. What’s the worst that could happen?
Starts up, I have a “running device” that’s a couple inches tall, on its home screen. Hm. Ah, float AND zoom. Cool. Now I realize I have no idea how to run an Android phone, and I don’t see the app.
https://developer.android.com/studio/run/emulator…nope. Beginning to remember why I didn’t like working in Scala… Gradle upgrade recommended, okay, and now
Namespace not specified. Please specify a namespace in the module's build.gradle.
If you are using Capacitor 4, do not upgrade to Gradle 8.
Yeah, I remember why I stopped liking Scala. git reset to put everything back…
Execution failed for task ':gallery_saver:compileDebugKotlin'.
> 'compileDebugJavaWithJavac' task (current target is 1.8) and 'compileDebugKotlin' task (current target is 17) jvm target compatibility should be set to the same Java version.
Consider using JVM toolchain: https://kotl.in/gradle/jvm/toolchain
Fix android/app/build.gradle so everyone thinks we’re using Java 17, which uses a different syntax, ugh.
Fix it again. Same for the Kotlin target too.
'compileDebugJavaWithJavac' task (current target is 1.8) and 'compileDebugKotlin' task (current target is 17) jvm target compatibility should be set to the same Java version.
This is apparently actually Gradle 8 still lying around after the (incorrectly) recommended upgrade. Removing ~/.gradle to nuke from orbit. Also killing android/.gradle.
[Aside: I am used to using git grep to find things, and it is just not finding them in this repo!]
Cannot read the array length because "" is null
WHAT.
Apparently this means that Gradle 8 is still lurking. Yep, the rm ~/.gradle/* didn’t remove everything because of permissions. Yougoddabefuckingkiddingme. Sudo’ed it, relaunched with the fixes I made above. App runs!
However, it stops working after a bit, with nothing to indicate why. Let’s stop it and restart. The stop button did not stop it; I had to quit Android Studio.
Well. Okay. This is not promising, but let’s see the benefit of using Flutter and check out if the iOS side works. Seems a lot more straightforward, though I’m not doing much in Xcode. cd ios, launch the simulator (important!), flutter run…and we get the Flutter demo project. Looks like the iOS version wasn’t brought over from the Android side. Why did you even do this.
Do we all remember that I wanted something that worked on both platforms? Gah.
So I’m putting Flutter aside, cleaning up the ton of disk space all this extra infrastructure took up, and will maybe come back to it another time.
But for right now, the amount of work involved is absolutely not worth it, and I’d have to write the damn thing from scratch anyway.
Maybe I’ll run this through one of the LLMs and see if it can get me a common codebase as a starting point, but I am not sanguine.
I’ve been working on getting the RadioSpiral infrastructure back up to snuff after our Azuracast streaming server upgrade. We really, really did need to do that — it just provides 90% of everything we need to run the station easily. Not having to regenerate the playlists every few weeks is definitely a win, and we’re now able to do stuff like “long-play Sunday”, where all of the tracks are long-players of a half-hour or more.
But there were some hitches, mostly in my stuff: the iOS app and the now-playing Discord bot. Because of reasons (read: I’m not sure why), the Icecast metadata isn’t available from the streaming server on Azuracast, especially when you’re using TLS. This breaks the display of artist and track on the iOS app, and partially breaks the icecast-monitor Node library I was using to do the now-playing bot in Discord.
(Side note: this was all my bright idea, and I should have tested the app and bot against Azuracast before I proposed cutting over in production, but I didn’t. I’ll run any new thing in Docker first and test it better next time.)
Azuracast to the rescue
Fortunately, Azuracast provides excellent now-playing APIs. There’s a straight-up GET endpoint that returns the data, and two event-driven ones (websockets and SSE). The GET option depends on you polling the server for updates, and I didn’t like that on principle; the server is quite powerful, but I don’t want multiple copies of the app hammering it frequently to get updates, and it was inherently not going to be close to a real-time update unless I really did hammer the server.
So that was off the table, leaving websockets and SSE, neither of which I had ever used. Woo, learning experience. I initially tried SSE in Node and didn’t have a lot of success with it, so I decided to go with websockets and see how that went.
Pretty well actually! I was able to get a websocket client running pretty easily, so I decided to try it that way. After some conferring with ChatGPT, I put together a library that would let me start up a websocket client and run happily, waiting for updates to come in and updating the UI as I went. (I’ll talk about the adventures of parsing Azuracast metadata JSON in another post.)
I chose to use a technique that I found in the FRadioPlayer source code, of declaring a public static variable containing an instance of the class; this let me get at the shared client from anywhere in the app.
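Something along these lines; the class name here is invented for illustration, not the actual RadioSpiral code:

import Foundation

final class MetadataClient {
    // One shared instance the whole app can see, following the FRadioPlayer pattern.
    public static let shared = MetadataClient()

    private init() {}   // nobody else gets to construct one

    func start(url: URL) { /* open the websocket and start listening */ }
    func stop() { /* tear the connection down */ }
}

// From anywhere in the app:
// MetadataClient.shared.start(url: stationWebSocketURL)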
(Kingfisher is fantastic! Coupled with Azuracast automatically extracting the artwork from tracks and providing a URL to it, showing the right covers was trivial. FRadioPlayer uses the Apple Music cover art API to get covers, and given the, shall we say, obscure artists we play, some of the cover guesses it made were pretty funny.)
Right. So we have metadata! Fantastic. Unfortunately, the websocket client uses URLSessionWebSocketTask to manage the connection, and that class has extremely poor error handling. It’s next to impossible to detect that you’ve lost the connection or re-establish it. So it would work for a while, and then a disconnect would happen, and the metadata would stop updating.
Back to the drawing board. Maybe SSE will work better in Swift? I’ve written one client, maybe I can leverage the code. And yes, I could. After some searching on GitHub and trying a couple of different things, I created a new library that could do Azuracast SSE. (Thank you to LaunchDarkly and LDSwiftEventSource for making the basic implementation dead easy.)
So close, but so far
Unfortunately, I now hit iOS architecture issues.
iOS really, really does not want you to run long-term background tasks, especially with the screen locked. When the screen was unlocked, the metadata updates went okay, but as soon as the screen locked, iOS started a 30-second “and what do you think you’re doing” timer, and killed the metadata monitor process.
I tried a number of gyrations to keep it running and schedule and reschedule a background thread, but if I let it run continuously, even with all the “please just let this run, I swear I know what I need here” code, iOS would axe it within a minute or so.
So I’ve fallen back to a solution not a lot better than polling the endpoint: when the audio starts, I start up the SSE client, shut it down after 3 seconds, wait 15 seconds, and then run it again. When audio stops, I shut it off and leave it off. This has so far kept iOS from nuking the app, but again, I’m polling. Yuck.
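The scheduling for that stopgap is nothing fancier than this kind of thing (startSSE and stopSSE stand in for whatever starts and stops the event-source client):

import Foundation

func startSSE() { /* spin up the SSE client */ }
func stopSSE() { /* shut it back down */ }

var pollTimer: Timer?

// Run the client for about 3 seconds out of every 18, starting when audio starts.
func startMetadataPolling() {
    pollTimer = Timer.scheduledTimer(withTimeInterval: 18, repeats: true) { _ in
        startSSE()
        DispatchQueue.main.asyncAfter(deadline: .now() + 3) { stopSSE() }
    }
    pollTimer?.fire()   // don't wait 18 seconds for the first update
}

// Called when audio stops.
func stopMetadataPolling() {
    pollTimer?.invalidate()
    pollTimer = nil
    stopSSE()
}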
However, we now do have metadata, and that’s better than none.
On the other hand…
On the Discord front, however, I was much more successful. I tried SSE in Node, and found the libraries wanting, so I switched over to Python and was able to use sseclient to do the heavy lifting for the SSE connection. It essentially takes an SSE URL, hooks up to the server, and then calls a callback whenever an event arrives. That was straightforward enough, and I boned up on my Python for traversing arbitrary structures — json.loads() did a nice job for me of turning the complicated JSON into nested Python data structures.
The only hard bit was persuading Python to turn the JSON struct I needed to send into a proper query parameter. Eventually this worked:
I pretty quickly got the events arriving and parsed, and I was able to dump out the metadata in a print. Fab! I must almost be done!
But no. I did have to learn yet another new thing: nonlocal in Python.
Once I’d gotten the event and parsed it and stashed the data in an object, I needed to be able to do something with it, and the easiest way to do that was set up another callback mechanism. That looked something like this:
The send_embed_with_image callback puts together a Discord embed (a fancy message) and posts it to our Discord via a webhook, so I don’t have to write any async code. The SSE client updates every fifteen seconds or so, but I don’t want to just spam the channel with the updates; I want to compare the new update to the last one, and not post if the track hasn’t changed.
I added a method to the metadata object to compare two objects:
def __eq__(self, other) -> bool:
    if not isinstance(other, NowPlayingResponse):
        return False
    if other is None:
        return False
    return (self.dj == other.dj and
            self.artist == other.artist and
            self.track == other.track and
            self.album == other.album)
…but I ran into a difficulty trying to store the old object: the async callback from my sseclient callback couldn’t see the variables in the main script. I knew I’d need a closure to put them in the function’s scope, and I was able to write that fairly easily after a little poking about, but even with them there, the inner function I was returning still couldn’t see the closed-over variables.
The fix was something I’d never heard of before in Python: nonlocal.
Normally, all I’d need to do would be have startup and last_response in the outer function’s argument list to have them visible to the inner function’s namespace, but I didn’t want them to just be visible: I wanted them to be mutable. Adding the nonlocal declaration of those variables does that. (If you want to learn more about nonlocal, this is a good tutorial.)
The Discord monitor main code now looks like this:
startup = True
last_response = None
# Build the SSE client
client = build_sse_client(server, shortcode)
# Create the sender function and start listening
send_embed_with_image = wrapper(startup, last_response)
run(client, send_embed_with_image)
Now send_embed_with_image will successfully be able to check for changes and only send a new embed when there is one.
One last notable thing here: Discord sets the timestamp of the embed relative to the timezone of the Discord user. If a timezone is supplied, then Discord does the necessary computations to figure out what the local time is for the supplied timestamp. If no zone info is there, then it assumes UTC, which can lead to funny-looking timestamps. This code finds the timezone where the monitor code is running, and sets the timestamp to that.
from tzlocal import get_localzone
local_tz = get_localzone()
start = response.start.replace(tzinfo=local_tz)
And now we get nice-looking now-playing info in Discord:
Building on this
Now that we have a working Python monitor, we can come up with a better solution for (close to) real-time updates for the iOS app.
Instead of running the monitor itself, the app will register with the Python monitor for silent push updates. This lets us offload the CPU (and battery) intensive operations to the Python code, and only do something when the notification is pushed to the app.
But that’s code for next week; this week I need to get the iOS stopgap app out, and get the Python server dockerized.
I’m in the process of (somewhat belatedly) upgrading the RadioSpiral app to work properly with Azuracast.
The Apple-recommended way of accessing the stream metadata just does not work with Azuracast’s Icecast server – the stream works fine, but the metadata never updates, so the app streams the music but never updates the UI with anything.
Because it could still stream (heh, StillStream) the music, we decided to go ahead and deploy. There were so many other things that Azuracast fixed for us that there was no question that decreasing the toil for everyone (especially our admin!) was going to make a huge difference.
Addressing the problem
Azuracast supplies an excellent now-playing API in four different flavors:
A file on the server that has now-playing data, accessible by simply getting the contents of the URL. This is only updated every 30 seconds or so, which isn’t really good enough resolution, and requires the endpoint be polled.
An API that returns the now-playing data as of the time of the request via a plain old GET to the endpoint. This is better but still requires polling to stay up to date, and will still not necessarily catch a track change unless the app polls aggressively, which doesn’t scale well.
Real-time push updates, either via SSE over https or websocket connection. The push updates are less load on the server, as we don’t have to go through session establishment every time; we can just use the open connection and write to it. Bonus, the pushes can happen at the time the events occur on the server, so updates are sent exactly when the track change occurs.
I decided that the websocket API was a little easier to implement. With a little help from ChatGPT to get me an initial chunk of code (and a fair amount of struggling to figure out the proper parameters to send for the connection request),
I used a super low-rent SwiftUI app to wrap AVAudioSession and start up a websocket client separately to manage the metadata; that basically worked and let me verify that the code to monitor the websocket was working.
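The bones of that client are small; here’s a sketch of the approach (the class name is invented, and the exact subscribe payload comes from the Azuracast docs for your station):

import Foundation

final class NowPlayingSocket {
    private var task: URLSessionWebSocketTask?

    // Open the socket, send the subscription message the server expects, then start listening.
    func connect(to url: URL, subscribeJSON: String) {
        let task = URLSession.shared.webSocketTask(with: url)
        self.task = task
        task.resume()
        task.send(.string(subscribeJSON)) { error in
            if let error = error { print("subscribe failed: \(error)") }
        }
        listen()
    }

    // Keep re-arming receive(); each message is a now-playing update to parse and display.
    private func listen() {
        task?.receive { [weak self] result in
            switch result {
            case .failure(let error):
                print("websocket error: \(error)")   // and this is about all the error handling you get
            case .success(.string(let text)):
                print("metadata update: \(text)")    // hand off to the JSON parsing here
                self?.listen()
            case .success:
                self?.listen()
            }
        }
    }
}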
I was able to copy that code inside of FRadioPlayer, the engine that RadioSpiral uses to do the streaming, but then I started running into complications.
Xcode, Xcode, whatcha gonna do?
I didn’t want to create an incompatible fork of FRadioPlayer, and I felt that the code was special-purpose enough that it wasn’t a reasonable PR to make. In addition, it was the holidays, and I didn’t want to force folks to have to work just because I was.
So I decided to go a step further and create a whole new version of the FRadioPlayer library, ACRadioPlayer, that would be specifically designed to be used only with Azuracast stations.
Initially, this went pretty well. The rename took a little extra effort to get all the FRadio references switched over to ACRadio ones, but it was fairly easy to get to a version of the library that worked just like FRadioPlayer, but renamed.
Then my troubles began
I decided that I was going to just include the code directly in ACRadioPlayer and then switch RadioSpiral to the new engine, so I did that, and then started trying to integrate the new code into ACRadioPlayer. Xcode started getting weird. I kept trying to go forward a bit at a time — add the library, start trying to include it into the app, get the fetch working…and every time, I’d get to a certain point (one sample app working, or two) and then I’d start getting strange errors: the class definition I had right there would no longer be found. The build process suddenly couldn’t write to the DerivedData directory anymore. I’d git reset back one commit, another, until I’d undone everything. Sometimes that didn’t work, and I had to throw away the checkout and start over. The capper was “Unexpected error”, with absolutely nothing to go on to fix it.
Backing off and trying a different path
So I backed all the way out, and started trying to build up step-by-step. I decided to try building the streaming part of the code as a separate library to be integrated with ACRadioPlayer, so I created a new project, ACWebSocketClient, and pulled the code in. I could easily get that to build, no surprise, it had been building, and I could get the tests of the JSON parse to pass, but when I tried to integrate it into ACRadioPlayer using Swift Package Manager, I was back to the weird errors again. I tried for most of a day to sort that out, and had zero success.
The next day, I decided that maybe I should follow Fatih’s example for FRadioPlayer and use Cocoapods to handle it. This went much better.
Because of the way Cocoapods is put together, just building the project skeleton actually gave me some place to put a test app, which was much better, and gave me a stepping stone along the way to building out the library. I added the code, and the process of building the demo showed me that I needed to do a few things: be more explicit about what was public and what was private, and be a little more thoughtful about the public class names.
A couple hours work got me a working demo app that could connect to the Azuracast test station and monitor the metadata in real time. I elected to just show the URL for the artwork as text because actually fetching the image wasn’t a key part of the API.
I did then hit the problem that the demo app was iOS only. I could run it on MacOS in emulation mode, but I didn’t have a fully-fledged Mac app to test with. (Nor did I have a tvOS one.) I tried a couple variations on adding a new target to build the Mac app, but mostly I ended up breaking the work I had working, so I eventually abandoned that.
I then started working step by step to include the library in ACRadioPlayer. FRadioPlayer came with iOS apps (UIKit and SwiftUI), a native Mac app, and a tvOS app. I carefully worked through getting the required versions of the OS to match in the ACWebSocketClient podspec, the ACRadioPlayer Podfile, and the ACRadioPlayer Xcode project. That was tedious but eventually successful.
Current status
I’ve now got the code properly pulled in, compatible with the apps, and visible to each of the apps. I’ll now need to pull in the actual code that uses it from the broken repo (the code was fine, it was just the support structures around it that weren’t) and get all the apps working. At that point I can get both of the libraries out on Cocoapods, and then start integrating with RadioSpiral.
In general, this has been similar to a lot of projects I’ve worked on in languages complex enough to need an IDE (Java, Scala, and now Swift): the infrastructure involved in just getting the code to build was far more trouble to work with and maintain, and consumed far more time, than writing the code itself.
Writing code in Perl or Python was perhaps less flashy, but it was a lot simpler: you wrote the code, and ran it, and it ran or it didn’t, and if it didn’t, you ran it under the debugger (or used the tests, or worst case, added print statements) and fixed it. You didn’t have to worry about whether the package management system was working, or if something in the mysterious infrastructure underlying the applications was misconfigured or broken. Either you’d installed it, and told your code to include it, or you hadn’t. Even Go was a bit of a problem in this way; you had to be very careful about how you got all the code in place.
Overall, though, I’m pretty happy with Cocoapods and the support it has built in. Because FRadioPlayer was built using Cocoapods as its package management, I’m hoping that the process of integrating it into RadioSpiral won’t be too tough.
A little context: I’m updating the RadioSpiral app to use the (very nice) Radio Station Pro API that gives me access to useful stuff like the station calendar, the current show, etc. Like any modern API, it returns its data in JSON, so to use this in Swift, I need to write the appropriate Codable structs for it — this essentially means that the datatypes are datatypes that Swift either can natively decode, or that they’re Codable structs.
I spent some time trying to get the structs right (the API delivers something that makes this rough, see below), and after a few tries that weren’t working, I said, “this is dumb, stupid rote work – obviously a job for ChatGPT.”
So I told it “I have some JSON, and I need the Codable Swift structs to parse it.” The first pass was pretty good; it gave me the structs it thought were right and some code to parse with – and it didn’t work. The structs looked like they matched: the fields were all there, and the types were right, but the parse just failed.
keyNotFound(CodingKeys(stringValue: "currentShow", intValue: nil), Swift.DecodingError.Context(codingPath: [CodingKeys(stringValue: "broadcast", intValue: nil)], debugDescription: "No value associated with key CodingKeys(stringValue: \"currentShow\", intValue: nil) (\"currentShow\").", underlyingError: nil))
Just so you can be on the same page, here’s how that JSON looks, at least the start of it:
I finally figured out that Swift, unlike Go, expects field names that exactly match the keys in the incoming JSON out of the box. So if the JSON looks like {broadcast: {current_show... then the struct modeling the contents of the broadcast field had better have a field named current_show, exactly matching the JSON. (Go’s JSON parser uses annotations to map the fields to struct names, so having a field named CurrentShow is fine, as long as the annotation says its value comes from current_show. That would look something like this:
type Broadcast struct {
    CurrentShow CurrentShow `json:"current_show"`
    ...
}

type CurrentShow struct {
    ...
}
There’s no ambiguity or translation needed, because the code explicitly tells you what field in the struct maps to what field in the JSON. (I suppose you could completely rename everything to arbitrary unrelated names in a Go JSON parse, but from a software engineering POV, that’s just asking for trouble.)
Fascinatingly, ChatGPT sort of knows what’s wrong, but it can’t use that information to fix the mistake! “I apologize for the oversight. It seems that the actual key in your JSON is “current_show” instead of “currentShow”. Let me provide you with the corrected Swift code:”. It then provides the exact same wrong code again!
struct Broadcast: Codable {
    let currentShow: BroadcastShow
    let nextShow: BroadcastShow
    let currentPlaylist: Bool
    let nowPlaying: NowPlaying
    let instance: Int
}
The right code is
struct Broadcast: Codable {
    let current_show: BroadcastShow   // exact match to the field name
    let next_show: BroadcastShow      // and so on...
    let current_playlist: Bool
    let now_playing: NowPlaying
    let instance: Int
}
When I went through manually and changed all the camel-case names to snake-case, it parsed just fine. (I suppose I could have just asked ChatGPT to make that correction, but after it gets something wrong that it “should” get right, I tend to make the changes myself to be sure I understood it better than the LLM.)
Yet another illustration that ChatGPT really does not know anything. It’s just spitting out the most likely-looking answer, and a lot of the time it’s close enough. This time it wasn’t.
On the rough stuff from the API: some fields are either boolean false (“nothing here”) or a struct. Because Swift is a strongly-typed language, this has to be dealt with via an enum and more complex parsing. At the moment, I can get away with failing the parse and using a default value if this happens, but longer-term, the parsing code should use enums for this. If there are multiple fields that do this it may end up being a bit of a combinatorial explosion to try to handle all the cases, but I’ll burn that bridge when I come to it.
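When I do cross it, the shape of the fix is an enum that tries both decodes. A sketch, with an invented type name (BroadcastShow stands in for whichever struct the field really holds):

enum ShowOrNothing: Codable {
    case absent                 // the API sent `false`
    case show(BroadcastShow)    // the API sent a real object

    init(from decoder: Decoder) throws {
        let container = try decoder.singleValueContainer()
        if let show = try? container.decode(BroadcastShow.self) {
            self = .show(show)
        } else if (try? container.decode(Bool.self)) == false {
            self = .absent
        } else {
            throw DecodingError.typeMismatch(
                ShowOrNothing.self,
                DecodingError.Context(codingPath: decoder.codingPath,
                                      debugDescription: "Expected false or a show object"))
        }
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.singleValueContainer()
        switch self {
        case .absent: try container.encode(false)
        case .show(let show): try container.encode(show)
        }
    }
}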
I came back to the RadioSpiral iOS app after some time away (we’re trying to dope out what’s going on with metadata from various broadcast setups appearing in the wrong positions on the “now playing” screen, and we need a new beta with the test streams enabled to try things), only to discover that Fastlane had gotten broken in a very unintuitive manner. Whenever I tried to use it, it took a crack at building things, then told me I needed to update the snapshotting Swift file.
Okay, so I do that, and the error persists. Tried a half-dozen suggestions from Stack Overflow. Error persists. I realized I was going to need to do some major surgery and eliminate all the variables if I was going to be able to make this work.
What finally fixed it was cleaning up multiple Ruby installs and getting down to just one known location, and then using Bundler to manage the Fastlane dependencies. The actual steps were:
removing rvm
removing rbenv
brew install ruby to get one known Ruby install
making the Homebrew Ruby my default (export PATH=/usr/local/Cellar/ruby/2.7.0/bin:$PATH)
rm -rf fastlane to clear out any assumptions
rm Gemfile* to clean up any assumptions by the current, broken Fastlane
bundle install fastlane (not gem install!) to get a clean one and limit the install to just my project
bundle exec fastlane init to get things set up again
After all that, fastlane was back to working, albeit only via bundle exec, which in hindsight is actually smarter.
The actual amount of time spent trying to fix it before giving up and removing every Ruby in existence was ~2 hours, so take my advice and make sure you are absolutely sure which Ruby you are running, and don’t install fastlane into your Ruby install; use bundler. Trying to fix it with things going who knows where…well, there’s always an applicable xkcd.
Allow me to be the Nth person to complain about App Store Connect’s lack of transparency. I’m currently working on an app for radiospiral.net’s net radio station, and I’m doing my proper diligence by getting it beta tested by internal testers before pushing it to the App Store. I’m using TestFlight to keep it as simple as possible (and because fastlane seems to work well with that setup).
I managed to get two testers in play, but I was trying to add a third today and I could not get the third person to show up as an internal tester because I kept missing a step. Here’s how it went, with my mental model in brackets:
Go to the users and groups page and add the new user. [okay, the new user’s available now].
Add them to the same groups as the other tester who I got working. [right, all set up the same…]
Added the app explicitly to the tester. […and they’ve got the app]
Mail went out to the new tester. [cool, the site thinks they should be a tester] [WRONG]
Tester installs Testflight and taps the link on their device. Nothing appreciable happens. [Did I set them up wrong?]
Delete the user, add them again. [I’ll set them up again and double-check…yes, they match]
They tap again. Still nothing. [what? but…]
Go over to the Testflight tab and look at the list of testers. Still not there. [I added them. why are they not there?] [also wrong]
Much Googling and poking about got me nothing at all. Why is the user I added as an internal tester not there? They should be in the list.
I went back to the page and this time I saw the little blue plus in a circle. I have to add them here too! Clicked the +, and the new user was there, waiting to be added to the internal testers.
Sigh.
So now I have blogged this so I can remember the process, and hopefully someone else who’s flailing around trying to figure out why internal testers aren’t showing up on the testers list will find this.
Pokemon GO players on iOS: the new release today (7/12/16, in the App Store now) reduces the information it wants from your Google account from “full access” to your email and “know who you are on Google”. If you were already signed up, do this:
Download the updated app, wait for it to reinstall
Kill the app; if you don’t know how to do this, just power your phone off and back on again
Launch Pokemon GO; it’ll fail to get access to your account. THIS IS OK.
Tap “try another account”
Log back in with your Google username and password.
This time it should ask for only “know your email” and “know who you are”.
At the time I write this, it looks like many people are doing this, as the Pokemon GO servers are rendering the server overload screen:
For the paranoid: It sounds like the iOS programmers just screwed up and released without reducing the account permissions request; this is not a nefarious scheme to steal all your email and Google+ naked selfies. From Niantic (via Kotaku):
We recently discovered that the Pokémon GO account creation process on iOS erroneously requests full access permission for the user’s Google account. However, Pokémon GO only accesses basic Google profile information (specifically, your User ID and email address) and no other Google account information is or has been accessed or collected. [Emphasis mine – JM] Once we became aware of this error, we began working on a client-side fix to request permission for only basic Google profile information, in line with the data that we actually access. Google has verified that no other information has been received or accessed by Pokémon GO or Niantic. Google will soon reduce Pokémon GO’s permission to only the basic profile data that Pokémon GO needs, and users do not need to take any actions themselves.
So, like every other iPhone user, I was *very* curious about iOS 7. As a developer, even more so. (Particularly, was I going to have to scramble to get my app working again under iOS 7?)
So I took my backup and installed it. First impression is that it feels ever so much lighter, psychologically, than iOS 6. The “flattening” of the interface greatly enhances the experience; Microsoft was right on the money with that one. My experiences with Windows 8 only make me wish they could have committed even harder to it and gotten rid of the desktop altogether – but I digress.
Some bugs, as expected, and I’ll be filing radars about them. In general, working pretty well, but there are a few showstoppers for me in this beta related to my day job. If it were not for those, I’d stick with it. Even with the crashes and hiccups, it’s that much of an improvement.
My app does continue to work, and I’ve now, I think, spotted the problem that’s causing it to drop and resume streaming, so that was a benefit.
Today I DFU my phone and return it to iOS 6 so I have a dependable device again, but it’s definitely a wrench. I’d much rather stay in the brighter, smoother, lighter world of iOS 7.