Category: iOS

  • 18.5 and .longFormAudio

    A few months back, I put together a start at a new version of RadioSpiral, coded from the ground up to make it more lightweight and easy to use and work on: no FRadioPlayer, no Swift-Radio-Pro, just starting with the very basics to put together a SwiftUI version of the app.

    I had one station working fine, with more to do to get it to a full replacement, but I put it aside to work on other stuff.

    Pulled it back out today to run it while trying to track down what was causing odd-looking metadata from one of our streamers, built it to run on the simulator…and got a runtime error.

    Huh. Didn’t have that before.

    A little poking around and a skull session with ChatGPT about the logs, and it became clear that iOS 18.5 had tightened up the requirements for the AVAudioSession.setCategory call.

    Before iOS 18.5, setCategory was pretty loose about what options values were allowable. I need .longFormAudio to prevent iOS from terminating my app if it goes into the background for a long time, but my old options setting ([.allowBluetooth, .allowAirPlay]) was no longer valid.

    I had the choice of keeping the options and switching to .default, or sticking with .longFormAudio and dropping the options. I decided to drop them; not having them doesn’t prevent the user from changing the routing in Control Center, and with .default, my app just honors that. Since that’s what I want, I deleted the options. If you’re doing something similar in your app, here are the rules:

    AVAudioSession .longFormAudio Compatibility

    Routing policy: .default (standard)
    Allowed category options: all of them (.allowBluetooth, .allowAirPlay, .mixWithOthers, .duckOthers, etc.)
    Behavior: full programmatic routing control; Bluetooth, AirPlay, etc. can be enabled via code. Prone to getting terminated in the background.

    Routing policy: .longFormAudio
    Allowed category options: only the defaults; no explicit CategoryOptions allowed (see the Apple Developer docs)
    Behavior: the system assumes “long-form” (radio/podcast) playback, and the user must route to devices manually (e.g., via Control Center). Audio is permitted to play in the background for long periods.

    You can still add .mixWithOthers, .duckOthers, and .interruptSpokenAudioAndMixWithOthers with .longFormAudio, and those are the ones that matter.

    For a radio app, you need the following:

    func setCategory(
        _ category: AVAudioSession.Category,
        mode: AVAudioSession.Mode,
        policy: AVAudioSession.RouteSharingPolicy,
        options: AVAudioSession.CategoryOptions = []
    ) throws
    • category: .playback implies that playing audio is central to the app. The silence switch is ignored, and audio continues in the background.
    • mode: .default is the best choice for the radio, as it works with every category, but I might try .spokenAudio, to briefly pause the audio when another app plays a short audio prompt. I think this is the mode that Overcast uses for its interruptions, where it backs up the audio just a little if another audio prompt interrupts it.
    • policy: .longFormAudio fits best here, routing to the user-selected destination for long-form audio.
    • options: for now I’m not specifying any options, as none of them seem appropriate. I might try .mixWithOthers (or make that switchable on and off); right now the “no options” version takes over all audio. Other apps like Maps use .duckOthers or .mixWithOthers to interrupt the stream; I might be able to use one of these to do the same trick of “back up a bit and resume” that Overcast does, but I think I’ll stick with the default for now.

    So my final call is now:

    try session.setCategory(.playback,
                            mode: .default,
                            policy: .longFormAudio,
                            options: [])
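    In context, a minimal sketch of the whole setup, assuming the shared AVAudioSession; the error handling is illustrative (both setCategory and setActive throw):

```swift
import AVFoundation

// Minimal sketch: configure and activate the session for long-form playback.
// Assumes this runs early in app startup, before playback begins.
func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback,
                                mode: .default,
                                policy: .longFormAudio,
                                options: [])
        try session.setActive(true)
    } catch {
        // On iOS 18.5+, an invalid category/policy/options combination
        // surfaces as an error here, so log it loudly.
        print("Audio session setup failed: \(error)")
    }
}
```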

    Posting this for reference for anyone who hits “why does my code break under 18.5?”.

  • Cursor, said the Cursor, Cursor, said the Cursor, Tension, apprehension, and dissension have begun!

    (If you don’t know the quote I’m parodying, hie thee to a bookstore and get a copy of Alfred Bester’s The Demolished Man forthwith.)

    This week’s adventure in LLM starred Cursor and a delayed refactoring I needed to do in the RadioSpiral iOS app. We are considering adding more streams in the future; our library right now ranges from the really avant-garde to hours-long drones, and those don’t really belong cheek by jowl on a single stream.

    Floating along on Thom Brennan and suddenly taking a screeching turn into Slaw will jar anybody, and we want people to enjoy the stream instead of being hauled out of a pleasant dream, fumbling to Shut That Terrible Noise Off.

    This means that we’ll need to do some work to set up and support these multiple streams. We’ve got a jump on that; Azuracast can easily support multiple stations, and the iOS app already has infrastructure built in to do it too…except that it only works for multiple streams with Shoutcast or Icecast, not Azuracast, and it definitely isn’t ready to handle switching the Azuracast websocket metadata monitor around for different stations.

    It seemed like a good idea at the time

    The original version of the app (Swift Radio Pro) handled it okay for Icecast and Shoutcast, because the stream monitoring was built right in to the radio engine (FRadioPlayer). (Great work by all concerned on each of these projects, by the way!)

    However, when we switched over to Azuracast, we started to have problems using the methods that were built in to FRadioPlayer to monitor the metadata. They just did not work.

    Unfortunately we’d already committed to the infrastructure change because our old broadcast software was a pain to maintain and required things like “regenerate the automatic playlist every couple months, or the station will go dead and just sit there until someone notices”. Which was easy to forget to do, and did not contribute to a professional impression.

    Nobody wanted to go back, least of all me since it was my bright idea, meaning that I needed to find a way fast to get the app showing metadata again.

    I did not manage fast. It was a couple months before I freed up enough time to really dive in and fix it (see multiple postings here earlier), and I invoked a quick fix that put the new metadata management in the NowPlayingViewController. It wasn’t bad, in that it (eventually) worked and we got our metadata, but when I started looking at switching stations, it started to get really messy.

    The code, in hindsight, belonged in the Station management code, and not in the NowPlaying code, which should have only been showing it and nothing else.

    I took a couple cracks at it myself, but it ended up being a complex process, and I decided this last Friday that I should see if I could get an LLM assistant to get me through this. I installed Cursor, fired it up, and started in.

    [Aside]

    I thought I should see if I could get the conversation back that I’d had with Cursor to make it easier to write this next section. I had ended the conversation at one point and started a new one, so I couldn’t easily scroll back and see what we’d both said. I thought, the UI can pull up conversations, so it’s gotta be there, right?

    Not right. I spent a couple hours with Cursor trying to reconstruct the conversation. Best we could do after a couple of hours of “no, be dumber, don’t try to filter, just get everything you can” was my side of the conversation and none of Cursor’s. Lesson learned, don’t close it out if you might want to reread it…

    [back to our regularly scheduled blog post]

    I started off by describing the change I wanted to make, and the bugs I was trying to fix: the metadata extraction, where it was, where I wanted it, and asked if we could move the code so it appeared to be using the FRadioPlayer metadata callbacks, to minimize the difference in the code. (I had forgotten that I’d switched to my own callback mechanism, which would bite me.)

    Cursor built a new class, StationsMetadataManager, but didn’t add it to the project. I did that by hand myself, and decided that the better part of valor here was to let Cursor make code changes and I would manage Xcode. We faffed around a bit getting types made public, and the code built — but no metadata. I remembered after a minute that we should be using my callbacks, and asked to move those too. (If Cursor had been a human assistant I think I would have gotten a stink-eye at that point.)

    We fiddled a bit more, and the callbacks started working again, and we were getting the updates as expected. Despite my asking Cursor to look carefully at the metadata fetch code, it didn’t realize that I had switchable trace messages and wanted to put in its own. I had it look specifically for the if debugLevel && lines, which were the traces I’d added. It remembered this time, but forgot later.

    Elapsed time: probably an hour (the recovered logs don’t have timestamps, so I can’t be certain.)

    I ran on-device for a while, and noticed that the Xcode resource graphs were showing that I was using more and more resources each time I switched stations. Cursor suggested that we might be leaking timers, and that it could add debug and watch for the timers being invalidated. I countered that I knew that LLMs sometimes had trouble counting, and perhaps we should keep a timer count instead to check for leaks.

    We did that, and it wasn’t timers; Cursor suggested maybe we were leaking metadata callbacks. A quick try with parallel tracking of resources, and yep, that was it. Cursor originally tried to be clever and have the NowPlayingViewController‘s deinit() clean up, but I pointed out that there was only one, and it never went out of scope, so it created a method to nuke all the active callbacks, which a station switch would call. The resource hogging dropped back to better, but not perfect, levels (a streaming audio app is going to use “unreasonable” amounts of resources).

    We were still using a lot of energy, and I had an idea. The metadata server sends back a “expect the next update in N seconds” value; I proposed adding 5 seconds to that and making that the timer pop value. This would mean that most of the time we’d never pop at all — the timer would get cancelled because the metadata arrived on time — and if it didn’t, we’d get the “woops, the metadata didn’t arrive” on a reasonable schedule. We made that change and the energy use dropped again a bit more to “high, but acceptable”.

    Cursor also proposed that maybe we were updating the screen too often, and the redraws might be using energy we didn’t need to. We put together a streamlined version of == for the Equatable implementation on the stream status object to see if that would help.
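    A streamlined == along these lines compares only the fields the UI actually renders, so redraws get skipped when nothing visible changed. The type and field names here are hypothetical, not the app's actual ones:

```swift
// Hypothetical stream-status type; the real app's fields may differ.
struct StreamStatus: Equatable {
    let title: String
    let artist: String
    let artworkURL: URL?
    let elapsed: TimeInterval   // changes constantly; deliberately ignored below

    // Compare only what the UI renders, so a tick of elapsed time
    // doesn't count as "new" status and trigger a redraw.
    static func == (lhs: StreamStatus, rhs: StreamStatus) -> Bool {
        lhs.title == rhs.title
            && lhs.artist == rhs.artist
            && lhs.artworkURL == rhs.artworkURL
    }
}
```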

    That seemed to be working until I switched to the test station that plays very short tracks (from Slaw’s Snakes and Ladders; recommended if you like your music strange and short). The metadata stopped updating for a bit, and while looking at the log I noted that we had two callbacks when I expected only one. Cursor reminded me that we also had to update the lockscreen (which I had forgotten, thanks for covering me on that!). The short tracks had ended and a longer track was playing so I tested the lockscreen. The album cover was a blank square and the metadata was the station name and description, so the metadata wasn’t getting processed.

    We looked at that for a bit, and verified that the callback wasn’t set up right. After a bit of back-and-forth, we got it, and I had full metadata on the lock screen. Looking at the log, I still saw a couple of places where the UI was getting updated with identical data, so I asked for a copy of “what we just set” so we’d be able to skip that update if it wasn’t needed.

    At this point the refactor was complete, and I had a little more than I had planned on!

    I think this was another two hours at this point.

    But, still more to do

    I figured I was on a roll. I had a constraint warning in this code since I forked it from the original, and I decided to go find it and fix it. I had tried myself before, but it turned out I was looking at the wrong screen!

    The stations screen was the one throwing the error. I tried Cursor’s recommended fix, and to quote myself “it looks like ass”. The change made the rows way too narrow and nothing lined up anymore. We reverted that, and I suggested we remove the hard constraint, and fix up the row height in the code. We tried a couple iterations of that, but it wasn’t really working well. I gave Cursor an ultimatum that if we didn’t fix it in ten minutes I was going to revert the branch; it was working okay before, it was just throwing the warning.

    I described what the visual result was and asked, is it just that the description field isn’t line-breaking? Cursor figured out that yes, that was it, widened the row a little more, and it looked good.

    We committed, I (eventually) squashed the branch, and rebased main.

    We did a little looking at branches, and eventually I decided that it was easier for me to do it by hand than talk Cursor through it. It turned out that I’d just let a lot of junk branches build up and they all needed to go. Cleaned up, the work branch rebased onto master and removed. A good day’s work.

    Another 45 minutes or so.

    Next day, a few more issues

    I played the app overnight (our station is good sleep music) and noticed that the lockscreen stopped updating after the current track finished; it never loaded the metadata for the next track.

    I suspected that the code I’d added to save battery when the app was backgrounded was the culprit, and we looked at it; turned out that I was unconditionally stopping the metadata client whenever we went into the background. We changed that to “stop it if we are not playing”, and verified it on the short-tracks station.
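    The fix amounts to making the teardown conditional on playback state; a sketch, with hypothetical stand-in types for the app's actual player and websocket client:

```swift
// Hypothetical stand-ins for the app's actual player and metadata client.
protocol Player { var isPlaying: Bool { get } }
protocol MetadataClient { func stop() }

func handleDidEnterBackground(player: Player, metadata: MetadataClient) {
    // Only tear down the metadata client when playback is stopped;
    // if audio is still playing, the lockscreen needs live updates.
    if !player.isPlaying {
        metadata.stop()
    }
}
```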

    I asked Cursor to update the build number to 61 (hindsight: do not ask it to do that!), which it did, and we committed again.

    Hubris strikes

    I figured, we’ve made so much work that didn’t before, that we should take on a hard one. I proposed that we change the UI so that the items on the now-playing screen shifted around when the device was rotated to landscape (or we split on the iPad): the big album cover would move left, and the controls would all move to the right.

    I am going to draw a veil over this. It did not go well, at all.

    Cursor got what I wanted, and tried to replicate the storyboard in SwiftUI, FAIL.

    Cursor couldn’t edit storyboards, and tried to walk me through it by describing it. FAIL.

    We tried to eliminate the storyboard, and do it all in code. FAIL.

    After the SwiftUI failure, Cursor started getting reluctant to make changes itself, and I kept having to ask it to make the changes it proposed.

    I eventually gave this up as a bad job, and reverted to the so-far working code.

    Two hours or so.

    Just one more thing…

    All went well the next day until a live show came on. All of a sudden, metadata updates failed completely. We were getting the fallback data that corresponded to “metadata server is down”.

    Initially, I did what any good sysadmin does: assume it was my error until proved otherwise. I checked the Azuracast logs. I verified that the metadata monitor that fills the #now-playing channel on our Discord was working. That was all okay; the non-app monitor could see the show metadata and was displaying it.

    I asked Cursor to help me trace the issue. It forgot about my debug and inserted its own, again, but it was sufficient for me to detect that the app thought the metadata server was offline, despite the fact it was capturing metadata. Cursor updated the code in the websocket parser to check that it got the is_online flag and that it was true, and the metadata started flowing again.

    I asked it to turn the built-in trace off, and revert any debugging it had added. It informed me there wasn’t any.

    After a moment, I thought, okay, it had gotten confused, and thought I meant “debugging specifically about whether the is_online flag is set”. So I said, please clean up any debug added since the last commit.

    The code is already set to use debugLevel = 0 by default, and there are no extra debug print statements present outside those controlled by the debugLevel flags. No additional debug output remains from recent changes. If you notice any specific debug output you want removed or further adjustments to verbosity, please point it out and I’ll take care of it!

    Uh, no.

    So I pointed out several things in the log that were debug statements. It removed the ones I told it to, but it stubbornly refused to believe there were any others.

    Fine. A git diff and some edits in Xcode, and I was ready to save this and make a new build, now 63. I set the build number in the Xcode UI, archived…and Xcode cheerfully informed me I’d created build 61.

    What?

    I looked at Xcode. 63.

    Okay, I asked Cursor to set the build number before. Odds are it’s done something that made sense to it.

    Can you please set the build to 63? I’m updating the project but it doesn’t seem to be taking.

    <key>CFBundleVersion</key>
    <string>63</string>
    <key>LSRequiresIPhoneOS</key>

    Sigh. It wiped out the symbolic version and hard-coded it. Go look for the version that Xcode saves…okay, it’s in .pbxproj.

    Actually, can we embed CURRENT_PROJECT_VERSION from .pbxproj? that’d be a lot simpler than this.

    <key>CFBundleVersion</key>
    <string>$(CURRENT_PROJECT_VERSION)</string>
    <key>LSRequiresIPhoneOS</key>

    Archive…and build 63. Son of a…

    I pushed build 63 to TestFlight, committed everything, and pushed to main. Done.

    Conclusions, observations, and speculations

    • This was not a terrible idea.
      • I got the refactor done. It might have gone faster if I’d remembered to tell Cursor I had a callback mechanism in place.
      • Cursor did do well at finding things to debounce to cut down UI updates.
      • When I could see a bug, it was pretty easy to get Cursor to fix it. We went about 50-50 on who saw it and suggested a fix; I needed to be in there finding the weird ones.
    • I was probably more productive.
      • I had tried a couple times to refactor this and wasn’t successful. Cursor got me through it, so that’s a big improvement right there.
      • I had tried to fix the lockscreen metadata, specifically setting the cover, and hadn’t been able to. We managed to get that working together.
    • Some things do not translate well for LLMs.
      • Anything visual is a challenge. If I can see it and have a good guess at the problem, we can fix it fairly fast. If I don’t we end up slogging away, adding more and more debug until one of us spots the issue.
      • The attempts to update the UI for rotation were a disaster.
      • “Eventually fixing the bad constraint” went badly and slowly. Because Cursor can’t see, it really has no idea what to do to fix a UI issue.
    • Peculiar things happen.
      • When we started having issues on the UI code, Cursor seemed to become reluctant to make changes, and had to be asked to make them. I don’t know if that was because I came across as frustrated by how it was going or what, but that was weird.
      • Cursor insisting that there was no debug, and only removing it when I specifically said, “this is debug”, and refusing to believe it had done anything else. I even told it that it should check git. This was also after the point where it started not making changes unless it was directed to.

    Overall, I’d give the refactoring an A, the fixing of the lockscreen a B, and the UI work a C for the elimination of the constraint warning, and an F for the rotation that we never finished, for an average grade of a C+.

    Will I use Cursor again for Swift? Yes, but. For a pure rote exercise, it’s extremely useful. For large-scale or creative work, it’s not good at all.

  • Azuracast metadata redux

    Summary: all for naught, back to the original implementation, but with some guardrails

    Where we last left off, I was trying to get the LDSwiftEventSource library to play nice with iOS, and it just would not. Every way I tried to convince iOS to please let this thing run failed. Even the “cancel and restart” version was a failure.

    So I started looking at the option of a central server that would push the updates using notifications, and being completely honest, it seemed like an awful lot of work that I wasn’t all that interested in doing, and which would push the release date even further out.

    On reflection, I seemed to remember that despite it being fragile as far as staying connected, the websocket implementation was rock-solid (when it was connected). I went back to that version (thank heavens for git!) and relaunched…yeah, it’s fine. It’s fine in the background. All right, how can I make this work?

    Thinking about it for a while, I also remembered that there was a ping parameter in the connect message from Azuracast, which gave the maximum interval between messages (I’ve found in practice that this is what it means; the messages usually arrive every 15 seconds or so with a ping of 25). Since I’d already written the timer code once to force reboots of the SSE code, it seemed reasonable to leverage it like this:

    • When the server connects, we get the initial ping value when we process the first message successfully.
    • I double that value, and set a Timer that will call a method that just executes connect() again if it pops.
    • In the message processing, as soon as I get a new message, I therefore have evidence that I’m connected, so I kill the extant timer, process the message, and then set a new one.

    This loops, so each time I get a message, I tell the timer I’m fine, and then set a new one; if I ever do lose connectivity, then the timer goes off and I try reconnecting.
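    The watchdog loop above could be sketched like this; the class and method names are illustrative, not the app's actual code:

```swift
import Foundation

// Illustrative watchdog: if no websocket message arrives within twice
// the server's advertised ping interval, assume the connection is dead
// and ask the owner to reconnect.
final class MetadataWatchdog {
    private var timer: Timer?
    private var pingInterval: TimeInterval = 25   // replaced by the server's value
    var reconnect: (() -> Void)?

    // Call on every message; the first message carries the ping value.
    func messageReceived(ping: TimeInterval?) {
        if let ping { pingInterval = ping }
        timer?.invalidate()                       // we're alive; reset the deadline
        timer = Timer.scheduledTimer(withTimeInterval: pingInterval * 2,
                                     repeats: false) { [weak self] _ in
            self?.reconnect?()                    // silence past the deadline: reconnect
        }
    }
}
```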

    This still needs a couple things:

    • The retries should be limited, and do an exponential backoff.
    • I’m of two minds as to whether I throw up an indicator that I can’t reconnect to the metadata server. On one hand, the metadata going out of sync is something I am going to all these lengths to avoid, so if I’m absolutely forced to do without it, I should probably mention that it’s no longer in sync. On the other hand, if we’ve completely lost connectivity, the music will stop, and that’s a pretty significant signal in itself. It strikes me as unlikely that I’ll be able to stream from the server but not contact Azuracast, so for now I’ll just say nothing. Right now, I fall back to showing the channel metadata, so we still see we’re on RadioSpiral, but not what’s actually playing — just like when I didn’t have a working metadata implementation at all.
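    The limited-retries-with-backoff item could be reduced to a pure helper; the constants here are illustrative, not anything the app currently uses:

```swift
import Foundation

// Illustrative exponential backoff: 2s, 4s, 8s, ... capped at 60s,
// giving up (returning nil) after maxRetries attempts.
func backoffDelay(attempt: Int,
                  base: TimeInterval = 2,
                  cap: TimeInterval = 60,
                  maxRetries: Int = 8) -> TimeInterval? {
    guard attempt < maxRetries else { return nil }   // nil means "give up"
    return min(base * pow(2, Double(attempt)), cap)
}
```

    With these constants, attempts 0, 1, and 2 would wait 2, 4, and 8 seconds, with later attempts capped at 60 seconds.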

    I’m running it longer-term to see how well it performs. Last night I got 4 hours without a drop on the no-timer version; I think this means that drops will be relatively infrequent, and we’ll mostly just schedule Timers and cancel them.

    Lockscreen follies

    I have also been trying to get the lock screen filled out so it looks nicer. Before I started, I had a generic lockscreen that had the station logo, name and slug line with a play/pause button and two empty “–:–” timestamps. I now have an empty image (boo) but have managed to set the track name and artist name and the play time. So some progress, some regress.

    The lockscreen setup is peculiar: you set as many of the pieces of data that you know in a struct supplied by iOS, and then call a method to commit it.

    I spent a lot of time trying to get the cover to appear and couldn’t, so I left it as the channel/station logo. [Update August 2025: I’ve managed to get Cursor to work through the mess and show all the metadata! Yay.]
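    For reference, the mechanism involved is MPNowPlayingInfoCenter: you build up a dictionary of MPMediaItemProperty values and assign it, which is the "commit". A minimal sketch (the function and its parameters are illustrative, not the app's code):

```swift
import MediaPlayer
import UIKit

// Minimal sketch: push track metadata to the lockscreen.
func updateLockscreen(title: String, artist: String, cover: UIImage?) {
    var info: [String: Any] = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyArtist: artist,
    ]
    if let cover {
        // The artwork wants a closure that returns an image for a requested size.
        info[MPMediaItemPropertyArtwork] =
            MPMediaItemArtwork(boundsSize: cover.size) { _ in cover }
    }
    // Assigning the dictionary is what commits it to the lockscreen.
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}
```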

  • Flutter experiences

    TL;DR: Flutter builds are as much fun as Java and Scala ones, and you spend more time screwing with the tools than you do getting anything done. I don’t think I’m going to switch, at least not now.

    As I’ve mentioned before on the blog, I maintain an iOS application for RadioSpiral’s online radio station. The app has worked well and successfully; the original codebase was Swift-Radio-Pro, which works as an iOS app and a macOS one as well (I have been doing some infrastructure changes to support Azuracast, as previously documented on the blog.)

    We do have several, very polite, Android users who inquire from time to time if I’ve ported the radio station app to Android yet, and I have had to keep saying no, as the work to duplicate the app on Android looked daunting, and nobody is paying me for this. So I’ve been putting it off, knowing that I would have to learn something that runs on Android sooner or later if I wanted to do it at all.

    Randal Schwartz has been telling me for more than a year that I really should look at Dart and Flutter if I want to maintain something that works the same on both platforms, and I just didn’t have the spare time to learn it.

    Come the end of May 2023, and I found myself laid off, so I really had nothing but time. And I was going to need to update the app for iOS 16 anyway at that point (the last time I recompiled it, Xcode still accepted iOS 8 as a target!), and I figured now was as good a time as any to see if I could get it working multi-platform.

    I started looking around for a sample Flutter radio app, and found RadioSai. From the README, it basically does what I want, but has a bunch of other features that I don’t. I figured an app I could strip down was at least a reasonable place to start, so I checked it out of GitHub and started to work.

    Gearing up

    Setting up the infrastructure

    Installing Dart and Flutter was pretty easy: good old Homebrew let me brew install flutter to get those in place, and per instructions, I ran flutter doctor to check my installation. It let me know that I was missing the Android toolchain (no surprise there, since I hadn’t installed anything there yet). I downloaded the current Android Studio (Flamingo in my case), opened the .dmg, and copied it into /Applications as directed.

    Rerunning flutter doctor, it now told me that I didn’t have the most recent version of the command-line tools. I then fell into a bit of a rabbit hole. Some quick Googling told me that the command-line tools should live inside Android Studio. I ferreted around in the application bundle and they were just Not There. I went back to the Android Studio site and downloaded them, and spent a fair amount of time trying to get sdkmanager into my PATH correctly. When I finally did, it cheerfully informed me that I had no Java SDK. So off to the OpenJDK site, and download JDK 20. (I tried a direct install via brew install, but strangely Java was still /usr/bin/java, and I decided that rather than tracking down where the Homebrew Java went, I’d install my own where I could keep an eye on it.)

    I downloaded the bin.tar.gz file and followed the installation instructions, adding the specified path to my PATH… and still didn’t have a working Java. Hm. Looking in the OpenJDK directory, the path was Contents, not jdk-18.0.1.jdk/Contents. I created the jdk-18.0.1 directory, moved Contents into it and had a working Java! Hurray! But even with dorking around further with the PATH, I still couldn’t get sdkmanager to update the command-line tools properly.

    Not that way, this way

    A little more Googling turned up a Stack Overflow post that told me to forget about installing the command-line tools myself, and to get Android Studio to do it. Following those instructions and checking all the right boxes, flutter doctor told me I had the command-line tools, but that I needed to accept some licenses. I ran the command to do that, and finally I had a working Flutter install!


    Almost.

    When I launched Android Studio and loaded my project, it failed with flutter.sdk not defined. This turned out to mean that I needed to add

    flutter.sdk=/opt/homebrew/Caskroom/flutter/3.10.5/flutter

    (the location that Homebrew had used to unpack Flutter — thank you, find) to local.properties. After that, Gradle twiddled its fingers a while, and declared that the app was ready. (It did want to upgrade the build, and I let it do that.)

    Build, and…

    The option 'android.enableR8' is deprecated.
    It was removed in version 7.0 of the
    Android Gradle plugin.
    Please remove it from 'gradle.properties'.

    Okay, I remove it.

    /Users/joemcmahon/Code/radiosai/.dart_tool/ does not exist.

    More Googling, Stack Overflow says Run Tools > Flutter > Pub Get. Doesn’t exist. Okaaaaaay.

    There’s a command line version:

    flutter clean; flutter pub get

    Deleted .dart_tool, then recreated it with package_config.json there. Right!

    Back to Android Studio, still confused about the missing menu entry, and build again. Gradle runs, downloads a ton of POMs and

    Couldn't resolve the package 'radiosai' in 'package:radiosai/audio_service/service_locator.dart'.

    Looking one level up, in :app:compileFlutterBuildDebug: Invalid depfile: /Users/joemcmahon/Code/radiosai/.dart_tool/flutter_build/bff84666834b820d28a58a702f2c8321/kernel_snapshot.d.

    Let’s delete those and see if that helps…yes, but still can’t resolve radiosai. Okay, time for a break.

    Finally, a build!

    Another Google: I wasn’t able to resolve the package because I needed to pub get again.

    Module was compiled with an incompatible version of Kotlin.
    The binary version of its metadata is 1.8.0, expected version is 1.6.0.

    Another Google. One of the build Gradle files is specifying Kotlin 1.6…it’s in android/build.gradle. Update that to 1.8.10, build…Kotlin plugin is being loaded, good. Couple of warnings, still going, good.

    BUILD SUCCESSFUL

    Nice! Now, how do I test this thing? Well, there’s Device Manager over on the right, that looks promising. There’s a “Pixel 3a” entry and a “run” button. What’s the worst that could happen?

    Starts up, I have a “running device” that’s a couple inches tall, on its home screen. Hm. Ah, float AND zoom. Cool. Now I realize I have no idea how to run an Android phone, and I don’t see the app.

    https://developer.android.com/studio/run/emulator…nope. Beginning to remember why I didn’t like working in Scala… Gradle upgrade recommended, okay, and now

    Namespace not specified. Please specify a namespace in the module's build.gradle. 

    Specified, still broken…googling…This is a known issue –
    https://github.com/ionic-team/capacitor/issues/6504

    If you are using Capacitor 4, do not upgrade to Gradle 8.


    Yeah, I remember why I stopped liking Scala. git reset to put everything back…

    Execution failed for task ':gallery_saver:compileDebugKotlin'.
    > 'compileDebugJavaWithJavac' task (current target is 1.8) and 'compileDebugKotlin' task
    (current target is 17) jvm target compatibility should be set to the same Java version.
    Consider using JVM toolchain: https://kotl.in/gradle/jvm/toolchain

    Fix android/app/build.gradle so everyone thinks we’re using Java 17, which uses a different syntax, ugh.

    Fix it again. Same for the Kotlin target too.

    'compileDebugJavaWithJavac' task (current target is 1.8) and 'compileDebugKotlin' task (current target is 17) jvm target compatibility should be set to the same Java version.

    This is apparently actually Gradle 8 still lying around after the (incorrectly) recommended upgrade. Removing ~/.gradle to nuke it from orbit. Also killing android/.gradle.


    [Aside: I am used to using git grep to find things, and it is just not finding them in this repo!]

    Cannot read the array length because "" is null

    WHAT.

    Apparently this means that Gradle 8 is still lurking. Yep, the rm ~/.gradle/* didn’t remove everything because of permissions. Yougoddabefuckingkiddingme. Sudo’ed it, relaunched with the fixes I made above. App runs!


    However, it stops working after a bit, with no indication of why. Let’s stop it and restart. The stop button did not stop it; I had to quit Android Studio.

    Well. Okay. This is not promising, but let’s see the benefit of using Flutter; we’ll check out whether the iOS side works. Seems a lot more straightforward, though I’m not doing much in Xcode. cd ios, launch the simulator (important!), flutter run…and we get the Flutter demo project. Looks like the iOS version wasn’t brought over from the Android side. Why did you even do this.

    Do we all remember that I wanted something that worked on both platforms? I do. We don’t. Gah.

    So I’m putting Flutter aside, cleaning up the ton of disk space all this extra infrastructure took up, and will maybe come back to it another time.

    But for right now, the amount of work involved is absolutely not worth it because I’d have to write the damn thing from scratch anyway.

    Maybe I’ll run this through one of the LLMs and see if it can get me a common codebase as a starting point, but I am not sanguine.

    [Note from the future: my fellow DJ from RadioSpiral, DJ Cosmos, has written a Go/Flutter implementation that works great on Linux and Android, so I don’t have to do this anymore!]

  • Azuracast high-frequency updates, SSE, and iOS background processes

    A big set of learning since the last update.

    I’ve been working on getting the RadioSpiral infrastructure back up to snuff after our Azuracast streaming server upgrade. We really, really did need to do that — it just provides 90% of everything we need to run the station easily right out of the box.

    Not having to regenerate the playlists every few weeks is definitely a win, and we’re now able to easily do stuff like “long-play Sunday”, where all of the tracks are long-players of a half-hour or more.

    But there were some hitches, mostly in my stuff: the iOS app and the now-playing Discord bot. Because of reasons (read: I’m not sure why), the Icecast metadata isn’t available from the streaming server on Azuracast, especially when you’re using TLS. This breaks the display of artist and track on the iOS app, and partially breaks the icecast-monitor Node library I was using to do the now-playing bot in Discord.

    (Side note: this was all my bright idea, and I should have tested the app and bot against Azuracast before I proposed cutting over in production, but I didn’t. I’ll run any new thing in Docker first and test it better next time.)

    Azuracast to the rescue

    Fortunately, Azuracast provides excellent now-playing APIs: a straight-up GET endpoint that returns the data, two event-driven ones (websockets and SSE), and even a “look, just read this file, it’s there” version.

    The GET option depends on you polling the server for updates, and I didn’t like that on principle; the server is quite powerful, but I don’t want multiple copies of the app hammering it frequently to get updates, and it was inherently not going to be close to a real-time update unless I really did hammer the server.

    So that was off the table, leaving websockets and SSE, neither of which I had ever used. Woo, learning experience. I initially tried SSE in Node and didn’t have a lot of success with it, so I decided to go with websockets and see how that went.

    Pretty well, actually! I was able to get a websocket client running pretty easily, so I decided to go that way. After some conferring with ChatGPT, I put together a library that would let me start up a websocket client and have it run happily, waiting for updates to come in and updating the UI as they arrived. (I’ll talk about the adventures of parsing Azuracast metadata JSON in another post.)

    I chose to use a technique that I found in the FRadioPlayer source code, of declaring a public static variable containing an instance of the class; this let me do

    import Kingfisher
    import ACWebSocketClient

    // Shared singleton instance, FRadioPlayer-style
    client = ACWebSocketClient.shared
    ...
    tracklabel.text = client.status.track
    artistlabel.text = client.status.artist
    coverImageView.kf.setImage(with: client.status.artURL)

    (Kingfisher is fantastic! Coupled with Azuracast automatically extracting the artwork from tracks and providing a URL to it, showing the right covers was trivial. FRadioPlayer uses the Apple Music cover art API to get covers, and given the, shall we say, obscure artists we play, some of the cover guesses it made were pretty funny. And sometimes really inappropriate.)

    Right. So we have metadata! Fantastic. Unfortunately, the websocket client uses URLSessionWebSocketTask to manage the connection, and that class has extremely poor error handling: it’s next to impossible to detect that you’ve lost the connection, or to re-establish it. So it would work for a while, then a disconnect would happen, and the metadata would stop updating.

    Back to the drawing board. Maybe SSE will work better in Swift? I’ve written one client, maybe I can leverage the code. And yes, I could. After some searching on GitHub and trying a couple of different things, I created a new library that could do Azuracast SSE. (Thank you to LaunchDarkly and LDSwiftEventSource for making the basic implementation dead easy.)

    So close, but so far

    Unfortunately, I now hit iOS architecture issues.

    iOS really, really does not want you to run long-term background tasks, especially with the screen locked. When the screen was unlocked, the metadata updates went okay, but as soon as the screen locked, iOS started a 30-second “and what do you think you’re doing” timer, and killed the metadata monitor process.

    I tried a number of gyrations to keep it running and schedule and reschedule a background thread, but if I let it run continuously, even with all the “please just let this run, I swear I know what I need here” code, iOS would axe it within a minute or so.

    So I’ve fallen back to a solution not a lot better than polling the endpoint: when the audio starts, I start up the SSE client, shut it down after 3 seconds, wait 15 seconds, and then run it again. When audio stops, I shut the client off and leave it off. This has so far kept iOS from nuking the app, but again, I’m polling. Yuck.

    However, we now do have metadata, and that’s better than none.

    [From the future: this just was awful. I abandoned it and went back to the websockets. New update coming soon about some optimizations to save battery.]

    On the other hand…

    On the Discord front, however, I was much more successful. I tried SSE in Node, and found the libraries wanting, so I switched over to Python and was able to use sseclient to do the heavy lifting for the SSE connection. It essentially takes an SSE URL, hooks up to the server, and then calls a callback whenever an event arrives. That was straightforward enough, and I boned up on my Python for traversing arbitrary structures — json.loads() did a nice job for me of turning the complicated JSON into nested Python data structures.
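    The dispatch side of that is simple enough to sketch. Here’s a minimal version of a run loop like the one used later in this post — assuming each SSE event’s data field holds the JSON payload (sseclient’s SSEClient yields event objects with a .data attribute when iterated); treat this as a sketch, not the exact monitor code:

```python
import json

def run(client, callback):
    # client is anything iterable that yields SSE events carrying a
    # .data attribute (sseclient's SSEClient behaves this way when
    # iterated). Decode each event's JSON payload and hand it to the
    # callback; empty data fields are keep-alives and get skipped.
    for event in client:
        if not event.data:
            continue
        callback(json.loads(event.data))
```

With that shape, swapping a fake iterable in for the live client makes the loop trivially testable without a network connection.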

    The only hard bit was persuading Python to turn the JSON struct I needed to send into a proper query parameter. Eventually this worked:

    import json
    import urllib.parse

    # Subscribe to this station's now-playing channel; "recover": True
    # asks the server to replay the most recent event on (re)connect.
    subs = {
            "subs": {
                f"station:{shortcode}": {"recover": True}
            }
         }

    # json.dumps already serializes True/False as lowercase true/false,
    # so no string replacement is needed afterward.
    json_subs = json.dumps(subs, separators=(',', ':'))
    encoded_query = urllib.parse.quote(json_subs)

    I pretty quickly got the events arriving and parsed, and I was able to dump out the metadata in a print. Fab! I must almost be done!

    But no. I did have to learn yet another new thing: nonlocal in Python.

    Once I’d gotten the event and parsed it and stashed the data in an object, I needed to be able to do something with it, and the easiest way to do that was set up another callback mechanism. That looked something like this:

    client = build_sse_client(server, shortcode)
    run(client, send_embed_with_image)

    The send_embed_with_image callback puts together a Discord embed (a fancy message) and posts it to our Discord via a webhook, so I don’t have to write any async code. The SSE client updates every fifteen seconds or so, but I don’t want to just spam the channel with the updates; I want to compare the new update to the last one, and not post if the track hasn’t changed.

    I added a method to the metadata object to compare two objects:

    def __eq__(self, other) -> bool:
        # isinstance() is False for None too, so this one check covers
        # both "not a NowPlayingResponse" and "no previous response yet"
        if not isinstance(other, NowPlayingResponse):
            return False
        return (self.dj == other.dj and
                self.artist == other.artist and
                self.track == other.track and
                self.album == other.album)

    …but I ran into a difficulty trying to store the old object: the callback I handed to sseclient couldn’t see the variables in the main script. I knew I’d need a closure to put them in the function’s scope, and I was able to write that fairly easily after a little poking about. But even with them there, assigning to them from the inner function didn’t work: Python treats any name you assign to as a new local unless you say otherwise.

    The fix was something I’d never heard of before in Python: nonlocal.

    def wrapper(startup, last_response):
        def sender(response: NowPlayingResponse):
            nonlocal startup, last_response
            if response == last_response:
                return
    
            # Prepare the embed data
            local_tz = get_localzone()
            start = response.start.replace(tzinfo=local_tz)
            embed_data = {
                "title": f"{response.track}",
                "description": f"from _{response.album}_ by {response.artist} ({response.duration})",
                "timestamp": start,
                "thumbnail_url": response.artURL,
            }
    
            # Send to webhook
            send_webhook(embed_data)
    
            startup = False
            last_response = response
    
        return sender

    Normally, all I’d need to do would be to have startup and last_response in the outer function’s argument list to make them visible in the inner function’s namespace, but I didn’t want them to just be visible: I wanted them to be mutable. Adding the nonlocal declaration of those variables does that. (If you want to learn more about nonlocal, this is a good tutorial.)
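    A stripped-down illustration of the difference nonlocal makes — nothing here is from the monitor; it’s just the smallest closure that needs it:

```python
def make_counter():
    count = 0

    def bump():
        # Without this declaration, "count += 1" would create a new
        # local variable and raise UnboundLocalError; nonlocal rebinds
        # the enclosing function's variable instead.
        nonlocal count
        count += 1
        return count

    return bump

counter = make_counter()
counter()  # -> 1
counter()  # -> 2
```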

    The Discord monitor main code now looks like this:

    startup = True
    last_response = None
    
    # Build the SSE client
    client = build_sse_client(server, shortcode)
    
    # Create the sender function and start listening
    send_embed_with_image = wrapper(startup, last_response)
    run(client, send_embed_with_image)

    Now send_embed_with_image will successfully be able to check for changes and only send a new embed when there is one.

    One last notable thing here: Discord sets the timestamp of the embed relative to the timezone of the Discord user. If a timezone is supplied, Discord does the necessary computation to figure out what the local time is for the supplied timestamp. If no zone info is there, it assumes UTC, which can lead to funny-looking timestamps. This code finds the timezone where the monitor code is running and sets the timestamp to that.

    from tzlocal import get_localzone
    
    local_tz = get_localzone()
    start = response.start.replace(tzinfo=local_tz)

    And now we get nice-looking now-playing info in Discord:

    [Screenshot: two entries in a Discord channel, listing track title in bold, album name in italics, and artist name, with a start-time timestamp and a thumbnail of the album cover.]

    Building on this

    Now that we have a working Python monitor, we can come up with a better solution for (close to) real-time updates in the iOS app.

    Instead of running the monitor itself, the app will register with the Python monitor for silent push updates. This lets us offload the CPU (and battery) intensive operations to the Python code, and only do something when the notification is pushed to the app.

    [Note: no, it’s not doing that.]

    But that’s code for next week; this week I need to get the iOS stopgap app out, and get the Python server dockerized.

  • Swift Dependency Management Adventures

    I’m in the process of (somewhat belatedly) upgrading the RadioSpiral app to work properly with Azuracast.

    The Apple-recommended way of accessing the stream metadata just does not work with Azuracast’s Icecast server – the stream works fine, but the metadata never updates, so the app streams the music but never updates the UI with anything.

    Because it could still stream (heh, StillStream) the music, we decided to go ahead and deploy. There were so many other things that Azuracast fixed for us that there was no question that decreasing the toil for everyone (especially our admin!) was going to make a huge difference.

    Addressing the problem

    Azuracast supplies an excellent now-playing API in four different flavors:

    • A file on the server that has now-playing data, accessible by simply getting the contents of the URL. This is only updated every 30 seconds or so, which isn’t really good enough resolution, and requires the endpoint be polled.
    • An API that returns the now-playing data as of the time of the request via a plain old GET to the endpoint. This is better but still requires polling to stay up to date, and will still not necessarily catch a track change unless the app polls aggressively, which doesn’t scale well.
    • Real-time push updates, either via SSE over https or websocket connection. The push updates are less load on the server, as we don’t have to go through session establishment every time; we can just use the open connection and write to it. Bonus, the pushes can happen at the time the events occur on the server, so updates are sent exactly when the track change occurs.
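    For comparison, the polling flavor is only a few lines. This is a hedged sketch — the /api/nowplaying/<station> path and the now_playing.song field names follow Azuracast’s published API, but treat them as assumptions to verify against your own server:

```python
import json
import urllib.request

def fetch_now_playing(server, station):
    # One poll of Azuracast's now-playing GET endpoint.
    url = f"https://{server}/api/nowplaying/{station}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def current_track(payload):
    # Pull artist and title out of the now-playing payload.
    song = payload["now_playing"]["song"]
    return song["artist"], song["title"]
```

Splitting the fetch from the field extraction keeps the payload-shape assumptions in one small function that’s easy to test against a captured response.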

    I decided that the websocket API was a little easier to implement. With a little help from ChatGPT to get an initial chunk of code (and a fair amount of struggling to figure out the proper parameters to send for the connection request), I used a super low-rent SwiftUI app to wrap AVAudioSession and start up a websocket client separately to manage the metadata; that basically worked and let me verify that the code monitoring the websocket was working.

    I was able to copy that code inside of FRadioPlayer, the engine that RadioSpiral uses to do the streaming, but then I started running into complications.

    Xcode, Xcode, whatcha gonna do?

    I didn’t want to create an incompatible fork of FRadioPlayer, and I felt that the code was special-purpose enough that it wasn’t a reasonable PR to make. In addition, it was the holidays, and I didn’t want to force folks to have to work just because I was.

    So I decided to go a step further and create a whole new version of the FRadioPlayer library, ACRadioPlayer, that would be specifically designed to be used only with Azuracast stations.

    Initially, this went pretty well. The rename took a little extra effort to get all the FRadio references switched over to ACRadio ones, but it was fairly easy to get to a version of the library that worked just like FRadioPlayer, but renamed.

    Then my troubles began

    I decided that I was going to just include the code directly in ACRadioPlayer and then switch RadioSpiral to the new engine, so I did that, and then started trying to integrate the new code into ACRadioPlayer. Xcode started getting weird. I kept trying to go forward a bit at a time — add the library, start trying to include it into the app, get the fetch working…and every time, I’d get to a certain point (one sample app working, or two) and then I’d start getting strange errors: the class definition I had right there would no longer be found. The build process suddenly couldn’t write to the DerivedData directory anymore. I’d git reset back one commit, another, until I’d undone everything. Sometimes that didn’t work, and I had to throw away the checkout and start over. The capper was “Unexpected error”, with absolutely nothing to go on to fix it.

    Backing off and trying a different path

    So I backed all the way out, and started trying to build up step-by-step. I decided to try building the streaming part of the code as a separate library to be integrated with ACRadioPlayer, so I created a new project, ACWebSocketClient, and pulled the code in. I could easily get that to build, no surprise, it had been building, and I could get the tests of the JSON parse to pass, but when I tried to integrate it into ACRadioPlayer using Swift Package Manager, I was back to the weird errors again. I tried for most of a day to sort that out, and had zero success.

    The next day, I decided that maybe I should follow Fatih’s example for FRadioPlayer and use Cocoapods to handle it. This went much better.

    Because of the way Cocoapods is put together, just building the project skeleton actually gave me some place to put a test app, which was much better, and gave me a stepping stone along the way to building out the library. I added the code, and the process of building the demo showed me that I needed to do a few things: be more explicit about what was public and what was private, and be a little more thoughtful about the public class names.

    A couple of hours’ work got me a working demo app that could connect to the Azuracast test station and monitor the metadata in real time. I elected to just show the URL for the artwork as text, because actually fetching the image wasn’t a key part of the API.

    I then hit the problem that the demo app was iOS-only. I could run it on macOS in emulation mode, but I didn’t have a fully-fledged Mac app to test with. (Nor did I have a tvOS one.) I tried a couple of variations on adding a new target to build the Mac app, but mostly I ended up breaking what I had working, so I eventually abandoned that.

    I then started working step by step to include the library in ACRadioPlayer. FRadioPlayer came with iOS apps (UIKit and SwiftUI), a native Mac app, and a tvOS app. I carefully worked through getting the required OS versions to match in the ACWebSocketClient podspec, the ACRadioPlayer Podfile, and the ACRadioPlayer Xcode project. That was tedious but eventually successful.

    Current status

    I’ve now got the code properly pulled in, compatible with the apps, and visible to each of the apps. I’ll now need to pull in the actual code that uses it from the broken repo (the code was fine, it was just the support structures around it that weren’t) and get all the apps working. At that point I can get both of the libraries out on Cocoapods, and then start integrating with RadioSpiral.

    In general, this has been similar to a lot of projects I’ve worked on in languages complex enough to need an IDE (Java, Scala, and now Swift): the infrastructure involved in just getting the code to build was far more trouble to work with and maintain, and consumed far more time, than writing the code itself.

    Writing code in Perl or Python was perhaps less flashy, but it was a lot simpler: you wrote the code and ran it, and it ran or it didn’t, and if it didn’t, you ran it under the debugger (or used the tests, or, worst case, added print statements) and fixed it. You didn’t have to worry about whether the package management system was working, or whether something in the mysterious infrastructure underlying the application was misconfigured or broken. Either you’d installed a module and told your code to include it, or you hadn’t. Even Go was a bit of a problem this way; you had to be very careful about how you got all the code in place.

    Overall, though, I’m pretty happy with Cocoapods and the support it has built in. Because FRadioPlayer was built using Cocoapods as its package management, I’m hoping that the process of integrating it into RadioSpiral won’t be too tough.

    [From the future: it was, and I ended up abandoning that too.]

  • JSON, Codable, and an illustration of ChatGPT’s shortcomings

    A little context: I’m updating the RadioSpiral app to use the (very nice) Radio Station Pro API that gives me access to useful stuff like the station calendar, the current show, etc. Like any modern API, it returns its data in JSON, so to use this in Swift, I need to write the appropriate Codable structs for it — this essentially means that the datatypes are datatypes that Swift either can natively decode, or that they’re Codable structs.

    I spent some time trying to get the structs right (the API delivers something that makes this rough, see below), and after a few tries that weren’t working, I said, “this is dumb, stupid rote work – obviously a job for ChatGPT.”

    So I told it “I have some JSON, and I need the Codable Swift structs to parse it.” The first pass was pretty good; it gave me the structs it thought were right and some code to parse with – and it didn’t work. The structs looked like they matched: the fields were all there, and the types were right, but the parse just failed.

    keyNotFound(CodingKeys(stringValue: "currentShow", intValue: nil), Swift.DecodingError.Context(codingPath: [CodingKeys(stringValue: "broadcast", intValue: nil)], debugDescription: "No value associated with key CodingKeys(stringValue: \"currentShow\", intValue: nil) (\"currentShow\").", underlyingError: nil))

    Just so you can be on the same page, here’s how that JSON looks, at least the start of it:

    {
    	"broadcast": {
    		"current_show": {
    			"ID": 30961,
    			"day": "Wednesday",
    			"date": "2023-12-27",
    			"start": "10:00",
    			"end": "12:00",
    			"encore": false,
    			"split": false,
    			"override": false,
    			"id": "11DuWtTE",
    			"show": {...

    I finally figured out that Swift’s synthesized Codable conformance, unlike Go’s JSON handling, requires field names that exactly match the keys in the incoming JSON (you can override this with a custom CodingKeys enum or a keyDecodingStrategy on the decoder, but neither was in play here). So if the JSON looks like {broadcast: {current_show... then the struct modeling the contents of the broadcast field had better have a field named current_show, exactly matching the JSON. (Go’s JSON parser uses struct tags to map JSON keys to field names, so having a field named CurrentShow is fine, as long as the tag says its value comes from current_show. That would look something like this:

    type Broadcast struct {
        // The field must be exported (capitalized) for encoding/json to
        // see it; the tag, not the name, maps it to the JSON key.
        CurrentShow CurrentShow `json:"current_show"`
        ...
    }
    
    type CurrentShow struct {
       ... 

    There’s no ambiguity or translation needed, because the code explicitly tells you what field in the struct maps to what field in the JSON. (I suppose you could completely rename everything to arbitrary unrelated names in a Go JSON parse, but from a software engineering POV, that’s just asking for trouble.)
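    The same explicit-mapping idea, sketched in Python for contrast (the ID and day key names come from the JSON above; everything else here is illustrative). The JSON keys are written out exactly once, so there’s no camelCase-vs-snake_case guessing:

```python
from dataclasses import dataclass

@dataclass
class CurrentShow:
    id: int
    day: str

def parse_current_show(payload: dict) -> CurrentShow:
    # The JSON keys appear literally here, Go-struct-tag style, so the
    # attribute names are free to differ from the wire format.
    show = payload["broadcast"]["current_show"]
    return CurrentShow(id=show["ID"], day=show["day"])
```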

    Fascinatingly, ChatGPT sort of knows what’s wrong, but it can’t use that information to fix the mistake! “I apologize for the oversight. It seems that the actual key in your JSON is “current_show” instead of “currentShow”. Let me provide you with the corrected Swift code:”. It then provides the exact same wrong code again!

    struct Broadcast: Codable {
        let currentShow: BroadcastShow
        let nextShow: BroadcastShow
        let currentPlaylist: Bool
        let nowPlaying: NowPlaying
        let instance: Int
    }

    The right code is

    struct Broadcast: Codable {
        let current_show: BroadcastShow  // exact match to the JSON key
        let next_show: BroadcastShow     // and so on...
        let current_playlist: Bool
        let now_playing: NowPlaying
        let instance: Int
    }

    When I went through manually and changed all the camel-case names to snake-case, it parsed just fine. (I suppose I could have just asked ChatGPT to make that correction, but after it gets something wrong that it “should” get right, I tend to make the changes myself to be sure I understood it better than the LLM.)

    Yet another illustration that ChatGPT really does not know anything. It’s just spitting out the most likely-looking answer, and a lot of the time it’s close enough. This time it wasn’t.

    On the rough stuff from the API: some fields are either boolean false (“nothing here”) or a struct. Because Swift is a strongly-typed language, this has to be dealt with via an enum and more complex parsing. At the moment, I can get away with failing the parse and using a default value if this happens, but longer-term, the parsing code should use enums for this. If there are multiple fields that do this it may end up being a bit of a combinatorial explosion to try to handle all the cases, but I’ll burn that bridge when I come to it.

  • Life in the fast lane / Surely makes you lose your mind

    I came back to the RadioSpiral iOS app after some time away (we’re trying to dope out what’s going on with metadata from various broadcast setups appearing in the wrong positions on the “now playing” screen, and we need a new beta with the test streams enabled to try things), only to discover that Fastlane had gotten broken in a very unintuitive manner. Whenever I tried to use it, it took a crack at building things, then told me I needed to update the snapshotting Swift file.

    Okay, so I do that, and the error persists. Tried a half-dozen suggestions from Stack Overflow. Error persists. I realized I was going to need to do some major surgery and eliminate all the variables if I was going to be able to make this work.

    What finally fixed it was cleaning up multiple Ruby installs and getting down to just one known location, and then using Bundler to manage the Fastlane dependencies. The actual steps were:

    1. removing rvm
    2. removing rbenv
    3. brew install ruby to get one known Ruby install
    4. making the Homebrew Ruby my default (export PATH=/usr/local/Cellar/ruby/2.7.0/bin:$PATH)
    5. rm -rf fastlane to clear out any assumptions
    6. rm Gemfile* to clean up any assumptions by the current, broken Fastlane
    7. bundle add fastlane (not gem install!) to get a clean copy into the project’s Gemfile and limit the install to just my project
    8. bundle exec fastlane init to get things set up again

    After all that, fastlane was back to working, albeit only via bundle exec, which in hindsight is actually smarter.

    The actual amount of time spent trying to fix it before giving up and removing every Ruby in existence was ~2 hours, so take my advice: be absolutely sure which Ruby you are running, and don’t install fastlane into your Ruby install itself; use Bundler. Trying to fix it with things going who knows where…well, there’s always an applicable xkcd.

    You are in a maze of Python installations, all different

  • App Store Connect usability issues

    Allow me to be the Nth person to complain about App Store Connect’s lack of transparency. I’m currently working on an app for radiospiral.net’s net radio station, and I’m doing my proper diligence by getting it beta-tested by internal testers before pushing it to the App Store. I’m using TestFlight to keep it as simple as possible (and because fastlane seems to work well with that setup).

    I managed to get two testers in play, but I was trying to add a third today and I could not get the third person to show up as an internal tester because I kept missing a step. Here’s how it went, with my mental model in brackets:

    • Go to the users and groups page and add the new user. [okay, the new user’s available now].
    • Add them to the same groups as the other tester who I got working. [right, all set up the same…]
    • Added the app explicitly to the tester. […and they’ve got the app]
    • Mail went out to the new tester. [cool, the site thinks they should be a tester] [WRONG]
    • Tester installs TestFlight and taps the link on their device. Nothing appreciable happens. [Did I set them up wrong?]
    • Delete the user, add them again. [I’ll set them up again and double-check…yes, they match]
    • They tap again. Still nothing. [what? but…]
    • Go over to the TestFlight tab and look at the list of testers. Still not there. [I added them. Why are they not there?] [also wrong]

    Much Googling and poking about got me nothing at all. Why is the user I added as an internal tester not there? They should be in the list.

    I went back to the TestFlight page, and this time I saw the little blue plus in a circle: I have to add them here too! Clicked the +, and the new user was there, waiting to be added to the internal testers.

    Sigh.

    So now I have blogged this so I can remember the process, and hopefully someone else who’s flailing around trying to figure out why internal testers aren’t showing up on the testers list will find this.

  • Reducing Google access for Pokemon GO

    Pokemon GO players on iOS: the new release today (7/12/16, in the App Store now) reduces the information it wants from your Google account from “full access” to your email and “know who you are on Google”. If you were already signed up, do this:

    • Go to accounts.google.com; log in if you’re not already logged in
    • Go to https://security.google.com/settings/security/permissions
    • Click on “Pokemon GO release”
    • Revoke privileges
    • Go to your iOS device
    • Download the updated app, wait for it to reinstall
    • Kill the app; if you don’t know how to do this, just power your phone off and back on again
    • Launch Pokemon GO; it’ll fail to get access to your account. THIS IS OK.
    • Tap “try another account”
    • Log back in with your Google username and password.
    • This time it should ask for only “know your email” and “know who you are”.

    At the time I write this, it looks like many people are doing this, as the Pokemon GO servers are rendering the server overload screen:

    [Image: the Pokémon GO server-overload screen]

    For the paranoid: It sounds like the iOS programmers just screwed up and released without reducing the account permissions request; this is not a nefarious scheme to steal all your email and Google+ naked selfies. From Niantic (via Kotaku):

    We recently discovered that the Pokémon GO account creation process on iOS erroneously requests full access permission for the user’s Google account. However, Pokémon GO only accesses basic Google profile information (specifically, your User ID and email address) and no other Google account information is or has been accessed or collected. [Emphasis mine – JM] Once we became aware of this error, we began working on a client-side fix to request permission for only basic Google profile information, in line with the data that we actually access. Google has verified that no other information has been received or accessed by Pokémon GO or Niantic. Google will soon reduce Pokémon GO’s permission to only the basic profile data that Pokémon GO needs, and users do not need to take any actions themselves.