grid of four abstract images of flowers with swirling blurs of yellow, orange, and brown, created by deliberate camera movements

So yesterday evening I was waiting for my wife to join me to watch TV. Earlier in the day I’d looked at the Daily Create, a challenge to use ICM. I read the suggested information on intentional camera movement and realised I had none of the suggested equipment. A quick search turned up several articles about using the Slow Shutter app on an iPhone. Not only did I have my phone in my hand, I already had the app.

As it was dark outside and I only had a few minutes, I played around with a vase of daffodils and a bunch of dried flowers in the living room. I can’t say I was completely enamoured with the results, but could see it might be fun to play around some more.

I thought I’d share one of the photos on Mastodon in response to the Daily Create. Looking at the pictures I’d taken, I thought a grid of four might be nice. I could have just uploaded four photos to a toot, but decided to run a shortcut to combine the images. It then turned out the shortcut was broken. I had a quick look and it seemed okay, but it was old; I’d not used it for a few years. Not being much of a Shortcuts person, I didn’t think I could fix it in the couple of minutes before my wife arrived and we would start watching.

If I’d been on my Mac I’d have had many ways of doing this in less than a minute, but I was on my phone. I briefly thought of combining the images in Keynote and exporting, but that would be a faff on the phone.

I then thought Claude AI could probably make an artefact that I could use. Of course I could have just asked Claude to make me a combined image, but that, for some reason, didn’t cross my mind. I guess there are a myriad of websites that would do the same too.

Can you make a one-page website that will allow a user to select images and turn them into a grid to download

After that, I had to report twice that the download didn’t work on my phone before I got a working webpage. I used the artefact to make my wee grid and popped it onto Mastodon.

It took me a couple more minutes to copy the code and add it to my Raspberry Pi, without leaving my chair. The Termius app let me connect to the Pi, create a new file, and paste in the code. Image Grid Creator.

I guess in a few more minutes of internet time, this sort of ridiculous workflow will be simplified and everywhere.

Like everyone else, I’ve been reading a lot for and against AI. I think it is very hard for most of us to know where this is going. I’ve not really dived in, but I’ve not ignored it. I’ve not paid for it either.

I’ve used AI to help think through options for buying a car, made a few web pages, and consulted it on shell commands and regex. I’ve enjoyed using it most when I’ve made something myself, but discussed approaches, asked for code snippets, or syntax checking of my own failed snippets.

The latter is where I’ve enjoyed it most. There is a lot of satisfaction in getting markup or simple JavaScript to work. It is not work for me but a wee hobby. Handing over to AI completely would not be much fun and would have a lot less satisfaction.

I’ve also enjoyed using telex.ai to make simple WordPress blocks. This process would be beyond my skills, but I can act as a product owner. I also know enough to stop telex running in circles.

Where this leaves us, I’ve no idea. Watching politics live on the BBC this lunchtime, I was not impressed with any of the politicians’ responses to questions around getting AI to pay for creative content it has scraped. I can’t say I’ve a better idea or understanding. Certainly interesting as well as ridiculous times!

I thought it might be worth noting this use of claude.ai. I’ve seen a wide variety of views on AI and its promise & pitfalls. When it comes to writing a wee bit of code I feel a lot of sympathy with Alan’s approach. But I have dabbled a bit and did so again this week.

I use gifsicle a bit for creating and editing gifs; it is a really powerful tool. I think I’ve a reasonable but limited understanding of how to use it. In the past I’ve used it for removing every second frame of a gif and adjusting the delay.

#!/bin/bash
gifsicle -U -d 28 --colors 64 "$1" `seq -f "#%g" 0 2 20` -O3 -o "$2"

This is pretty crude: you need to manually edit the number of frames and guesstimate the new delay, which will be applied to every frame1.
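The seq part of that command can be sanity-checked on its own; nothing here depends on gifsicle:

```shell
# seq counts from 0 to 20 in steps of 2; -f "#%g" prefixes each number
# with "#", which gifsicle reads as a frame selection, not a file name.
seq -f "#%g" 0 2 20
# prints #0, #2, #4 ... #20, one per line (eleven selections in all)
```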

I know gifsicle can list the delays of each frame with the --info switch, but I do not know enough bash to use that information to create a new gif. I had a good idea of the pseudo code needed, but I reckoned that the time it would take to read the man page and google my way to the necessary bash syntax was too much for me.
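For what it’s worth, the first step of that pseudo code, pulling the delays out of --info, only needs a couple of lines of grep and awk. The sample --info output below is hand-written from memory, so treat its exact format as an assumption:

```shell
#!/bin/sh
# Extract per-frame delays (hundredths of a second) from gifsicle
# --info output. The sample text is made up here so the parsing can be
# tried without a real gif; on a real file you would pipe
# `gifsicle --info in.gif` into the same grep/awk.
sample_info='* in.gif 3 images
  + image #0 400x300
    disposal asis delay 0.05s
  + image #1 400x300
    disposal asis delay 0.10s
  + image #2 400x300
    disposal asis delay 0.05s'

delays=$(printf '%s\n' "$sample_info" \
  | grep -o 'delay [0-9.]*s' \
  | awk '{ sub(/s$/, "", $2); printf "%.0f\n", $2 * 100 }')

echo "$delays"   # 5, 10 and 5, one per line
```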

This week I was trying to reduce a gif I’d made from a screen recording. It turned out a bit bigger than I had hoped. I tried a couple of applications but didn’t make much of a dent. I decided to ask Claude:

I am using gifsicle/ I want to input a gif, and create a new one. Explode the gif, delete ever second frame and put an animated gif back together doubling the delay for each frame. So a gif with 20 frames will end up with 10 frames but take the same length of time. I’d like to deal with gifs that have different delays on different frames. So for example frame 1 and 2 the delays for these frames added together and applied to frame one of the new gif.

The original query had a few typos and spelling mistakes but Claude didn’t mind. After one wrong move, when Claude expected the gifsicle file name to be slightly different, I got a working script and took my 957KB gif down to 352KB; that was the image at the top of the post2.

I had asked for the script to use gifsicle’s explode facility to export all of the frames, which the script did, neatly, in a temporary folder. As I typed up this post, looking at my original attempt, I realised I should not have asked for the script to explode the gif, but just grab every second frame from the original. This seemed more logical and perhaps more economical, so I asked Claude to take that approach. The final script has been quickly tested and uploaded to a gist: gif frame reduction, in case anyone would find this useful.
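For anyone curious, the idea can be sketched in shell. This is a hypothetical reconstruction of the approach, not the gist itself; the delay-merging is the interesting part, and the gifsicle call at the end assumes the same per-frame -d "#n" syntax used in the script earlier in the post:

```shell
#!/bin/sh
# Sketch of the frame-halving idea: keep every second frame, giving
# each kept frame the combined delay of the pair it replaces.
# A reconstruction, not the actual gist.

# merge_delays "5 10 5 10" -> "15 15": sum delays in pairs; an odd
# trailing frame keeps its own delay.
merge_delays() {
  set -- $1
  out=""
  while [ $# -ge 2 ]; do
    out="$out $(($1 + $2))"
    shift 2
  done
  [ $# -eq 1 ] && out="$out $1"
  echo "${out# }"
}

if [ $# -ge 2 ]; then
  in=$1 out_gif=$2
  # Per-frame delays in hundredths of a second, assuming --info prints
  # one "delay N.NNs" fragment per frame.
  delays=$(gifsicle --info "$in" | grep -o 'delay [0-9.]*s' \
    | awk '{ sub(/s$/, "", $2); printf "%.0f\n", $2 * 100 }')

  # Build "-d <merged delay> #<even frame>" arguments.
  i=0
  set --
  for d in $(merge_delays "$delays"); do
    set -- "$@" -d "$d" "#$i"
    i=$((i + 2))
  done

  gifsicle -U "$in" "$@" -O3 -o "$out_gif"
fi
```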

Of course this has added to the pile of not quite formed reflections on AI and whether we should have anything to do with it. I don’t feel too guilty, as I needed at least a little gifsicle knowhow to get started.

  1. The seq command just outputs a sequence of numbers from 0 to 20, counting up in steps of 2. The -f "#%g" makes it print a # before each number, which makes gifsicle understand it as a frame selection instead of a file name. The backticks around the seq command cause its output to be included as parameters in the gifsicle command line. I am sure I googled to work that out ↩︎
  2. There is a difference between the sizes reported by the Finder & WordPress. The WordPress ones are on the image caption. ↩︎

I occasionally use AppleScript to avoid repetitive tasks. I quite enjoy writing scripts. My scripting tends to follow a guess-and-check procedure. It can be weeks or months between these efforts, so I need to look up the library or search for many basic things1.

In my part time job, I quite often report findings or issues with the Glow Blogs system. In doing so I need to remember to include: computer model, OS version, browser, and browser version. This is quite simple but requires a few clicks and a bit of typing. I thought today I might create an AppleScript to do this for me.

I then thought I might use Claude.ai to get me started2. I used the prompt:

Can you give me a n applescript that will report on the current model of mac, the version of the OS and the current application & version?

I’ve noticed that AI doesn’t seem to be bothered with my constant typos!

The first script threw an error when I tested it. So I copied the error and the line it came from and sent it back to Claude. Another error. This continued through 10 versions. Sometimes I broke off to test an idea of my own or check the AppleScript Library and reported that to Claude.

This began to feel a bit like my usual process of testing, checking and reading. But a good bit quicker. We finally got there. I just added a line to copy to the clipboard and put the script in the Script Library so I can call it with FastScripts3.

For some reason I thanked Claude:

You’re welcome! The final script successfully gets:

1. The Mac’s hardware model

2. Current OS version

3. Active application name and version

Just for future reference – we went through several approaches before landing on the most reliable method using osascript to directly query the application’s version. This ended up being more robust than trying to get the version from file metadata or application paths.

Let me know if you need any modifications or have other AppleScript questions!

At this point I only had one free message left, so I asked Claude to refactor the script into a shell script. This it did, with one error. With no more messages left, a search produced a fix pretty quickly. I ended up with this.
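The actual script isn’t reproduced here, but a sketch along the same lines might look like this; sw_vers, sysctl, osascript and pbcopy are the macOS commands involved, and only the report formatting is portable, so treat it as a reconstruction rather than the original:

```shell
#!/bin/sh
# Sketch of the final shell version (a reconstruction, not the actual
# script): three macOS lookups plus a clipboard copy.

format_report() {
  # Assemble the bug-report boilerplate from its pieces.
  printf 'Model: %s\nOS: %s\nApp: %s %s\n' "$1" "$2" "$3" "$4"
}

if [ "$(uname)" = "Darwin" ]; then
  model=$(sysctl -n hw.model)
  os=$(sw_vers -productVersion)
  # Ask System Events for the frontmost process, then query that
  # application directly for its version (the approach Claude settled on).
  app=$(osascript -e 'tell application "System Events" to get name of first process whose frontmost is true')
  ver=$(osascript -e "tell application \"$app\" to get version")
  format_report "$model" "$os" "$app" "$ver" | pbcopy
  format_report "$model" "$os" "$app" "$ver"
fi
```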

  1. This is true of all the scripty type things I do. Except for HyperCard which I used every day for years. ↩︎
  2. I’ve been using free versions of AI for a while now. A few times a week. Mostly for school things. Creating a text for pupils, or questions on a text. Claude is my current fav for scripting. ↩︎
  3. I think FastScripts is one of my favourite Mac applications. It is very handy for this sort of script, as you can either select it from the FastScripts menu or give it a keyboard shortcut. The front application, which we are interested in, stays at the front. ↩︎