The bit I enjoyed most, though, was the punchline at the end. “The irony”, wrote Claude, “is that by calling LLMs ‘artificial intelligence’, we’re not just mischaracterising what these systems do; we’re also impoverishing our understanding of what human intelligence actually is.”
Refreshing, straightforward piece from yesterday’s Observer.
I thought it might be worth noting this use of claude.ai. I’ve seen a wide variety of views on AI and its promise & pitfalls. When it comes to writing a wee bit of code I feel a lot of sympathy with Alan’s approach. But I have dabbled a bit and did so again this week.
I use gifsicle a bit for creating and editing gifs; it is a really powerful tool. I think I’ve a reasonable but limited understanding of how to use it. In the past I’ve used it for removing every second frame of a gif and adjusting the delay.
This is pretty crude: you need to manually edit the number of frames and guesstimate the new delay, which will be applied to every frame1.
I know gifsicle can list the delays of each frame with the --info switch, but I do not know enough bash to use that information to create a new gif. I had a good idea of the pseudo code needed, but I reckoned that the time it would take to read the man page and google my way to the bash syntax needed was too much for me.
This week I was trying to reduce a gif I’d made from a screen recording. It turned out a bit bigger than I had hoped. I tried a couple of applications but didn’t make much of a dent. I decided to ask Claude:
I am using gifsicle/ I want to input a gif, and create a new one. Explode the gif, delete ever second frame and put an animated gif back together doubling the delay for each frame. So a gif with 20 frames will end up with 10 frames but take the same length of time. I’d like to deal with gifs that have different delays on different frames. So for example frame 1 and 2 the delays for these frames added together and applied to frame one of the new gif.
The original query had a few typos and spelling mistakes, but Claude didn’t mind. After one wrong move, when Claude expected the gifsicle file name to be slightly different, I got a working script and took my 957KB gif down to 352KB; that was the image at the top of the post2.
I had asked for the script to use gifsicle’s explode facility to export all of the frames, which the script did, neatly in a temporary folder. As I typed up this post, looking at my original attempt, I realised I should not have asked for the script to explode the gif, but just grab every second frame from the original. This seemed more logical and perhaps more economical, so I asked Claude to take that approach. The final script has been quickly tested and uploaded to a gist: gif frame reduction, in case anyone would find this useful.
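The delay-combining step the prompt describes can be sketched on its own. This is a hypothetical helper, not the script from the gist: given one frame delay per line (as you might extract from gifsicle --info), it prints the summed delay for each kept frame.

```shell
# sum_pairs: read one delay per line (in hundredths of a second) and
# print the combined delay for each surviving frame — frames 1+2 merge,
# frames 3+4 merge, and a trailing odd frame keeps its own delay.
sum_pairs() {
  awk 'NR % 2 == 1 { prev = $1; next }
       { print prev + $1; prev = "" }
       END { if (prev != "") print prev }'
}

# Frames with delays 10, 20, 30, 40 become two frames with 30 and 70:
printf '10\n20\n30\n40\n' | sum_pairs
```

The summed delays would then be fed back to gifsicle with per-frame -d options when rebuilding the gif.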
Of course this has added to the pile of not-quite-formed reflections on AI and whether we should have anything to do with it. I don’t feel too guilty, as I needed at least a little gifsicle knowhow to get started.
Original gif: 311KB. Reduced (every second frame) gif: 225KB.
The seq command just outputs a sequence of numbers from 0 to 20, counting up in steps of 2. The -f "#%g" makes it print a # before each number, which makes gifsicle understand it as a frame selection instead of a file name. The backticks around the seq command cause its output to be included as parameters in the gifsicle command line. I am sure I googled to work that out↩︎
There is a difference between the sizes reported by the Finder & WordPress. The WordPress ones are on the image captions. ↩︎
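The seq trick from the first footnote can be sketched like this; the 21-frame count and the commented gifsicle line are assumptions based on the footnote, not the exact command I used.

```shell
# seq -f "#%g" 0 2 20 prints #0, #2, ... #20, one per line.
# gifsicle reads arguments beginning with # as frame selections rather
# than file names, so embedding this output keeps every second frame.
frames=$(seq -f "#%g" 0 2 20)
echo "$frames"

# The crude one-liner would be roughly (needs gifsicle and input.gif;
# -d20 guesstimates a single delay applied to every frame):
#   gifsicle -d20 input.gif `seq -f "#%g" 0 2 20` -o output.gif
```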
Irresponsible AI companies are already imposing huge loads on Wikimedia infrastructure, which is costly both from a pure bandwidth perspective, but also because it requires dedicated engineers to maintain and improve systems to handle the massive automated traffic. And AI companies that do not attribute their responses or otherwise provide any pointers back to Wikipedia prevent users from knowing where that material came from, and do not encourage those users to go visit Wikipedia, where they might then sign up as an editor, or donate after seeing a request for support. (This is most AI companies, by the way. Many AI “visionaries” seem perfectly content to promise that artificial superintelligence is just around the corner, but claim that attribution is somehow a permanently unsolvable problem.)
A good post to read or listen to at the beginning of Scottish AI in Schools week. The article does not want the stable door closed.
Education Scotland are running a week, #ScotAI25: Scottish AI in Schools 2025, with live lessons for pupils & some CPD for staff. I might try to make some of those.
This week I’ve used: ChatGPT to make up some questions about a passage of text for an individual in my class; write an example text about levers; create a formula for a Numbers spreadsheet; and create a regular expression.
Claude to make a fractions matching game and a trivia quiz.
I am occasionally using lovable.dev to play around making an alternative way of posting to WordPress.
I might have used ChatGPT a couple more times in school. Although it is accessible, the login options didn’t seem to be, so I’ve no history to check.
Quite a few teachers I know use it in some of these ways in a fairly casual way, like me. This is a lot easier than thinking about any ethical and moral implications.
This is the first Education Scotland podcast episode I’ve listened to. Solid food for thought. I’ve not developed any really solid ideas around AI in education, but this helped me think of some questions. Ollie compared the uptake and development of AI to other technologies:
So the take-up rate of generative AI, like ChatGPT, has been far quicker than people signing up to Facebook, you know, people adopting the internet, people getting a television, people getting radio, etc.
There was discussion of some ways that AI is already being used in schools, including what Ollie described as “lots of schools doing really, really good work around the ethics of AI”.
I wonder what aspects of ethics are being discussed? The one I’ve thought of most is already out of the stable: all the material scraped by AI before we got a chance to choose. I’m not particularly worried about anything I put online being gobbled up by AI, but I imagine it would be more of a concern for artists and writers who earn a living from content.
I think we also need to consider the ethics of all applications & services we use in education, especially when applications make educational design decisions or have unethical behaviour1.
An interesting point was around developing AI to recreate traditional methods of education, but arguably in a more efficient way. Ollie thinks that is probably missing “how do we use the technology to do things that were unimaginable before?”
I’ve read a bit about using AI in schools for report writing, analysing pupil data and the like, and seen a few educational AI startups offering that sort of service. Most of the teachers I’ve talked to, like myself, have used it in a very basic way, cutting down some time in making a quiz or other classroom resources. We are just using ChatGPT, Copilot, etc. in a fairly simplistic way.
The podcast talked about the need to update the Scottish Government’s technologies for learning strategy, mentioning that it would take 10 years to bring this to publication. I can see a bit of a mismatch with the speed that technology is developing, especially AI. Can we plan that far ahead?
I used the AI application Aiko to generate the transcript to get the quotes.
I occasionally use AppleScript to avoid repetitive tasks. I quite enjoy writing scripts. My scripting tends to follow the guess-and-check procedure. It can be weeks or months between these efforts, so I need to look up the library or search for many basic things1.
In my part-time job, I quite often report findings or issues with the Glow Blogs system. In doing so I need to remember to include: computer model, OS version, browser and browser version. This is quite simple but requires a few clicks and a bit of typing. I thought today I might create an AppleScript to do this for me.
I then thought I might use Claude.ai to get me started2. I used the prompt:
Can you give me a n applescript that will report on the current model of mac, the version of the OS and the current application & version?
I’ve noticed that AI doesn’t seem to be bothered with my constant typos!
The first script threw an error when I tested it. So I copied the error and the line it came from and sent it back to Claude. Another error. This continued through 10 versions. Sometimes I broke off to test an idea of my own or check the AppleScript Library and reported that to Claude.
This began to feel a bit like my usual process of testing, checking and reading. But a good bit quicker. We finally got there. I just added a line to copy to the clipboard and put the script in the Script Library so I can call it with FastScripts3.
For some reason I thanked Claude:
You’re welcome! The final script successfully gets:
1. The Mac’s hardware model
2. Current OS version
3. Active application name and version
Just for future reference – we went through several approaches before landing on the most reliable method using osascript to directly query the application’s version. This ended up being more robust than trying to get the version from file metadata or application paths.
Let me know if you need any modifications or have other AppleScript questions!
At this point I only had one free message left, so I asked Claude to refactor the script into a shell script. This it did, with one error. With no more messages left, a search produced a fix pretty quickly. I ended up with this.
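For anyone reading without the embedded script, a minimal sketch of the same idea looks like this. sysctl, sw_vers and osascript are the standard macOS commands for these jobs; format_report is just a made-up helper for the layout, and errors are silenced so the sketch degrades gracefully off a Mac.

```shell
#!/bin/sh
# format_report: lay out the four report fields on separate lines.
format_report() {
  # $1 model, $2 OS version, $3 app name, $4 app version
  printf 'Model: %s\nOS: %s\nApp: %s %s\n' "$1" "$2" "$3" "$4"
}

# Gather the pieces (macOS-only commands).
model=$(sysctl -n hw.model 2>/dev/null)
osver=$(sw_vers -productVersion 2>/dev/null)
app=$(osascript -e 'tell application "System Events" to get name of first process whose frontmost is true' 2>/dev/null)
appver=$(osascript -e "tell application \"$app\" to get version" 2>/dev/null)

format_report "$model" "$osver" "$app" "$appver"
```

Piping the output through pbcopy would cover the copy-to-clipboard step.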
This is true of all the scripty-type things I do, except for HyperCard, which I used every day for years. ↩︎
I’ve been using free versions of AI for a while now. A few times a week. Mostly for school things. Creating a text for pupils, or questions on a text. Claude is my current fav for scripting. ↩︎
I think FastScripts is one of my favourite Mac applications. It is very handy for this sort of script, as you can either select it from the FastScripts menu or give it a keyboard shortcut. The front application, which we are interested in, stays at the front. ↩︎
After taking a screenshot I fed it back to the webpage (too tempting) and got:
Image Description: A screenshot of a webpage with the header “asuo-ai-labs.streamlit.app” shown in the top center. The screen is divided into two main sections. On the left, under the heading “Image and Details,” there is a section for uploading an image. A placeholder box labeled “Drag and drop file here” or “Browse files” is visible. Below it, there is a thumbnail of an image labeled “IMG_9865.jpeg” which is 2.8 MB in size. The image thumbnail shows a European Goldfinch perched on a budding branch against a clear blue sky. Below the thumbnail, there is a toggle switch labeled “Add details about the image,” which is turned on. A text box is filled in with the word “Goldfinch.”
On the right side of the webpage, there is a section labeled “Image Description.” Below it, a detailed description of the image is provided, as well as a short alt text. The image description reads: “The image features a European Goldfinch perched on a budding branch against a backdrop of a clear blue sky. The Goldfinch displays vibrant plumage with a distinctive red face, a mix of black and white on its head, a warm beige chest, and brownish wings with yellow accents. The branch, adorned with small, white buds and green leaves, stretches horizontally across the bottom right of the image, while the bird is slightly right of center. The serene sky comprises soft, light blue gradients without any visible clouds.”
The alt-text reads: “European Goldfinch perched on a budding branch against a clear blue sky.”
Alt-Text: Webpage with description and image of a European Goldfinch on a branch against a blue sky.
In the vast sea of educational blogs, The Learning Hub shines as a beacon of insightful content and valuable resources. With a dedication to excellence and a passion for learning, this blog has become a go-to destination for educators, students, and parents alike.