Likes blocktober.fun.
Idea: Create a block every day for October using Telex as the creation tool.
I had a quick try with Telex last month. This is something else!
I saw a link to Telex – AI-Assisted Authoring Environment for WordPress Blocks this week and thought I would give it a try.
A few (eek, 10) years ago I tried to make a plug-in for WordPress that would take a gif URL and an audio URL; it would then, on the fly, make a static version of the gif. Clicking that would play the gif and loop the audio. I did get it working, eventually adding a dialogue to search for gifs on Giphy & audio on Freesound. I even managed to incorporate it into the TinyMCE editor in WordPress. It never got finished, but it was fun. I even made a site for it: GifMovie.
Making that plug-in involved a big effort on my part, and a ton of searching. I’ve occasionally thought it might make a WordPress block, but didn’t know where to start. I have baby-steps PHP, JavaScript and CSS. I’ve occasionally managed to add something to WordPress that I’ve needed, mostly by creating shortcodes. Simple stuff, far short of creating a block.
To test Telex, I thought something similar might be an idea. I simplified a bit, leaving out the Freesound and Giphy searches.
On opening Telex you are shown a typical AI prompt box. But behind that is a WordPress site. I am presuming this is WordPress Playground, everything in the browser? I am not familiar enough with Playground to be sure. I put in the prompt:
I’d like a block that would allow me to add a gif from the media library. It would allow me to choose a sound from the media library. When the block loads it would show a static image from the gif, generated on the fly with JavaScript with a play button. Clicking the static image would show the gif and loop the audio file.
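Out of interest, the front-end behaviour I was asking for might be sketched something like this. This is my own rough guess at the technique, not the code Telex generated, and the function and variable names are made up for illustration:

// Rough sketch: show a static first frame of a gif, then swap in the
// animating gif and loop the audio on click. Names are illustrative.
function setUpGifBlock(container, gifUrl, audioUrl) {
  const img = new Image();
  img.src = gifUrl;
  img.addEventListener('load', () => {
    // Drawing an <img> to a canvas captures only the gif's first frame,
    // which gives us the static 'poster' version on the fly.
    const canvas = document.createElement('canvas');
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    canvas.getContext('2d').drawImage(img, 0, 0);
    container.appendChild(canvas);

    const audio = new Audio(audioUrl);
    audio.loop = true;

    canvas.addEventListener('click', () => {
      container.replaceChild(img, canvas); // the <img> animates normally
      audio.play(); // fine with autoplay rules, as it follows a click
    });
  });
}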
And off the AI went, showing me some code scrolling past and telling me how many lines of code it had written. After a while I had the block in the editor in front of me!
I could upload a gif and an MP3 to the block and it showed a preview. All looking good, I could preview the block right in the page. When I went to look at the published page it looked ok: clicking the image started the sound, but the image vanished.
So I reported this and the AI offered a fix. At that point things went a bit wrong. The page stopped loading, and restarting the whole thing failed to load the editor. After a few tries I gave up as I’d run out of time.
This evening I thought I’d try again, but on a desktop rather than my now-aging 8th-gen iPad. As this is all linked to my WordPress account I just opened the project. Getting the same problem, I reported it to the AI and it offered a fix again, to no avail. I repeated this a couple of times and tested each iteration. After a few goes everything just worked.
I downloaded the plug-in, uploaded it to a test site and it worked fine there too.
I also ran the Plugin Check plugin and it found almost no errors, presumably because this sort of plugin has fewer opportunities to make mistakes.
I guess this is as near to pure vibe coding as you get? I didn’t see any code at all in the process or discuss it with the AI. I just reported the problem. There is a code view where you can see all of the files created. They look as if they are very well organised and commented. I am sure if I were learning to make blocks this would help a lot.
The few times I’ve asked Claude.ai or ChatGPT to do some coding I’ve had more of a view and understanding of what is going on. I’ve also noticed that when ChatGPT tries to fix something it either manages straightaway or just repeatedly fails. Telex made a better job of fixing things, on this occasion at least.
I wonder if this will eventually make its way into WordPress itself? What sort of overhead would having a bunch of extra block plugins add?
I guess that this could be a good learning tool, but that might require a bit more discipline in reading the code produced and other tutorials on creating blocks. I do feel I’ve learnt something when I’ve DIYed some simple stuff. Not that I’ve retained a lot; that would need more frequent application on my part.
I am looking forward to watching the progress of Telex and seeing where it goes if it gets out of the experimental phase.
Gif my own creation, ripped from video years ago. Sound from https://samplefocus.com
A couple of days ago I saw a “guess the cubomania” challenge from Theo. I’ve had an interest in Cubomania in the past and played around with the idea a bit. After a chat with D., who gave me a few engravers, I googled a bit and guessed, wrongly, Goya.
Next I thought to ask ChatGPT. It suggested it could match by image matching techniques, gave me a fairly obviously wrong first row and ran out of credit.
I then thought to ask Claude to make me an interactive page where I could drag things around. It made a couple of not very good attempts.
I was thinking about a better prompt, when I remembered and asked:
Could we use the whole image for each piece but ‘crop’ it with css?
Claude replied:
Brilliant idea! Yes, we can absolutely use CSS to create a “window” effect where each piece shows only its portion of the full image. This is much more elegant than trying to extract individual pieces.
I was flattered, and when Claude came up with another fail I decided to abandon AI and DIY. This turned out a lot better. I started by remembering background-position and finding interact.js. The last time I did any drag and drop I dimly recall some sort of jQuery and a shim for mobile/tablets. interact.js did a grand job for my simple needs. It was probably overkill as it seems to do a lot more.
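The core of the trick is short. Here is a rough sketch of how I set it up, assuming a 300px square image cut into a 3×3 grid of 100px pieces (the file name, class name and grid size are illustrative, and interact.js is loaded on the page):

// Every piece is a div showing the whole image as its background,
// offset so only that piece's own square shows through the 'window'.
const SIZE = 100;
document.querySelectorAll('.piece').forEach((piece, i) => {
  const col = i % 3;
  const row = Math.floor(i / 3);
  piece.style.width = `${SIZE}px`;
  piece.style.height = `${SIZE}px`;
  piece.style.backgroundImage = 'url(engraving.jpg)';
  // Negative offsets slide the full image behind the window.
  piece.style.backgroundPosition = `${-col * SIZE}px ${-row * SIZE}px`;
});

// interact.js handles the dragging, including on touch devices.
interact('.piece').draggable({
  listeners: {
    move(event) {
      const t = event.target;
      const x = (parseFloat(t.dataset.x) || 0) + event.dx;
      const y = (parseFloat(t.dataset.y) || 0) + event.dy;
      t.style.transform = `translate(${x}px, ${y}px)`;
      t.dataset.x = x;
      t.dataset.y = y;
    }
  }
});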
It is pretty simple stuff, but potentially a lot of fun: different images, making cubomania puzzles, who knows. I did extend it a bit, learning about localStorage (to save any progress) and the dialog tag. All without AI, but with a few visits to the HTML reference – HTML | MDN and the odd search.
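Saving progress only took a few lines; something like this sketch, which reuses the data-x/data-y attributes from the drag code above (the storage key is made up):

// Serialise each piece's position to localStorage, restore it on load.
function saveProgress() {
  const positions = [...document.querySelectorAll('.piece')].map(p => ({
    x: p.dataset.x || 0,
    y: p.dataset.y || 0,
  }));
  localStorage.setItem('cubomania-progress', JSON.stringify(positions));
}

function restoreProgress() {
  const saved = JSON.parse(localStorage.getItem('cubomania-progress') || '[]');
  document.querySelectorAll('.piece').forEach((p, i) => {
    if (!saved[i]) return;
    p.dataset.x = saved[i].x;
    p.dataset.y = saved[i].y;
    p.style.transform = `translate(${saved[i].x}px, ${saved[i].y}px)`;
  });
}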
I had a lot of fun with this, more than if I had just managed to get either of the AIs to do the whole thing. What it did make me think is that AI chat was useful for working out what I wanted to do and how to do it. I could probably have done that bit all by myself too. Usually I just start messing about and see what happens. This points to a bit of planning; maybe typing some notes/pseudocode/outline might work for me when I am playing.
The Featured Image of this post was generated by ChatGPT in response to “I want an image of a chatbot character chatting with a person, friendly, helpful & futuristic.” It has been run through Cubomania Gif!
I was playing a little with WordPress yesterday. A while back I made the very simplest plugin to display my latest iNaturalist submissions. iNaturalist has an API, so I made a shortcode that would use JavaScript to pull in the pictures once the page loaded.
The only problem with that was that when the page loaded it just displayed a div with ‘loading’, then replaced that with the images once the script pulled them in. The ‘loading’ div appeared in the RSS feed too.
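For anyone curious, the client-side approach was along these lines. This is a simplified sketch rather than the plugin’s actual code, and I am going from memory on the API parameters and response shape:

// Replace the 'loading' div with thumbnails fetched from the
// iNaturalist API once the page has loaded.
async function loadObservations(userLogin, onDate) {
  const url = `https://api.inaturalist.org/v1/observations?user_login=${userLogin}&on=${onDate}&photos=true`;
  const data = await (await fetch(url)).json();
  const div = document.querySelector('#inaturalist'); // the 'loading' div
  div.textContent = '';
  data.results.forEach(obs => {
    obs.photos.forEach(photo => {
      const img = document.createElement('img');
      img.src = photo.url; // small square thumbnail
      img.title = obs.species_guess || '';
      div.appendChild(img);
    });
  });
}

loadObservations('troutcolor', '2024-07-30');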
I thought that it might be better to do this server side so the images would show in an RSS feed.
This worked out ok once I had remembered that lines need to end in semi-colons in PHP. It was still very basic, so I ran it past Claude.ai and asked for security and caching advice. It made a couple of suggestions which I read up a little about and implemented.
I’ve tried using AI for a few code ideas and I am beginning to see what does and doesn’t work. What doesn’t work for me is to ask it to build a whole idea. This has nearly always ended up in problems which seem to loop around. What does work is to ask for something specific. In this case I uploaded the plugin to Claude and asked it to find any security problems. It did, and suggested some fixes. I am sure that these are simple things that any WordPress developer would carry out without thinking about.
I’ve also found getting basic information around a function works well with AI. For example, Claude suggested using a transient to cache the data from the API. Asking ChatGPT to explain transients gave me a quick handle on the function. (I am sure Claude would have explained too.)
Anyway I have made some progress.
This was a good day:
The above was produced with this shortcode:
[inaturalist user="troutcolor" on="2024-07-30"]
I’d now like to add some more ideas: names looking a little prettier than the description tooltip, maybe a lightbox view with more information and a link to iNaturalist. But I am not in any rush.
The bit I enjoyed most, though, was the punchline at the end. “The irony”, wrote Claude, “is that by calling LLMs ‘artificial intelligence’, we’re not just mischaracterising what these systems do; we’re also impoverishing our understanding of what human intelligence actually is.”
Refreshing, straightforward piece from yesterday’s Observer.
I thought it might be worth noting this use of claude.ai. I’ve seen a wide variety of views on AI and its promise & pitfalls. When it comes to writing a wee bit of code I feel a lot of sympathy with Alan’s approach. But I have dabbled a bit and did so again this week.
I use gifsicle a bit for creating and editing gifs, it is a really powerful tool. I think I’ve a reasonable but limited understanding of how to use it. In the past I’ve used it for removing every second frame of the gif and adjusting the delay.
#!/bin/bash
gifsicle -U -d 28 --colors 64 "$1" `seq -f "#%g" 0 2 20` -O3 -o "$2"
This is pretty crude: you need to manually edit the number of frames and guesstimate the new delay, which will be applied to every frame.
I know gifsicle can list the delays of each frame with the --info switch, but I do not know enough bash to use that information to create a new gif. I had a good idea of the pseudo-code needed, but I reckoned that the time it would take to read the man page and google my way to the bash syntax needed was too much for me.
This week I was trying to reduce a gif I’d made from a screen recording. It turned out a bit bigger than I had hoped. I tried a couple of applications but didn’t make much of a dent. I decided to ask Claude:
I am using gifsicle/ I want to input a gif, and create a new one. Explode the gif, delete ever second frame and put an animated gif back together doubling the delay for each frame. So a gif with 20 frames will end up with 10 frames but take the same length of time. I’d like to deal with gifs that have different delays on different frames. So for example frame 1 and 2 the delays for these frames added together and applied to frame one of the new gif.
The original query had a few typos and spelling mistakes but Claude didn’t mind. After one wrong move, when Claude expected the gifsicle file name to be slightly different, I got a working script and took my 957KB gif down to 352KB; that was the image at the top of the post.
I had asked for the script to use gifsicle’s explode facility to export all of the frames, which the script did, neatly in a temporary folder. As I typed up this post, looking at my original attempt, I realised I should not have asked for the script to explode the gif, but just to grab every second frame from the original. This seemed more logical and perhaps more economical, so I asked Claude to take that approach. The final script has been quickly tested and uploaded to a gist: gif frame reduction, in case anyone would find it useful.
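The delay arithmetic at the heart of the script is simple enough. Sketched in JavaScript rather than the bash Claude actually wrote, the logic is just:

// Keep every second frame, adding the dropped frame's delay to the
// kept one, so half the frames play for the same overall time.
function mergeDelays(delays) {
  const merged = [];
  for (let i = 0; i < delays.length; i += 2) {
    merged.push(delays[i] + (delays[i + 1] ?? 0));
  }
  return merged;
}

// A 4-frame gif with uneven delays becomes 2 frames, same total length.
console.log(mergeDelays([10, 20, 15, 15])); // [30, 30]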
Of course this has added to the pile of not-quite-formed reflections on AI and whether we should have anything to do with it. I don’t feel too guilty, as I needed at least a little gifsicle know-how to get started.
While there have always been gullible adults, as a parent and educator I think the real issue here is with young people.
I had never considered people would use AI as a therapist, prophet or guru!
Irresponsible AI companies are already imposing huge loads on Wikimedia infrastructure, which is costly both from a pure bandwidth perspective, but also because it requires dedicated engineers to maintain and improve systems to handle the massive automated traffic. And AI companies that do not attribute their responses or otherwise provide any pointers back to Wikipedia prevent users from knowing where that material came from, and do not encourage those users to go visit Wikipedia, where they might then sign up as an editor, or donate after seeing a request for support. (This is most AI companies, by the way. Many AI “visionaries” seem perfectly content to promise that artificial superintelligence is just around the corner, but claim that attribution is somehow a permanently unsolvable problem.)
A good post to read or listen to at the beginning of Scottish AI in Schools week. The article does not want the stable door closed.
Bookmarked for future reading. AI in education is becoming increasingly confusing.
Education Scotland are running a week, #ScotAI25: Scottish AI in Schools 2025, with live lessons for pupils & some CPD for staff. I might try to make some of those.
I might have used ChatGPT a couple more times in school. Although it is accessible, the login options didn’t seem to be, so I’ve no history to check.
Quite a few teachers I know use it in some of these ways in a fairly casual way, like me. This is a lot easier than thinking about any ethical and moral implications.
Listened to: Learning Conversations Artificial Intelligence with Ollie Bray | Education Scotland podcast
This is the first Education Scotland podcast episode I’ve listened to. Solid food for thought. I’ve not developed any really solid ideas around AI in education, but this helped me think of some questions. Ollie compared the uptake and development of AI to other technologies:
So the take-up rate of generative AI, like ChatGPT, has been far quicker than people signing up to Facebook, you know, people adopting the internet, people getting a television, people getting radio, etc.
There was discussion of some ways that AI is already being used in schools, including what Ollie described as lots of schools doing really, really good work around the ethics of AI.
I wonder what aspects of ethics are being discussed? The one I’ve thought of most is already out of the stable: all the material scraped by AI before we got a chance to choose. I’m not particularly worried about anything I put online being gobbled up by AI, but I imagine it would be more of a concern for artists and writers who earn a living from content.
I think we also need to consider the ethics of all applications & services we use in education, especially when applications make educational design decisions or have unethical behaviour.
An interesting point was around developing AI to recreate traditional methods of education, but arguably in a more efficient way.
Ollie thinks that probably misses the point: how do we use the technology to do things that were unimaginable before?
I’ve read a bit about using AI in schools for report writing, analysing pupil data and the like, and seen a few educational AI startups offering that sort of service. Most of the teachers I’ve talked to, like myself, have used it in a very basic way, cutting down some time in making a quiz or other classroom resources. We are just using ChatGPT, Copilot, etc. in a fairly simplistic way.
The podcast talked about the need to update the Scottish Government’s technologies for learning strategy, mentioning that it would take 10 years to bring this to publication. I can see a bit of a mismatch with the speed at which technology is developing, especially AI. Can we plan that far ahead?
I used the AI application Aiko to generate the transcript to get the quotes.