I’ve been thinking about my approach to the daily create. At the start of the month I was loosely connected to the Reclaim Open 2025 conference via Combobulating, where a few of my posts here were combobulated with others as a way of talking about ds106 as part of a wild web.

I didn’t manage to take part as much as I would have liked, but I had some fun and thought a bit about the daily create. I don’t take the daily part very seriously at the moment. My contributions are often old things I’ve had on my hard drive or recycled attempts at creates gone by.

Strangely, this week I’ve done more than usual. But two were recycled, one was a photo and one was a quick image edit.

Today I made more of an effort. I’d looked at the prompt in the morning and it didn’t click with me. Then I saw Kevin’s toot:

dial in a daily call us and we will inspire you to create or to remix; or maybe it will be an invitation to write a story or a poem; or perhaps a call to share a photo or a piece of art. the unexpected is part of the appeal.
call today to get inspired.

Which made me think: could I do something like the original project without any of the really hard or expensive bits? Maybe a webpage that would speak a random Daily Create? I did a bit of combobulating of some ideas and things I’d found and stored.

  • I knew that the daily create runs on WordPress and that you can access WordPress posts via an API. I’ve played with that before, so I just tested the posts endpoint in Firefox, which renders JSON nicely.
  • I thought I recalled that JavaScript can do text-to-speech, so I searched for more information and found a nice, simple example.
  • I copied a very simple PHP cache I used a while back and adapted it to pull down the posts from the daily create.
  • I copied some code from an example from Tom Woodward to get me started, pointing it at the PHP, which fetches the dailies once a day and so puts less strain on that site and speeds things up.
  • After looking for some phone images I decided to go mobile with a crude div with rounded corners.
  • Working on my Raspberry Pi meant I could edit and update quickly, so I just bashed through, borrowing and adapting some JavaScript from the speech example and Tom’s code, some CSS from the speech example, and the meta tags from a previous daily create. Since I had the content of the daily posts I added a view of those too.
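Pulled together, the whole thing boils down to a few lines of browser JavaScript: fetch the posts, pick one at random, speak it. This is only a sketch of the approach, not the actual code — the endpoint URL and helper names are my assumptions, and the real page goes through the PHP cache rather than hitting the Daily Create site directly.

```javascript
// Sketch only: fetch Daily Create posts from the standard WordPress REST
// API, pick one at random, and read it aloud with the Web Speech API.
// The endpoint URL and helper names are assumptions, not the real code.

// Build the standard WordPress REST endpoint for a site's posts.
function postsEndpoint(site, perPage = 20) {
  return `${site}/wp-json/wp/v2/posts?per_page=${perPage}`;
}

// Pick a random item from an array.
function randomItem(items) {
  return items[Math.floor(Math.random() * items.length)];
}

// WordPress returns rendered HTML; strip the tags before speaking.
function stripTags(html) {
  return html.replace(/<[^>]*>/g, " ").replace(/\s+/g, " ").trim();
}

// Browser-only part, guarded so the helpers above work anywhere.
if (typeof window !== "undefined" && "speechSynthesis" in window) {
  fetch(postsEndpoint("https://daily.ds106.us"))
    .then((response) => response.json())
    .then((posts) => {
      const post = randomItem(posts);
      const text = stripTags(post.title.rendered + ". " + post.content.rendered);
      speechSynthesis.speak(new SpeechSynthesisUtterance(text));
    });
}
```

In the real setup the fetch would point at the PHP cache instead, so the Daily Create site only gets hit once a day.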

I came up with this: TDC 5054 Phone DS106, which reads out a random daily create challenge.

Given I’d already run way over the idea of doing a TDC in 15 minutes, I stopped quite quickly. There are a lot of things I could improve:

  • A proper colour change on the button, for hanging up a call you do not like.
  • Not loading another call until the first has finished or been hung up.
  • And maybe a text button to reply to the create on mastodon.
  • Some error checking & tidy code 😉
  • A calendar view of the creates would have been cool.
  • Make it nicer looking, maybe go with a tin can telephone metaphor.
  • Is a nicer voice possible?

But life is short, I’ve learnt a bit, had some fun and perhaps I’ll get a like or two.

This sort of thing, where I take the daily create in a different direction, make it into a couple of hours play, practise some “skills” and think a bit, is my favourite type of daily create. And because the rules of DS106 are flexible & porous I feel “successful”.

Update: while I was writing this, Alan added it to the Daily Create site menus. That adds a bit of pressure to keep the Pi running and maybe tackle some of the improvements.

Featured image: public domain, from Wikimedia Commons.

AI-generated picture of an AI bot talking to a human, turned into a cubomania gif

A couple of days ago I saw a “guess the cubomania” challenge from Theo. I’ve had an interest in Cubomania in the past and played around with the idea a bit. After a chat with D., who gave me a few engravers, I googled a bit and guessed, wrongly, Goya.

Next I thought to ask ChatGPT. It suggested it could match the pieces using image-matching techniques, gave me a fairly obviously wrong first row, and then ran out of credit.

I then thought to ask Claude to make me an interactive page where I could drag things around. It made a couple of not very good attempts.

I was thinking about a better prompt, when I remembered and asked:

Could we use the whole image for each piece but ‘crop’ it with css?

Claude replied:

Brilliant idea! Yes, we can absolutely use CSS to create a “window” effect where each piece shows only its portion of the full image. This is much more elegant than trying to extract individual pieces.

I was flattered1 and when Claude came up with another fail I decided to abandon AI and DIY. This turned out a lot better. I started by remembering background-position and finding interact.js. The last time I did any drag and drop I dimly recall some sort of jQuery and a shim for mobile/tablets. interact.js did a grand job for my simple needs; it was probably overkill, as it seems to do a lot more.
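The CSS “window” trick is mostly a matter of giving every piece the full image as a background and shifting background-position by the piece’s row and column. A sketch of the idea (the grid and tile sizes below are made-up values, not the puzzle’s real ones):

```javascript
// Each tile gets the whole image as its background, offset so only its
// own cell shows through. Returns the inline style for the tile at
// (row, col) in a grid of square tiles of `size` pixels.
function tileStyle(row, col, size, cols, rows) {
  return {
    width: `${size}px`,
    height: `${size}px`,
    // Scale the background so the full image spans the whole grid.
    backgroundSize: `${cols * size}px ${rows * size}px`,
    // Negative offsets slide the big image so cell (row, col) lines up.
    backgroundPosition: `${-col * size}px ${-row * size}px`,
  };
}

// Example: tile at row 1, column 2 in a 4x4 grid of 100px tiles
// gets backgroundPosition "-200px -100px".
```

With interact.js handling the dragging, each draggable div then just needs background-image set to the full picture plus a style like the one above.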

Cubomania Solver

Partially completed sliding tile puzzle on a yellow background, featuring black and white sketch-style artwork. Some tiles are in place forming parts of faces and figures, while others are missing or scattered around the screen.
Screenshot

It is pretty simple stuff, but potentially a lot of fun: different images, making cubomania puzzles, who knows. I did extend it a bit, learning about localStorage (to save any progress) and the dialog element. All without AI, but with a few visits to the HTML reference – HTML | MDN – and the odd search.
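The localStorage part is small. Here is a sketch of the save/load logic, with the key name and data shape as my assumptions; it is written against anything with getItem/setItem, so a tiny in-memory stand-in can double for window.localStorage:

```javascript
// Sketch of saving puzzle progress: piece positions kept as a map of
// piece id -> {x, y} and serialised to JSON. The key name and data
// shape are assumptions. `store` is anything with getItem/setItem;
// in a page you would pass window.localStorage.
const KEY = "cubomania-progress";

function saveProgress(store, positions) {
  store.setItem(KEY, JSON.stringify(positions));
}

function loadProgress(store) {
  const raw = store.getItem(KEY);
  return raw ? JSON.parse(raw) : {};
}

// A minimal in-memory stand-in for localStorage, handy for testing.
function memoryStore() {
  const data = {};
  return {
    getItem: (key) => (key in data ? data[key] : null),
    setItem: (key, value) => { data[key] = String(value); },
  };
}
```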

I had a lot of fun with this, more than if I had just managed to get either of the AIs to do the whole thing. What it did make me think is that AI chat was useful for working out what I wanted to do and how to do it. I could probably have done that bit all by myself too. Usually I just start messing about and see what happens. This points to a bit of planning; maybe typing some notes/pseudocode/outline might work for me when I am playing.

  1. See: The machine began to waffle – and then the conductor went… In the paper the title was Artificial Intelligence: The Technology that lies to say yes. ↩︎

The featured image of this post was generated by ChatGPT in response to “I want an image of a chatbot character chatting with a person, friendly, helpful & futuristic.” It has been run through Cubomania Gif!

A gif of the terminal running videogrep

I’ve followed the #ds106 daily create for quite a few years now. The other day the invite was to use PlayPhrase.

PlayPhrase will assemble a clip of movie scenes all having the same phrase, a small supercut if you will.

The results are slick and amusing.

I remember creating a few Supercuts using the amazing Videogrep python script. I thought I’d give it another go. I’ve made quite a few notes on using Videogrep before, but I think I’ve smoothed out a few things on this round. I thought I might write up the process DS106 style just for memory & fun1. The following brief summary assumes you have command line basics.

I decided to just go for people saying ds106 in videos about ds106. I searched for ds106 on YouTube and found quite a few. I needed to download each video and an SRT (subtitle) file. Like most videos on YouTube, none of the ds106 videos I chose had uploaded subtitles. But you can download the auto-generated subtitles in VTT format and convert them to SRT; both the downloading and the conversion are handled by yt-dlp2.

I had installed Videogrep a long time ago, but decided to start with a clean install. I understand very little about Python and have run into various problems getting things to work. Recently I discovered that using a virtual environment seems to help. This creates a separate space to avoid problems with different versions of things. I’d be lying if I claimed I could explain much more about what these things are. Fortunately it is easy to set up and use if you are at all comfortable with the command line.

The following assumes you are in the terminal and have moved to the folder you want to use.

Create a virtual environment:

python3 -m venv venv

Turn it on:

source venv/bin/activate

Your prompt now looks something like this:

(venv) Mac-Mini-10:videos john$

You will also have a venv folder full of stuff.

I am happy to ignore this and go on with the ‘knowledge’ that I can’t mess too much up.

Install Videogrep:

pip install videogrep

I am using yt-dlp to get the videos. As usual, I was right in the middle when I realised I should have updated it before I started. I’d advise you to do that first.

You can get a video and generate an SRT file from the YouTube auto-generated subtitles:

yt-dlp --sub-lang "en" --write-auto-sub -f 18 --convert-subs srt "https://www.youtube.com/watch?v=tuoOKNJW7EY"

This should download the video and the auto-generated subtitles, and convert them to an SRT file!

I edit the video & SRT file names to make them easier to see/type.

Then you can run Videogrep:

videogrep --input ds106.mp4 --search "ds106"

This makes a file, Supercut.mp4, of all the bits of video where the text ‘ds106’ appears in the SRT file.

I did a little editing of the srt file to find and replace ds-106 with ds106, and ds16 with ds106. I think I could work round that by using a regular expression in videogrep.
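That find-and-replace step could also be scripted: a quick regex normalisation over the subtitle file before running videogrep would catch the variants in one pass. A sketch with Node (the variant list and file name are my assumptions, not videogrep’s own regex support):

```javascript
// Normalise the auto-captions' mis-hearings ("ds-106", "ds 106", "ds16")
// to "ds106" before running videogrep. The variants handled here are
// only the ones I noticed; add to the pattern as needed.
function normalise(text) {
  return text.replace(/ds[-\s]?106|ds16\b/gi, "ds106");
}

// Usage sketch with Node (file name assumed):
// const fs = require("fs");
// fs.writeFileSync("ds106.srt", normalise(fs.readFileSync("ds106.srt", "utf8")));
```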

After trying that I realised I wanted a fragment, not a whole sentence, and for that you need the VTT file. I can download that with:

yt-dlp --write-auto-sub --sub-lang en --skip-download "https://www.youtube.com/watch?v=tuoOKNJW7EY"

Then I rename the file to ds106.vtt, delete the SRT file and run:

videogrep --input ds106.mp4 --search "106" --search-type fragment

I shortened ds106 to 106, as VTT files seem to split the text into ds and 106.

I ended up with 4 nice wee Supercut files. I could have run through the whole lot at once but I did it one at a time.

I thought I could join all the videos together with ffmpeg, but ran into bother with dimensions and formats so I just opened up iMovie and dragged the clips in.

At the end, close the virtual environment with:

deactivate

Reactivate it later with:

source venv/bin/activate

This is about the simplest use of Videogrep; it can do much more interesting and complex things.

  1. I am retired, it is raining & Alan mentioned it might be a good idea. ↩︎
  2. I assume you have installed yt-dlp, GitHub – yt-dlp/yt-dlp: A feature-rich command-line audio/video downloader. As I use a Mac, I use Homebrew to install this and some other command-line tools. This might feel as if things are getting complicated. I think that is because it is. ↩︎

Likes Bop Spotter by Riley Walz.

installed a box high up on a pole somewhere in the Mission of San Francisco. Inside is a crappy Android phone, set to Shazam constantly, 24 hours a day, 7 days a week. It’s solar powered, and the mic is pointed down at the street below.

What a great idea. Webpage looks super too. via jwz.

iPod Classic screen with Radio Sandaig podcast episodes listed.

I found my old iPod last night; it took a while to get it to boot, but I recorded a microcast just for nostalgia. I used it quite a lot around 2005–9 to record podcasts with my primary classes. There seem to be some interesting crackles added this time.

Surprisingly it mounted on my Mac; I could drag the WAV file to the desktop and convert it to MP3, with no other editing.

A montage of phone lock screens showing photos of nature

I don’t usually pay a lot of attention to new features when an OS updates nowadays. But the other day I discovered the “photo shuffle” Lock Screen feature on my phone. Now every time I unlock my phone I see another random image. I picked nature as the subject. I am not sure what algorithm is picking the photos but the results are delightful.