I just saw “What is Digital Literacy? A guest blog from Andy McLaughlin, University of Aberdeen” on DigiLearn, and a pointer to the discussion on LinkedIn.

But what do we do there – if we go open source or other methods are we giving our young people the skills to enter the workplace?

Ian Gibson

Ian and John, I’d love to hear your take on the idea that Big Tech’s “efficiency agenda” has been the biggest hindrance to digital skill development.

Andy McLaughlin

There are a lot of interesting ideas popping up in the conversation. I joined in, although I don’t really have a clear idea of where I stand. Quite quickly I reached LinkedIn’s maximum character limit, so I thought I’d post here and link in there, POSSE style. Here are the rather ragged thoughts I wanted to post:

Of course in Scotland we have access to an Open Source product in the form of WordPress 😉 But I doubt there is much awareness of Open Source generally among my colleagues. As a primary teacher, I need to get my head round hundreds of experiences and outcomes, leaving little time for the reading, never mind the thinking needed in this area.

Open Source is involved in many workplaces, some even owned by ‘Big Tech’. Unfortunately Open Source and open technologies (RSS, for example) do not have an army of paid and unpaid evangelists in the way ‘Big Tech’ does.

I am not suggesting we should abandon Big Tech, but we should be able to think about the implications.

I recently quoted this:

warning parents that although they think they are giving their children access to the internet, they are really giving the internet access to their children.

BBC World Service – The Documentary Podcast, Assignment: Ireland’s phone-free town

Could we replace parents with educators, children with pupils and internet with ‘Big Tech’?

Not sure I fully grok Big Tech’s “efficiency agenda” but to my mind it might be jumping into using tech too far from the base metal? Just a few (20) years ago, I’d start teaching pupils some basic text editing, and a wee bit about the difference between bitmap and vector image software, before moving on to more complex tools. I think I’d rather see a pupil ‘misusing’ PowerPoint or Keynote to make their own creative images than cycling through possibilities in a more sophisticated tool.

I am also open to the idea that a bit of friction in your toolkit might mean spending a bit more time thinking.

And through experience and practice you start seeing all of the different parts, you start seeing all of the different muscle groups, and you learn how to separate them. And this is one of the reasons why it’s not a job of strength necessarily, because when you have a sharp knife that works just right and you know where to place it, the meat just comes apart on its own, all of the different muscles. I mean, I don’t know how graphic I can get — as graphic as you like —

Okay, so you have say two pieces of muscle and inside, in between is the seam, you can cut on the top of the seam, and if you cut just right in two pieces of muscles, you can just pull it apart with your hands.

Martina

Jeremy (@jeremycherfas) this reminds me of Chuang-tzu’s Cook Ting:

I rely on Heaven’s structuring, cleave along the main seams, let myself be guided by the main cavities, go by what is inherently so. A ligament or tendon I never touch, not to mention solid bone. A good cook changes his chopper once a year, because he hacks. A common cook changes it once a month, because he smashes. Now I have had this chopper for nineteen years, and have taken apart several thousand oxen, but the edge is as though it were fresh from the grindstone. At that joint there is an interval, and the chopper’s edge has no thickness; if you insert what has no thickness where there is an interval, then, what more could you ask, of course there is ample room to move the edge about. That’s why after nineteen years the edge of my chopper is as though it were fresh from the grindstone.

The Seven Inner Chapters and Other Writings from the Book ‘Chuang-tzu’ by Chuang-tzu, A. C. Graham

A gif of the micro:bit simulator

I just found this post in my drafts, last edited September 2023. The bug mentioned is still there, and I think it is worth remembering the creativity shown by my class.

For the past couple of weeks, we have been working on a micro:bit project in class. One of the interesting aspects of working with a diverse group is the need to introduce new ideas and topics to the class, especially when some students already have experience in the area. This year, my class is Primary 5, 6, and 7. The Primary 7s have already had some exposure to micro:bit and other block coding environments.

I decided to focus on the virtual pets project from the MakeCode site for our first project. In my classroom, we use the micro:bit app on our iPads for coding, and both the app and the micro:bits have had a couple of useful improvements since the last session. The ability to download data to the iPad has been introduced; we haven’t explored that yet, but I hope to do so soon. Another improvement is a simpler method for resetting the micro:bit when connecting via Bluetooth, making it more accessible for smaller fingers.

I introduced the topic by discussing Tamagotchi pets, which I remember being popular in my class over two decades ago. The virtual pet project is a lot simpler in comparison, with just two features: when a pet is stroked, it smiles, and when it’s shaken, it frowns.
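For anyone curious, this is roughly what those two behaviours look like in the JavaScript view of MakeCode. We built them with blocks, so this is just my sketch of the equivalent code, and it assumes a micro:bit V2 for the touchable logo:

```typescript
// Stroke the pet (touch the gold logo) and it smiles.
input.onLogoEvent(TouchButtonEvent.Pressed, function () {
    basic.showIcon(IconNames.Happy)
})

// Shake the pet and it frowns.
input.onGesture(Gesture.Shake, function () {
    basic.showIcon(IconNames.Sad)
})
```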

The limited functionality left plenty of room for exploration for the five Primary 7s with experience, and plenty of time for me to make sure the others managed the basics: linking to the micro:bit, flashing code and so on. I did drop the words food and health into the conversation but didn’t go any further.

We ended up having three sessions with the micro:bits and I was delighted with the results. Between them the sevens recalled variables from last year and were off. We finished with pets needing fed to avoid death, being sick if fed too much and getting annoyed if they were petted too much. The younger pupils managed the basics and extended them in simpler ways, animating chewing and drinking or reacting to different buttons.
One Primary 6 who had previous experience did just as well as the Primary 7s; his pet had these features:

  • Sleep
  • Be happy if stroked (press logo)
  • Be sad if shaken
  • Die (wait long enough and don’t feed it)
  • Be sick if fed too much (can be cured using B)
  • Be scared (by making a noise/blowing/pushing the red sound bar past 180)
  • Be fed (using A)
  • Get a health check (A+B)
micro pets on the Banton Biggies

The first three were part of the class instructions, taken from the MakeCode site; the rest were pupil ideas.
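Out of curiosity, I roughed out how a few of those pupil ideas might fit together in the JavaScript view. To be clear, this is my own hypothetical sketch, not the pupils’ code: the variable names and thresholds are mine, and it assumes a micro:bit V2 for the logo touch and microphone.

```typescript
let hunger = 0
let alive = true

// Only a loud noise (the red sound bar past 180) should scare the pet.
input.setSoundThreshold(SoundThreshold.Loud, 180)

// Stroked (logo touched) -> happy.
input.onLogoEvent(TouchButtonEvent.Pressed, function () {
    if (alive) basic.showIcon(IconNames.Happy)
})

// Shaken -> sad.
input.onGesture(Gesture.Shake, function () {
    if (alive) basic.showIcon(IconNames.Sad)
})

// A feeds the pet; overfeeding makes it sick.
input.onButtonPressed(Button.A, function () {
    hunger -= 3
    if (hunger < -5) basic.showIcon(IconNames.Confused)
})

// B cures the sickness.
input.onButtonPressed(Button.B, function () {
    if (hunger < 0) hunger = 0
})

// A+B gives a health check.
input.onButtonPressed(Button.AB, function () {
    basic.showNumber(hunger)
})

// A loud noise scares the pet.
input.onSound(DetectedSound.Loud, function () {
    if (alive) basic.showIcon(IconNames.Surprised)
})

// Hunger creeps up over time; ignore the pet long enough and it dies.
basic.forever(function () {
    basic.pause(5000)
    if (alive) {
        hunger += 1
        if (hunger > 10) {
            alive = false
            basic.showIcon(IconNames.Skull)
        }
    }
})
```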

Of course lots of mistakes were made along the way, but it was great to see solutions worked out, shared with neighbours and lights go on. Quite a few pupils used wee bits of free time to explore and test ideas completely independently.

On Friday we went to post about the work on our e-portfolios and embed the pets in the blogs. Editing the shortcode to do this is a bit tricky, and we also ran into a problem with the simulator not embedding properly: half the micro:bits were hidden. At first I thought this would be a problem with Glow Blogs, but later investigation showed it to be a problem with the MakeCode code. We worked around it by embedding the editor rather than the simulator. I do hope the MakeCode folk sort this out; since it affects their documentation too, I expect they will.

I continue to be a micro:bit fan and will be using them throughout the year, hopefully incorporating them into our makerspace projects too.

Webpage with description and image of a European Goldfinch on a branch against a blue sky.

Image Accessibility Creator · Streamlit

Impressive AI-generated descriptions and alt text from images. I found that adding even one word helped.

Via Doug Belshaw: How to easily generate image descriptions and alt text | Thought Shrapnel

After taking a screenshot I fed it back to the webpage (too tempting) and got:

Image Description: A screenshot of a webpage with the header “asuo-ai-labs.streamlit.app” shown in the top center. The screen is divided into two main sections. On the left, under the heading “Image and Details,” there is a section for uploading an image. A placeholder box labeled “Drag and drop file here” or “Browse files” is visible. Below it, there is a thumbnail of an image labeled “IMG_9865.jpeg” which is 2.8 MB in size. The image thumbnail shows a European Goldfinch perched on a budding branch against a clear blue sky. Below the thumbnail, there is a toggle switch labeled “Add details about the image,” which is turned on. A text box is filled in with the word “Goldfinch.”

On the right side of the webpage, there is a section labeled “Image Description.” Below it, a detailed description of the image is provided, as well as a short alt text. The image description reads: “The image features a European Goldfinch perched on a budding branch against a backdrop of a clear blue sky. The Goldfinch displays vibrant plumage with a distinctive red face, a mix of black and white on its head, a warm beige chest, and brownish wings with yellow accents. The branch, adorned with small, white buds and green leaves, stretches horizontally across the bottom right of the image, while the bird is slightly right of center. The serene sky comprises soft, light blue gradients without any visible clouds.”

The alt-text reads: “European Goldfinch perched on a budding branch against a clear blue sky.”

Alt-Text: Webpage with description and image of a European Goldfinch on a branch against a blue sky.

In reply to gratefully inspired by joe jenett.

Hi Joe, glad you are good. Your dedication to linking certainly inspires me!

Thanks for the comment. I think of taking photos more like a notepad than ‘photography’. I use my camera for recall, identifying things and quite often instead of binoculars. I use a bridge camera with a 60x zoom. Always shoot auto. My framing is often pot luck especially zoomed in. I occasionally think I should get a DSLR (or some-such) and a big lens, but I’d never learn to use it and could not really justify the cost.

Scots Pines in the dark

This morning I joined in with the Reveil Dawn Chorus Day 2024.

Reveil (2014—) is a collective production by streamers at listening points around the earth. Starting on the morning of Saturday 4 May in South London near the Greenwich Meridian, the broadcast will pick up feeds one by one, tracking the sunrise west from microphone to microphone, following the wave of intensified sound that loops the earth every 24 hours at first light.

I was streaming from Victoria Park, a short walk from our flat. I was on my way by 4:00.

I had to change my audio setup a little this year: a new phone put my Lightning mic out of action, so I used an H4n recorder as a mic.

It was dark and overcast and the ground was pretty wet, but it didn’t rain. All the way to the park the blackbirds were already singing loudly. A shadowy fox seemed to be around a bin, but no one else was about. There was a bit of traffic on the Expressway making a noise.

Audio setup in the dark: H4n recorder with dead cat, connected to iPhone
I got set up and was streaming by 4:30.

An old SIM-less iPhone 6, using my broadcasting phone as a hotspot, allowed me to listen at the same time. It is quite strange listening to local birds in one ear and ones that are very far away in the other.

Unfortunately, although I could hear the main stream, the IRC widget on the page didn’t work for me. When I tested it earlier it was fine, but this morning it wanted a password…

A short clip of my stream.

I could listen, see what was playing, look at the schedule and work out that it might be running a little behind, so I kept my broadcast going. I think just after 5:30 I heard a shout out for my stream. Unfortunately, by that time a helicopter was flying around the park and the result was pretty unpleasant. The broadcast moved quite swiftly on to the next stream.

A clip with the helicopter; I think this is what was broadcast, around a minute in!
Dawn view, audio set up with park in background. A path through an avenue of trees.
Dawn View ~ 5:30am

Reveil was a good excuse/motivator to get up and listen for a while. I found it hard to listen for and identify any local birds while listening to the other sites; my bird song knowledge is not good. The old iPhone 6 wouldn’t let me use the Merlin app for sound ID. I am also interested in distributed collaboration, so the whole process was good to see. There were some amazing broadcasts from around the world.

Listening back to some of my recordings, I can hear just how much traffic and noise pollution there is, even leaving the helicopters aside.

If this is on next year and I join in, I might try going further afield to somewhere with less traffic, and think about a better second device. I suppose I could also try getting up at 3:30am all by myself, but I am not sure I’ve got the willpower, or that my wife would understand.