Replied to Command Line — The MagPi magazine by Aaron Davis (collect.readwriterespond.com)
MagPi / Raspberry Pi put together a guide to getting going with the command line.

Hi Aaron,
This is a useful guide. I remember Oliver Quinlan, a guest on Radio EDUtalk, talking about the eloquence of the command line compared to pointing and grunting.
I enjoy using the command line, often with Raspberry Pis, but it is easy to miss some of the basics, which this guide covers well.

After seeing @adders on micro.blog posting some timelapse I thought I might have another go. My first thought was to just use the feature built into my phone. I then thought to repurpose a Raspberry Pi. This led to the discovery that two of my Pis were at school, leaving only one at home with a camera. This Pi Zero had done sterling service, taking over 1 million pictures of the sky, stitching them into 122918 gifs and posting them to Tumblr. I decommissioned that when Tumblr started mistaking these for unsuitable photos.

My first idea was to just write a simple bash script that would take a pic and copy it to my Mac. I’ve done that before; I’d just need to timestamp the image names. Then I found RPi-Cam-Web-Interface. This is really cool. It turns your Pi into a camera and a web server where you can control the camera and download the photos.
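
Something like this minimal sketch is what I had in mind; the interval, filenames and Mac address are all made up for illustration:

#!/bin/bash
# take a timestamped still once a minute (assumes the Pi camera and the raspistill tool)
while true; do
  raspistill -o "pic_$(date +%Y%m%d-%H%M%S).jpg"
  sleep 60
done

Copying the results over is then one hypothetical scp away: scp pic_*.jpg me@mymac.local:~/timelapse/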

I am fairly used to setting up a headless Pi and getting it on my WiFi now. So the next step was just to follow all the instructions from the RPi-Cam-Web-Interface page. The usual fairly incomprehensible stuff in the terminal ensued. All worked fine though.
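
For what it is worth, the headless trick I rely on is roughly this (for the older Raspberry Pi OS; the paths assume the SD card is mounted on a Mac and the network details are placeholders):

touch /Volumes/boot/ssh   # an empty file called "ssh" on the boot partition turns on the ssh server
cat > /Volumes/boot/wpa_supplicant.conf <<EOF
country=GB
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
  ssid="MyNetwork"
  psk="MyPassword"
}
EOF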

I then downloaded the folder full of images onto my mac and stitched them together with ffmpeg.

ffmpeg is a really complex beast; I think this worked OK:

First make a list of the files with:

for f in *.jpg; do echo "file '$f'" >> mylist.txt; done

then stitch them together:

ffmpeg -r 10 -f concat -i mylist.txt -c:v libx264 -pix_fmt yuv420p out.mp4

I messed about quite a bit. Resizing the images before starting made for a smaller movie, and finally I ran:

ffmpeg -i out.mp4 -vf scale=720:-2 outscaled.mp4

to make an even smaller version.
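
The resizing step was along these lines; sips ships with macOS, though the target size here is just an example:

sips -Z 1280 *.jpg   # resample in place so no side is longer than 1280 pixels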

I am now on the look out for more interesting weather or a good sunset.

[Graph of the number of Twitter clients used by schools]

I’ve talked to a fair number of teachers who find it easier to use Twitter than to blog to share their classroom learning. I’ve been thinking a little about how to make that easier, but got sidetracked wondering how schools, teachers and classes use Twitter.

If you use Twitter on the web it tells you the application used to post the tweet: at the bottom of each tweet there is the date and the app that posted it.

I’ve got a list made up of North Lanarkshire schools that I started when I was supporting ICT in the authority.

I could go down the list and count the methods, but I thought there might be a better way. I recalled having played with the Twitter API a wee bit, so searched for and found: GET lists/statuses — Twitter Developers. I was hoping there was some sort of console to use, but could not find one; a wee bit more searching found how to authenticate to the API using a token and how to generate that token: Using bearer tokens.
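
If I remember the flow right, generating the token is a single curl call to the oauth2 endpoint, with your app's key and secret standing in as placeholders here:

curl -u 'APIKeyGoesHere:APISecretGoesHere' --data 'grant_type=client_credentials' 'https://api.twitter.com/oauth2/token'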

It then didn’t take too long to work out how to pull in a pile of status updates from the list using the terminal:

curl --location --request GET 'https://api.twitter.com/1.1/lists/statuses.json?list_id=229235515&count=200&max_id=1225829860699930600' --header 'Authorization: Bearer BearerTokenGoesHere'

This gave me a pile of tweets in json format. I had a vague recollection that Google Sheets could parse json, so gave that a go. I had to upload the json somewhere I could import it into a sheet, which felt somewhat clunky. I did see some indications that I could use a script to grab the json in Sheets, but thought it might be simpler to do it all on my Mac. More searching, but I fairly quickly came up with this:

curl --location --request GET 'https://api.twitter.com/1.1/lists/statuses.json?list_id=229235515&count=200&' --header 'Authorization: Bearer BearerTokenGoesHere' | jq '.[].source' | sed -e 's/<[^>]*>//g' | sort -bnr | uniq -c | sort -bnr

This does the following:

  1. downloads the statuses in json format
  2. passes them to the jq application (which I had installed in the past), which pulls out a list of the sources
  3. passes that to sed, which strips the html tags, leaving the text (I just searched for this; I have no idea how it works)
  4. sorts the list so identical lines sit together
  5. has uniq pull out the unique entries and count them
  6. finally sorts by the counts, which gave:
119 "Twitter for iPhone"
  28 "Twitter for Android"
  22 "Twitter Web App"
   8 "Twitter for iPad"
   1 "Twitter Web Client"

This surprised me. I use my school iPad to post to Twitter and sort of expected iPads to be highest, or at least higher.

It may be that the results are skewed by the Monday and Tuesday holiday and two in-service days, so I’ll run this a few times next week and see. You can also use a max_id parameter, so I could gather more than 200 tweets (less retweeted content).
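
Paging backwards with max_id would look roughly like this. A sketch, untested, with a placeholder token; strictly you should read id_str rather than id, since jq can round numbers this big:

max_id=1225829860699930600
for page in 1 2 3; do
  curl -s "https://api.twitter.com/1.1/lists/statuses.json?list_id=229235515&count=200&max_id=$max_id" \
    --header 'Authorization: Bearer BearerTokenGoesHere' > page$page.json
  # the next request starts just below the lowest id in this batch
  max_id=$(( $(jq '[.[].id] | min' page$page.json) - 1 ))
done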

This does give me the idea that it might be worth explaining how to make posting to Glow Blogs simpler using a phone.

Update, Friday, back to school and NLC looks like:

 74 "Twitter for iPhone"
  51 "Twitter for iPad"
  18 "Twitter for Android"
  10 "Twitter Web App"
   1 "dlvr.it"

I liked the Pummelvision service, so when it went away I sort of made my own, which led to this: Flickr 2014 and DIY pummelvision and 2016 Flickring by.

I went a little early this year:

I’ve updated the script (gist) to handle a couple of new problems.

  1. Some of my iPhone photos were upside down in the video, as ffmpeg doesn’t see the Rotation EXIF. I installed jhead via Homebrew to deal with this.
  2. I installed sox to duplicate the background track, as I took more photos and slowed them down a bit this year.
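
The two fixes boil down to something like this (filenames are illustrative):

jhead -autorot *.jpg   # losslessly rotate each jpeg to match its EXIF orientation tag
sox track.mp3 track.mp3 track-doubled.mp3   # play the backing track twice, end to end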

I have great fun with this every time I try it. I quite like the results, but the tinkering with the script is the fun bit. I’m sure it could be made a lot more elegant, but it works for me.

A couple of years ago I made a video of all my Flickr photos in the style of the now dead Pummelvision service.

I dug out the script, tidied it up a little, and made the above video with my 2016 photos.

I uploaded the script in the unlikely event that someone else would want to do something like this. It is not a thing of beauty; I am well out of my depth and just type and test. The script needs ffmpeg on your computer (I’d guess Mac only, as it uses sips to resize images) and a Flickr API key.

The script also leaves you with up to 500 images in a folder. Before I deleted them I used ImageMagick to make a montage, which is the featured image on this post:

montage -mode concatenate -tile 25x *.jpg out.jpg

and to average them:

convert *.jpg -average aver.jpeg


I guess all that the average proves is that most of my photos are landscapes, given the hint of sky…

‘Points & grunt’ or ‘eloquently instruct’

A couple of weeks ago Oliver Quinlan was a guest on Radio EDUtalk. The thing that stuck in my mind the most from the episode was this idea. Oliver has now written a bit more about it on his blog.

The command prompt allows you to use the power of language to interact with a computer. In comparison, clicking around in a desktop environment is akin to pointing and grunting. Getting people to do things by pointing and grunting is OK at first, but as children we naturally put in the effort to learn how to move beyond this to get things done quicker, more precisely and more elegantly.

‘Points & grunt’ or ‘eloquently instruct’ – Language & computers – Oliver Quinlan

I’ve often struggled to explain, even to myself, why I enjoy using the terminal application. This is the best elevator pitch I’ve heard.

I am no command line expert, but I end up using it for small things or interesting experiments most days. I guess my first exposure was on the introduction of Mac OS X in 2001. At first it was something to use occasionally for system settings that could not be changed in other ways. Slowly over the last 15 (eek!) years I’ve used it a bit more and slowly learned. It is not something you need to be an expert in to get use from. For example, Batch Processing MP3 files is probably not eloquent, but it saved me a huge amount of time.

For most of the time I’ve been using the terminal I thought of it as a somewhat old-fashioned process. It is now fairly obvious that it will be in use for some time yet. This week the news that Microsoft is bringing the Bash shell to Windows 10 brought that home.

It is worth mentioning that there is an amazing amount of information on using the command line on the web. I can’t remember when a search has failed to help me learn.

Elsewhere Oliver recommended Conquer the Command Line, from The MagPi magazine and available as a free PDF, as a good resource for getting started.


Featured image: my own, grabbed with LICEcap.

[Animated word clouds made from the Twitter lists I am a member of]

Another interesting idea from Alan. I read his post: Measurement or [indirect] Indicators of Reputation? A Twitter List / Docker / iPython Notebook Journey and then Amy’s List Lurking, As Inspired by Alan Levine.

The idea is that you can find out something about a person (or yourself) by the Twitter lists they are listed in.

Alan went down a nice rabbit hole involving Docker & iPython. This seemed as if it might be a mite tricky; I think I’ve messed up my Mac’s Python setup by trying to get iPython Notebooks working before. Alan’s approach is a lot more sensible, and I hope to re-visit it later. In the meantime I thought I would try out something a little simpler. This approach is simply sorting and manipulating text files, mostly with, in my case, TextMate’s sorting and a bit of bash in the terminal.

So:

  1. I went to the list on twitter and copied all of the text on the page.
  2. Pasted that into a text document.
  3. Manually cleaned up the bits above and below the list (a couple of selections and backspace). This produced a list that repeated the following pattern:
    • Name of list by Name of lister
    • Subtitle/description of list, sometimes not there
    • Number of Members
  4. I sorted the list. This grouped all of the ‘Number of Members’ lines together, with a couple of lists whose names started with a number sorting above them.
  5. Selected all the member lines and deleted them.
  6. There were a lot of lines with Visit http://twibes.com/education/twitter-list to join the top education Twitter people as a description, so it was easy to delete them too.
  7. I saved this file as list1.txt.
  8. What I was looking for was the lines that were list names, not descriptions, and I wanted the lists rather than the names of the people who made them. So I made the lists into two columns by replacing by with a TAB and saved the file.
  9. We then sort the list by the second column using the terminal: sort -k 2 -t $'\t' list1.txt > list2.txt [1] As the description lines now have an empty second column, they float to the top and can easily be deleted.
  10. Next we cut the first column out, which gives a list of the list names: cut -f 1 list2.txt | sort > list3.txt
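
Roughly the same clean-up could be scripted in one go. This is only a sketch, assuming the raw copied text is in raw.txt, and it approximates the manual steps rather than reproducing them exactly:

grep -v 'Members' raw.txt |   # drop the member-count lines
  grep -v 'twibes.com' |   # drop the repeated twibes description lines
  sed -e $'s/ by /\t/' |   # split "Name of list by Name of lister" into two tab-separated columns
  awk -F '\t' 'NF == 2' |   # keep only the lines that actually split, i.e. the list-name lines
  cut -f 1 | sort > list3.txt   # keep just the list names, sorted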

So I now have a list of the Twitter lists I am a member of. I can use that in wordle.net to get a word cloud. I made a few, removing the most popular words to see the others in more relief, and I’ve tied them together in a gif at the top of this post.

Amy’s approach was to look for interesting list names; here are some of my favourites (I’ve added descriptions where they exist):

  • awesome rasbperrypi peopl
  • audiophiles
  • Botmakers: Blessed are the #botALLIES
  • Digital cool cats: Digital humanities/learning tech/cool stuff peeps
  • People I met through DS106
  • not to be messed with
  • Coolest UK Podcasters
  • Very funky Ed Blogs

Of course these are not the most numerous, but they are, to me, the most flattering ;-)

On this 10th birthday of Twitter you might enjoy a quick browse through the names of the lists you are a member of.

Update
Sleeping on this post I’ve had a few more thoughts.

Of course, after the step where I replaced the word by with a tab, I could have pasted the text into Excel or Numbers and taken it from there rather than using the command line.

I woke up this morning thinking about Alan’s post and using Docker to run iPython notebooks, and had a mini revelation. I’ve often run into trouble and messed up my computer, at least in the short term, trying things that I don’t really understand. I remember one instance where I got into a right mess with iPython by blindly installing.

Running things in a virtual machine would be a great advantage here. Likewise, I’ve had things break after a system update. I think, going forward, when doing things above my pay grade I’ll change my approach a bit. I am now wondering why I was trying to get the iPython thing running in the first place.

Overall I’d have learnt a bit more by following Alan’s recipe directly. There is also the json thing he turned away from, which could be an interesting rabbit hole…

[1] sort -k 2 -t $'\t' list1.txt > list2.txt This sorts by the second column; -k 2 gives the sort key, and -t $'\t' tells sort to use a tab as the column separator.