Sunday, November 29, 2015

What is 'Digital Writing' anyway?

After I finally completed my Alternative CV for Digital Writing Month – one week late, but now three weeks ago – I fully intended to start participating properly by doing some 'digital writing'. I started writing a blog post containing my thoughts about what 'digital writing' is, in an attempt to find a meaningful justification for the word digital still being included. Unfortunately, an excessive workload, lack of sleep and a related stress-induced illness all got in the way, and I didn't finish my blog post or take any meaningful further part in Digital Writing Month.

Anyhow, here, right at the end of Digital Writing Month, are a few thoughts about what digital writing might be, if it isn't just what we should now call writing.

There is an interesting article by Richard Holden on the OED's public site about the origins and modern use of the word 'digital'. He mentions that we no longer use the term 'digital computer' because that is now the norm for computers, and it is the older, but now rare, analog computers which need an adjective for clarity. Likewise he expects words like photography and television to default to referring to the digital version – indeed, just a couple of years on from when the article was written, I think they already do, at least here in the U.K., where analog TV broadcasts have ended.

With writing, I feel the shift to digital technology has been both more gradual and more dramatic than the switch with photography and television. Digital photography has changed the way we look at photos to some extent, in that the default is now the computer screen rather than paper. It has also given ordinary photographers access to photomanipulation techniques which previously would have been available only to professionals or extreme enthusiasts, or just not feasible. In practice, however, the main change has been that we take more photographs and that they are more immediate: fundamentally photography has not changed for most of us, although for many people the device used is a phone rather than a camera. With television, digital transmissions have allowed us to have more channels with higher quality pictures. Catch-up services like BBC iPlayer mean we can watch programs we missed even if we forgot to video them, and of course the digital format means that when we do video programs the quality is indistinguishable from the original broadcast. In practice, though, most of us still watch television in much the same way, and the change has been less dramatic than the change with photography.

With writing, the digital change started with early word processors in the 1960s, and by the late 70s a lot of professional typing was done on word processors, which made two significant changes to the way writing was perceived. It became possible to work on a typed piece of work in a non-linear manner, and easily make changes, so handwritten drafts ceased to be mandatory, and minor errors in finished work became much less acceptable. Soon after that, personal computers made typing a basic skill rather than a professional skill, and although those of us who have never learned to type properly are usually much slower than the professionals, unlike photography, there is no quality difference between the casual amateur and professional output. Around the same time that personal computers were becoming widespread, Usenet and Gopher provided easy publishing that could reach a wide audience almost instantly.

Up to this point, most of the changes in writing could be considered analogues of the changes that came later with photography and television moving into the digital age, but then applications like HyperCard and ToolBook, and then the World Wide Web, brought the concept of hypermedia out of the academic domain and into widespread use. This was a real change in writing brought about by digital technology, because a piece of text is no longer a single linear piece, but can be part of a much greater non-linear whole. A webpage is much less of an entity than a traditional document, because it can both link to and be linked from other parts of the web in such a seamless way that it almost becomes just another page of a single gigantic document.

The digital age has also given us a much greater ability to write collaboratively, with technology like Google Docs and Hackpad breaking down the traditional barriers to collaboration. Because the most basic formats for digital text, such as ASCII or UTF-8, encapsulate almost everything that is important without being complex, it is far easier to share and remix text than other media, so cut and paste has become another way in which the digital age has changed writing.

In spite of all this, I still frequently do writing which is not in any way digital, using a pen to make notes that are only for myself. With writing, the change to digital is far less complete (for me at least) than it is with photography and television; it has also taken much longer. At the same time, I have never felt a need to describe writing as digital; it just seemed a natural change. The place where I think the digital prefix is justified is when either the writing process or the finished work goes well beyond the boundaries of the pre-digital world.

Sunday, October 04, 2015

Walter Bright, Professor Branestawm and me (a #twistedpair post)

At the top of the front page of the personal wiki I use for my work notes, I have a quote from Walter Bright (author of the Zortech/Symantec/Digital Mars C++ compiler and creator of the D programming language): "Ignore all the people who tell you it can't be done. Telling you it can't be done means you're on the right track." This quote, which has been the first thing I see every morning when I sit down at my desk for the last few years, has probably had a subtle but substantial influence on my approach to supporting teaching. YACRS, my class voting system, certainly started with lots of people telling me I was crazy to try and develop my own system, but is now being used by a lot of teaching staff at the University.

However, long before I'd heard of Walter Bright, another great influence on my approach to life, and teaching, was Professor Branestawm, and the wonderful illustrations by W. Heath Robinson that enhanced the first book. Like Professor Branestawm I like to invent and make things, and I like the things I make to be functional and useful rather than just ornamental. (I also like to have lots of pairs of spectacles1.) For me the end result of education is very closely tied to imagination and invention, because whether I am looking at an aeroplane, a biological process or a piece of software2, it is the ability of my imagination to run through how it works that is true understanding, not knowing the words needed to communicate with another engineer or biologist. Deep understanding is a prerequisite for invention, and it is my ability to invent useful bits of software for education that (I believe) shows that I understand both software development and education.

This post was inspired by Steve Wheeler's #twistedpair challenge.

  1. In addition to my everyday bifocals I have dedicated pairs of glasses for reading, using computers at home, using computers at work, reading at work, driving in bright sunlight and attending conferences. I do not yet have hunting for lost spectacles spectacles.
  2. I have degrees in Aeronautical Engineering, Bioengineering and Natural Sciences, but am now doing a PhD in Computing Science and working as a Software Engineer.

Thursday, August 27, 2015

Evaluating an online conference system - or not...

Recently I've been asked to evaluate an online conferencing system, with a view to getting it set up on an IT Services server for teaching next semester.

As a software engineer, I am really one of the last people who should be doing software evaluation. Not only am I too expert at using a wide variety of software, but I also understand and work round all the compromises and short-cuts that developers make; after all, I've made them all myself. However, I do sometimes have to evaluate software as part of my job. Most of the time this is quite straightforward, as most of the software we deal with consists of fairly simple web applications, and checking that they work usually just means checking them on a variety of browsers, including older versions of Internet Explorer.

When asked to evaluate software such as virtual conferencing systems, checking a few different browsers is not sufficient. Those that run in the browser depend on plug-ins such as Flash and/or Java, and different variants of the operating system, different graphics cards and different sound cards can all have an impact, so just testing one version of Windows on the standard University Dell hardware is really not enough.

The truth is that we are not really set up for serious software testing in the Learning Technology Unit. We have our standard desktop computers that we use for our day to day jobs, a mix of high specification Windows 7 and Mac OS X workstations, with a lot of different software installed. We also have an elderly pair of tablets – one Android and one iPad – available for testing, and our own more modern tablets and phones also get called in when necessary. Realistically we can't say we're certain something is good enough across all the wide variety of systems our students use, but we can sometimes say it is not.

Quite apart from the problems we face doing a proper evaluation, a better approach would have been to come to us with a requirement, and ask what would be a sensible solution. Between us in the LTU we've got a lot of experience doing distance collaboration, as students, teachers and researchers. In my own experience (with IMS working groups, and as an OU student) I've found that the simple solutions work best. Telephone conferencing systems are reliable and have low latency, so are excellent for audio. Telephone conferencing with a shared desktop using VNC worked well in the QTI working group, but a simple shared browsing experience might be more appropriate than VNC with a less technical group. When many of our collaborators were not native English speakers, we found a simple text messaging system worked, and avoided the problem of the native speakers talking too fast in incomprehensible regional accents.

As it is, I don't know what the requirements are other than a rather vague 'online tutorials', I don't know what the students have been told the IT requirements for the course will be, and I don't know what sort of internet access they'll have. In the end, though, all these things are probably irrelevant, because rather worryingly, although I've been asked to evaluate this conferencing system, it seems that saying it has failed the evaluation is not an option…

Also see Sarah's blog post on Vicarious Learning

Wednesday, August 19, 2015

Cross platform development - deciding what framework to use

Over the next year or so I'm expecting to develop a couple of desktop applications which I would like to be available (and both easy and pleasant to use) on different platforms. One of them, which relates to my Ph.D., will be an application similar to an IDE (Integrated Development Environment), which I would like to have running on Windows, Macintosh OS X and Linux; the other is an application for teachers, which I think effectively needs to run on both Windows and Macintosh, with Linux support being desirable but not essential.

There are a number of approaches to writing cross-platform applications. One of the easiest, of course, is just to write a web application and allow the browser writers to worry about the problems of supporting different operating systems. However, web applications do have limitations, a notable one being that it is cumbersome to share files between them and desktop applications, something which users of both my applications will need to do. Another approach, taken by a lot of the tools that I have been looking at as part of the background research for my Ph.D., is to make use of an existing IDE. The majority of these tools are built on top of Eclipse, and I suspect it would be a suitable base for my applications, but it would also bring a lot of unnecessary overhead. If I were already using Java and knew the Eclipse API I'd seriously consider it; for now, though, I'd rather invest in learning something more versatile – not every cross-platform application I write will look like an IDE.

As neither of these approaches suits my requirements, I'll be using a more traditional approach. My preferred alternative is to find a good cross-platform user interface library that works well with a programming language I already know well, and is compatible with my limited budget and my licensing preferences. A second alternative that I should also consider is to develop separate user interfaces for each platform, with shared code for the underlying application logic. However, as I am expecting to experiment with some novel control types rather than sticking to a standard set of widgets, I'd rather avoid developing each user interface separately.

I have been looking at a number of different options, starting with the assumption that I will be doing my development in either C# or C++, can live with a GPL license but would prefer something more permissive, and can get started without spending any money. Rather than jumping right in and trying to develop a fairly complicated application, I'm going to start by developing something simple which shares some characteristics with the applications I'm intending to build. This will be an editor for wiki-style markup, capable of supporting different dialects, and with an inbuilt browser view displaying the rendered output. I may end up developing this application several times, to see how different frameworks suit me; the first one I am going to try out is Eto.Forms.

Over the next few weeks I will hopefully blog about this experiment / learning experience a bit more. I will also try and get round to blogging about the other libraries and frameworks that I had a quick look at before picking Eto as my first candidate.

Saturday, April 25, 2015

Some thoughts on video lectures

Last week I went to Stirling University's learning and teaching conference. Naturally, many of the same themes that come up at the University of Glasgow conference were also discussed at Stirling. One presentation which I particularly enjoyed was from Edward Moran, who has replaced lecturing with recorded videos for one of his postgraduate courses, allowing for an increase in tutorial time. The students found some benefits with this approach, but also some disadvantages. A particular advantage was that non-native speakers felt they were better prepared for tutorials after the video lectures, where they benefited from being able to repeat sections. However, students also felt that they lost out from not being able to ask questions immediately, and from not hearing other students' questions (which they might not have thought of themselves). They also missed the social aspect of the traditional lecture, and Edward's students reported that they found the video lectures less engaging. This made me wonder about how we should be approaching the increased use of video to support distance learning and flipped teaching. Edward's videos were of the usual PowerPoint-slides-with-commentary type, which are very easy to create, but don't really make full use of the medium.

Having been a distance learner (I recently completed a degree with the Open University), I have some opinions about what sort of material works. I found that the video parts of my Open University courses worked well when they were showing something which couldn't be communicated in any other way; however, the learning material I liked most was the printed books that formed the basis of the level I and level II courses. I can see the appeal of making video lectures because it is relatively similar to the traditional way of teaching, and doesn't take up much more of the teacher's time, but maybe production of a video lecture should take up more time. After all, it should be reusable for a few years at least.

I've also taken part in a couple of MOOCs that used video, and there too I found that the videos which were just a voice and static slides were hard to concentrate on; however, something as simple as smart-board-style annotation being added to the slides during the video can make it much more engaging.

Clearly this is an area which most universities are just getting into, and there is still a lot to learn. I think there is a real need to consider what medium is best to deliver the teaching material, rather than what medium comes closest to the traditional lecture or is easiest to produce. I think we also still have a lot to learn about how to produce effective (and cost-effective) educational video. Of course, the Open University has done a lot of research on this in the past, so maybe all we need to do is go and read the literature.