Well, it's over. I think the mo turned out fairly well! It's too bad about the lighting in this last shot. Unfortunately, it was so dark out today that I had no natural light at all in my living room, so I either had to go with 'too dark' or 'wow, is that what you look like under a flash?' I went with the former.
I'd like to thank everyone who's donated so far; I've managed to raise $260 at this point. Donations are still open if you haven't had a chance to contribute, and I'm pretty sure they will remain so at least until the end of the week.
Thanks again everyone for helping out! As promised, a time-lapse movie of the mo growth is forthcoming, but it's a bit of work to get the photos all lined up right, so it may take me a few days to get it out. I'll be sure to post it here when it's done.
Cheers!
Tuesday, November 30, 2010
Thursday, November 11, 2010
Movember Update: Day 11
Movember Mugshot, Day 11
I'd like to say a big thanks to all the people who have donated so far! I really appreciate your contribution! For anyone who would like to donate to prostate cancer research, but hasn't had a chance yet, please drop by my personal Movember page; your help will be greatly appreciated!
Escaping Mac Mail RSS
Don't get me wrong, the Mac Mail RSS reader is pretty good. If you've just got the one computer, and don't have any desire to read your RSS feeds when you're away from home, then it's an excellent choice. It's shameful, however, that MobileMe – Apple's service for syncing your data between Mac computers – doesn't even try to sync your RSS feeds between Mail clients.
The inability to keep the RSS feeds in my Mail client in sync between my desktop and my laptop (let alone my iPad or iPhone) has led me to start looking for a replacement RSS reader. As disappointing as the lack of sync is, I quickly discovered the even more shocking reality that there's not even a way to export your RSS feeds from Mail. Go, look ... you won't find anything ... I'll wait.
As it happens, there's a standard for moving RSS feed information around called OPML (Outline Processor Markup Language), which is built on XML (eXtensible Markup Language). I'm not a big fan of XML, but there are times when it's useful, and in this case several major RSS readers have implemented OPML import/export so that you can move your RSS feeds around. It would be really handy if Apple implemented this as well.
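To give you an idea, a feed list in OPML boils down to something like this (a minimal, hand-written example; real exports usually carry a few extra attributes):

<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head>
    <title>My Feeds</title>
  </head>
  <body>
    <outline type="rss" text="Example Feed" xmlUrl="http://example.com/feed.xml"/>
  </body>
</opml>

Any reader that speaks OPML can slurp up a file like that and subscribe you to everything in it.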
Unfortunately, Apple's oversights get worse. Not only can you not sync RSS feeds between computers or export them, you can't even view the URLs of your RSS feeds within Mail so that you could manually copy them to another device or program. Go Apple!
That leaves only one option: manually digging through the files that Mail saves to keep track of your feeds. That's a task beyond the average home computer user, which effectively means that most Mac users who start using Mac Mail as their RSS reader are stuck.
Fortunately, I'm not the average home computer user. I know where to find the files that Apple writes, and I know how to read them ... so I could go in and cut and paste all those URLs into another reader. But I'm too lazy for that, so I wrote a program to do it for me. This wasn't quite as easy as I expected, since reading in the XML plist files turned out to be a bit of a trick. As much as I dislike XML, trying to describe how much I hate the XML plist format (which Mail uses for these files) would completely derail this post – Apple really screwed the pooch on that one.
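To give you just a taste of why: in an XML plist, a key and its value sit side by side as sibling elements instead of being nested together, which makes generic XML tooling surprisingly clumsy to use on it. Something along these lines (the key names here are made up for illustration; they're not the ones Mail actually uses):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>FeedTitle</key>
  <string>Some Blog</string>
  <key>FeedURL</key>
  <string>http://example.com/feed.xml</string>
</dict>
</plist>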
Eventually I gave up on the program being super-portable, and just used someone else's library to read in the plist files. I had been hoping that I could post a script here that anyone could save on their Mac and run to generate an OPML file, but the plist debacle meant that other stuff would need to be installed. Oh well.
In the end I got my OPML file, and I've imported it into Google Reader. Several of the iPad RSS readers will import their feeds from Google, which works well enough.
I'm posting my script below, free to anyone for personal use. In order to make it work you'll need to first install two libraries on your system.
Making it Go
In "Terminal" (which you can find in Applications -> Utilities), type the following two commands:
gem install builder
gem install plist
Once you've installed builder and plist, click on this link. It will download an archive file that contains the script. When that's done, in your Downloads window double-click on the "MacRSStoOPML.tar.gz" entry; that will decompress the archive and show you a new Finder window with the archive you downloaded and the script that was in it. Double-click on "MacRSStoOPML" (it'll have a grey box-like icon) to run it. And you're done! You should now have a file called "opml.xml" on your Desktop, which lists all of the RSS feeds you were reading in Mail.
Happy reading!
Wednesday, November 3, 2010
Mo Fruits, Mo Flowers, Mo Leaves, Mo Birds - Movember.
That's right, it's Movember again!
For those who don't know, the month formerly known as November has been given over to raising awareness of prostate cancer, with men the world over committing to grow new moustaches (a "Mo"). Much like running or walking for charity, men commit to growing a moustache for 30 days to raise funds for cancer research. Last year, global participation in Movember raised CDN $47 million!
I'm taking part for the first time this year, and a couple of mornings ago I got myself a nice clean shave. I'm now waiting for enough growth to come in before I decide what sort of mo I'll go for!
Hopefully, a few of you will decide to support my face fuzz with a donation, which you can easily do here. In return I'll commit to new growth on my face, and to regularly posting updates here and on my Movember page, which I'm sure will generate much controversy and hilarity.
The Movember movement aims to increase awareness of prostate cancer and to promote screening (testing before any symptoms appear), for obvious reasons. Unfortunately, here in Ontario not all men have equal access to the PSA (Prostate-Specific Antigen) test. The Ontario Ministry of Health has chosen not to cover the PSA test as an early detection measure, as it does for cervical and breast cancer screening. The Ministry web site says it best:
In men without symptoms (screening), PSA testing is not paid for by the provincial health plan. A man can have the PSA test if he is willing to pay for the test himself. However, it is hoped he will make this decision only after discussion with his Health Care Provider.

The Ministry's explanation for this is that they don't believe the current PSA test is reliable enough to be used as a screening measure. The statistics they quote say that, of every 100 men over the age of 50 who are screened, 4 or 5 will have prostate cancer: three will be detected, and one or two will go undetected by the test. In other words, the test catches something like 60 to 75 per cent of the cancers that are actually there. That seems like a fairly decent result to me, when one in six men will be diagnosed with prostate cancer in his lifetime, and twelve Canadian men die of prostate cancer daily.
Perhaps some enterprising souls will write to the Minister and encourage her to support prostate cancer screening.
In the mean time, please visit my Movember page; enjoy a few laughs at my expense, and please donate if you can.
Friday, August 6, 2010
Pay-as-you-go iPad Service in the UK
Monday, February 15, 2010
Losing My Memories
Back at the beginning of January a horrible thing happened. It was something that a lot of people fear in this day and age, but which few really believe will happen to them. It happened to me though, and I had to find a way to recover from it. Yes, I lost all of my digital photographs.
The complete details of how it happened are not terribly germane to this post, but the short story is that, while I was moving to a new computer, during the brief period when a lot of this data existed only on my backup disk, a Windows installer decided it would like to reformat that backup disk for me.
Recovering data from a reformatted drive can be tricky. Without the original filesystem information you need some special tools to even find old files, let alone reassemble them into something recognizable. But, with a bit of work I managed to get all of my images back, and this is the story of how I did that.
The whole recovery story started off with a stroke of luck. I happened to mention the demise of all of my photographs to a friend of mine, and he just happened to know of an incredibly useful tool for recovering my data. He pointed me toward TestDisk, by Christophe Grenier. TestDisk is rather badly named, I think, because testing is the least of what it can do. One of the key features that made my life far, far easier is its ability to do file-type recognition when recovering files.
When the filesystem information from a disk is lost, even if you're able to recover files, you can't always recover the file names; often that information is gone forever. That means recovered files typically wind up with some sort of coded file name (usually just a number generated by the recovery program). If you're recovering a very large disk, you can wind up with literally millions of files with completely nondescript names, and it would be completely impractical to sort through an entire disk's worth of files that way trying to find the pictures.
Fortunately, TestDisk's ability to recognize file types from the data in the file, rather than the file name, meant that I could tell it to recover only the JPEG images from the disk. That way I wound up with a set of files where I knew the type of each and every one. And it just so happens that all of the digital cameras I've owned shoot JPEG.
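The principle behind that kind of recognition is simple, even though the real tool is far more thorough about it: every JPEG file begins with the same two "magic" bytes, 0xFF 0xD8. Here's a toy check in Perl, just to illustrate the idea (my own sketch, nothing to do with TestDisk's actual code):

#!/usr/bin/perl
use strict;
use warnings;

# return true if the file starts with the JPEG start-of-image marker (0xFF 0xD8)
sub looks_like_jpeg {
    my( $path ) = @_;
    open( my $fh, '<:raw', $path ) or return 0;
    read( $fh, my $magic, 2 );
    close( $fh );
    return defined($magic) && $magic eq "\xFF\xD8";
}

die "usage: $0 file\n" unless @ARGV;
printf "%s: %s\n", $ARGV[0], looks_like_jpeg($ARGV[0]) ? 'JPEG' : 'not a JPEG';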
I knew I was still going to have a problem though. Because this was my backup disk, it contained not only my Aperture database but also all of my Time Machine data (Mac OS's backup tool). That meant a search for all JPEG images would turn up not just the pictures in my Aperture database, but also my entire web browser cache and every other little JPEG stored on the disk by various applications. When the recovery ran, I ended up with a bunch of folders holding a little under 35,000 pictures. Now what?
Well, the first thing I did was try to eliminate any duplicate images. Even though that would be a fairly simple script to write, I always google for these sorts of tools before writing them myself. Usually someone else has already written and posted the thing I need, and often it's better than what I would have written on the first try. This was just such a case, and I found a great little Perl script that would search for and remove all the duplicate images.
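The idea behind a script like that is straightforward: take a checksum of every file, and whenever two files share a checksum, keep one copy and delete the rest. A rough sketch of the approach (not the actual script I found, just an illustration):

#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5;
use File::Find;

die "usage: $0 directory\n" unless @ARGV;
my( $dir ) = shift @ARGV;
my( %seen );    # maps an MD5 digest to the first file seen with that content

find( sub {
    return unless -f $_;
    open( my $fh, '<:raw', $_ ) or return;
    my( $digest ) = Digest::MD5->new->addfile($fh)->hexdigest;
    close( $fh );
    if( exists $seen{$digest} ) {
        # same bytes as a file we've already kept, so drop this copy
        print "duplicate of $seen{$digest}: $File::Find::name\n";
        unlink $_ or warn "could not remove $File::Find::name: $!";
    } else {
        $seen{$digest} = $File::Find::name;
    }
}, $dir );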
That got me down to a little over 20,000 images. Still a lot, but far fewer than I had before.
The next step was to try to separate the original files downloaded from my cameras from all of the other random images. For that, I did write my own script. I scanned through all of the images to extract the original date/time from each one's Exif data, reorganizing the images into directories by the day the picture was taken. If an image had no original date in its Exif data, or no Exif data at all, I assumed the file was not a photograph (or at least not one of mine) and set it aside in a separate directory to be sorted through manually later.
Here's the script I used:
#!/usr/bin/perl
use strict;
use diagnostics;
use warnings;
use Date::Parse;
use File::Find;
use Image::ExifTool qw(:Public);
use POSIX qw(strftime);

my( $source_d )         = '/Users/matt/Desktop/Recovery/jpg/';
my( $base_dest_d )      = '/Users/matt/Desktop/Recovery/jpg-sorted/';
my( $dir_date_format )  = '%F';
my( $file_date_format ) = '%Y%m%d-%H%M%S';
my( $nodate_i, $nodate_d ) = (0, 00);

if( ! -d $base_dest_d )           { mkdir $base_dest_d; }
if( ! -d $base_dest_d.'NoDate/' ) { mkdir $base_dest_d.'NoDate/'; }

sub wanted {
    my( $source_file ) = $File::Find::name;
    my( $source_date, $dest_d, $target_f );

    unless( -f $source_file )          { return; }
    unless( $source_file =~ /\.jpg$/ ) { return; }

    my( $info ) = ImageInfo($source_file);
    if( $info->{DateTimeOriginal} ) {
        $source_date = str2time($info->{DateTimeOriginal});
        $dest_d   = $base_dest_d . strftime($dir_date_format, localtime($source_date));
        $target_f = strftime($file_date_format, localtime($source_date));

        # in addition to naming the file by date, give the image an index
        # number that advances if there is more than one image with the same
        # date+time
        my( $target_i ) = 0;
        while( length($target_i)<2 ) { $target_i = '0'.$target_i; }
        while( -f $dest_d.'/'.$target_f.'-'.$target_i ) {
            $target_i++;
            while( length($target_i)<2 ) { $target_i = '0'.$target_i; }
        }
        $target_f = $target_f.'-'.$target_i;
    } else {
        # images with no date/time get put into subdirs, 100 images per
        # directory to keep the directory from getting too large
        $nodate_i++;
        if( $nodate_i > 100 ) { $nodate_i = 1; $nodate_d++; }
        while( length($nodate_d)<3 ) { $nodate_d = '0'.$nodate_d; }
        $dest_d   = $base_dest_d . 'NoDate/' . $nodate_d;
        $target_f = $_;
    }

    if( ! -d $dest_d ) { mkdir $dest_d or die "failed to create dest dir $dest_d: $!"; }

    my( $final_file ) = sprintf( "%s/%s", $dest_d, $target_f );
    printf "%s: %s: %s\n", $_, $info->{DateTimeOriginal} || 'NoDate', $final_file;
    link( $_, $final_file ) or die "failed to link files $_:$final_file: $!";
}

find(\&wanted, $source_d );
This has left me with about 7,300 images sorted out into directories by the date the picture was taken, and about 13,600 in directories of images with no known shoot date. This is far more manageable! I'll probably still wind up doing a bunch of manual sorting of the images that are left, but now the task is much more approachable than it was in the beginning. It's also possible I could find some other useful piece of Exif data to sort them by.
Tuesday, February 9, 2010
So Easy, an Adult Can Understand!
The Internet, explained to your mom.
That's how I'd describe this video from EuroIX, an association of European Internet Exchange Points (IXPs). Don't worry if you don't know what an IXP is ... the video explains.
I was all set to blog about this months ago, but then they took the video down in order to update and clarify a few things. It's back up now, and so you get to enjoy its informational goodness.
Monday, February 8, 2010
Friday, February 5, 2010
Safe Driving in the Home
A creatively conceived and beautifully shot PSA, from the Sussex Safer Roads Partnership in the UK.
Wednesday, February 3, 2010
Share The Pain
Ze Frank is working on yet another cool collaborative, creative project. If you're into music creation at all, consider contributing to the Pain Pack. I've just downloaded the audio sources and will be having a listen to see if I have any inspiration of my own.
Tuesday, February 2, 2010
Motion Doesn't Stop
An incredible stop-motion piece. The time and planning (and rehearsal!) that must have gone into this is mind-boggling.
Tuesday, January 5, 2010
Old Money for a New Year
On New Year's Eve, as I was taking a cab to the train station to pop out to London to spend the night socializing at a friend's place, my cab driver leaned over while at a stop light and handed me a small bill about the size of Canadian Tire money. "Here, take this, it will give you luck!" he said. "And Happy New Year!"
As I would discover, the bill was 100 Iranian rials, worth just slightly more than 1¢ Canadian. Certainly not a large sum of money, but I appreciate the notion of handing out cash as a token of good luck. We arrived at my destination very quickly after that, so I didn't get a chance to ask whether this was a common tradition or just his own idea (and a way to dispose of some currency he couldn't possibly convert), but that wouldn't really change anything anyway.
Partying With the Greats in Space
I won't be able to make this, but I thought I'd put it out there in case anyone on the West coast is interested. The Planetary Society will be hosting an evening on the 15th with Stephen Hawking, who will be receiving the Cosmos Award, and Buzz Aldrin, who is celebrating his 80th birthday.
Tickets will go quickly ... so jump now if you're interested!
Labels: Buzz Aldrin, Cosmos, Planetary Society, space, Stephen Hawking