
Transfer iPhoto from a Mac on Tiger to a Mac on Leopard

I recently bought a Mac mini with Leopard (Mac OS X 10.5.6). I wanted to install the iPhoto library from my old (already 5 years old) PowerBook running Mac OS X 10.4. How to do it?

Well, I tried to use Migration Assistant. Unfortunately, the user profiles on the PowerBook and on the Mac mini are different (different user names). Migration Assistant simply created, on the Mac mini, a profile for the PowerBook user I was importing. In other words, the migrated photo library was not available under my Mac mini profile. How sad. To get the iPhoto database running, I simply copied the iPhoto library into my profile, updated the user permissions (read/write permissions on all files) and ran iPhoto.

Caution: this works, but it will overwrite any images already imported on your new computer!

Step by step (sorry, the screenshots were taken on a French system):

  • The files to import are in /Users/your-profile/Pictures/iPhoto Library (the Pictures folder shows up as “Images” on a French system).
  • Copy those files to the same place on your new computer. CAUTION: this will overwrite any existing files!
  • Right-click the “iPhoto Library” folder, choose Get Info, and give your user read & write permissions on all enclosed files.
  • Run iPhoto!
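If the Get Info dialog becomes tedious, the permission fix can also be done from the Terminal. This is a sketch on a throwaway directory: on the real library you would point LIB at your copied “iPhoto Library” folder instead, and you may need sudo chown -R first if the files still belong to the old user.

```shell
# Demonstrated on a stand-in directory; LIB is a hypothetical path.
LIB="$(mktemp -d)/iPhoto Library"
mkdir -p "$LIB"
touch "$LIB/photo.jpg"
chmod 400 "$LIB/photo.jpg"     # simulate a read-only imported file

# Give your user read/write everywhere; X keeps directories traversable
chmod -R u+rwX "$LIB"
[ -w "$LIB/photo.jpg" ] && echo "writable"
```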


Set up wifi on an MSI Wind netbook

I’ve just bought an MSI Wind U100 running SLED (SUSE Linux Enterprise Desktop). Unfortunately, this distribution is based on SUSE 10.1, which is out of date (the current SUSE version is 11!), making the installation of any application a nightmare: most dependencies can’t be resolved, since most libraries are no longer maintained for 10.1.

I decided to install Ubuntu 8.10 (Intrepid Ibex). The wifi chipset of my MSI Wind U100X (bought in December 2008) is not supported out of the box. Mine is a RaLink. Use the lspci command to identify yours:

01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02)

02:00.0 Network controller: RaLink Device 0781

The driver for the RaLink 0781 is rt2860. After some searching, here is how I installed it:

I installed first build-essential:

sudo apt-get install build-essential

Then I installed the driver itself, downloading the package from:

http://liveusb.info/ralink/rt2860sta-dkms_1.8LK_all.deb

The installation starts automatically once the .deb package is downloaded (it opens in the package installer).



I rebooted the computer to be sure that all services were restarted, and it works: I can see all my neighbors’ wifi networks!



DIY backup system

I’ve got tons of satellite images and GIS data that I don’t want to lose. But hard drives are not eternal.

My home-made backup system is a simple external USB 2.0 hard drive: PC hard drives are becoming cheap, and for a few bucks you can buy a USB 2.0 external enclosure to plug one into. Now, the only question is how to manage it.

On any *nix system (Unix, Linux, Mac OS X, …) you can use rsync to make a fast backup of your repositories.
My backup hard drive is mounted on /Volumes/archive on my Mac.
I wrote a very simple bash script:

#!/bin/sh -l
rsync -E -a -x -S --delete --progress --exclude-from=/Users/bubuitalia/exclude_from_rsync.txt /Users/bubuitalia /Volumes/archive/save

This script was saved in my home directory (/Users/bubuitalia) as backup.sh (do not forget to run chmod u+x backup.sh to make it executable). You just have to change the paths for your own installation. To run it, type ./backup.sh

This rsync command synchronizes the data in /Users/bubuitalia/ with the archive directory (/Volumes/archive/save).
The rsync commands line has the following options:

  • -E : copy extended file attributes
  • -a : archive mode
  • -x : don’t cross file-system boundaries (omits all mount-point directories from the copy)
  • -S : try to handle sparse files efficiently
  • --delete : delete extraneous files on the receiving side: if you delete something from your original data set, it will also be deleted from the archive at the next synchronization. Use this option if you want to maintain a mirror copy of your system; it is worth using to keep your archive from growing too large over time.
  • --progress : show progress during transfer
  • --exclude-from=FILE : read exclude patterns from FILE

Don’t forget: every time you run the script, any change to the original data is applied to the backup. So, if you delete a file and want to restore it, do not run the script! First retrieve the data from the archive (any data deleted on the source will be deleted on the archive at the next synchronization).

I made another file, /Users/bubuitalia/exclude_from_rsync.txt, where I listed (one entry per line) the directories I don’t want to back up:

Music/
Movies/
Library/Caches/
.Trash/

To adapt this example to your own system, simply change the source and target directories.
The rsync page is on http://rsync.samba.org/
and you can find some other examples on http://rsync.samba.org/examples.html

Some tips on HDF5 files

In general, my users ask me to export HDF-formatted images into something directly usable with desktop GIS software (GeoTIFF, Erdas Imagine, etc.).

The problem with HDF is that it is not an image format but a data container format: it is very general and can contain any type of object (variables, arrays, images…). The best way to handle this format is to write a few lines of code to browse the file’s internal table of metadata.

From the command line, you can use gdalinfo to get the metadata; it can be more or less complex depending on what was stored in the HDF file.
Let’s consider an HDF5 file whose metadata is:

Driver: HDF5/Hierarchical Data Format Release 5
Size is 512, 512
Coordinate System is `'
Subdatasets:
SUBDATASET_0_NAME=HDF5:"HDF5_LSASAF_MSG_FVC_SAme_200806100000"://FVC
SUBDATASET_0_DESC=[1511x701] //FVC (8-bit integer)
SUBDATASET_1_NAME=HDF5:"HDF5_LSASAF_MSG_FVC_SAme_200806100000"://FVC_QF
SUBDATASET_1_DESC=[1511x701] //FVC_QF (8-bit character)
SUBDATASET_2_NAME=HDF5:"HDF5_LSASAF_MSG_FVC_SAme_200806100000"://FVC_err
SUBDATASET_2_DESC=[1511x701] //FVC_err (8-bit integer)
Corner Coordinates:
Upper Left ( 0.0, 0.0)
Lower Left ( 0.0, 512.0)
Upper Right ( 512.0, 0.0)
Lower Right ( 512.0, 512.0)
Center ( 256.0, 256.0)

Since an HDF file can store different types of data in the same container, the Size reported for the file itself is not meaningful: the header says Size is 512, 512, whereas the actual images are 1511×701 pixels, as given on the SUBDATASET_0_DESC line. The same SUBDATASET_0_DESC line also gives you the data type.

The example above is about an HDF5 image, but FWTools can also handle HDF4 images.

Now, to export the image to something easier to handle, you must give gdal_translate the dataset name, not the HDF5 file name:

gdal_translate -of gtiff HDF5:"HDF5_LSASAF_MSG_FVC_SAme_200806100000"://FVC fvc.tif

This exports the dataset named FVC to a single image.
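Listing the subdatasets can itself be scripted. The sketch below only parses a gdalinfo-style listing, embedded as a here-doc so it runs without GDAL installed; in real use you would pipe the output of gdalinfo through the same filter and feed each name to gdal_translate:

```shell
# Extract subdataset names from gdalinfo output. With GDAL installed you
# would replace the here-doc with:
#   gdalinfo HDF5_LSASAF_MSG_FVC_SAme_200806100000 | sed -n '...'
SUBDATASETS=$(sed -n 's/^SUBDATASET_[0-9]*_NAME=//p' <<'EOF'
SUBDATASET_0_NAME=HDF5:"HDF5_LSASAF_MSG_FVC_SAme_200806100000"://FVC
SUBDATASET_0_DESC=[1511x701] //FVC (8-bit integer)
SUBDATASET_1_NAME=HDF5:"HDF5_LSASAF_MSG_FVC_SAme_200806100000"://FVC_QF
SUBDATASET_1_DESC=[1511x701] //FVC_QF (8-bit character)
EOF
)
echo "$SUBDATASETS"
# Each name could then be passed to gdal_translate, e.g.:
# for ds in $SUBDATASETS; do gdal_translate -of GTiff "$ds" out.tif; done
```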

About image files internal compression

Did you know that some image file formats support internal compression? The data is compressed inside the file and decompressed on the fly by the image driver when you open the image in a software package. With internal compression you do not need to zip your files to save disk space, and no temporary file is created when you read the image. It is also worth considering when your files are stored on a remote hard drive (you may save a lot of transfer time).

There are two types of compression: lossy, like JPEG, and lossless. Lossless compression allows the exact original data to be reconstructed from the compressed data; lossy compression only reconstructs an approximation of the original, in exchange for better compression rates (often used in photography).

I store all my data as GeoTIFF with internal lossless compression. GDAL offers three lossless compression schemes for GeoTIFF: LZW, Deflate and PackBits. I use LZW because it is implemented in all commercial software, but Deflate seems to give better compression rates.

Exporting an image, say image.img, to a compressed GeoTIFF, say new_image.tif, is simple:
gdal_translate -of Gtiff -co "compress=lzw" image.img new_image.tif

Let’s take the example of an NDVI image of Africa: 9633×8117 pixels, 1 byte per pixel, i.e. about 75 MB of data. With GeoTIFF + LZW compression I get files of about 30 MB (the actual size varies a bit from one image to another).
I also have surface-water detection data for the continent, typically with 5 classes: ocean, dry land and three classes of surface water. There, the GeoTIFF + LZW files are around 2 MB!

The time spent on decompression is not noticeable. In my case it is even the opposite: I have a (very) large repository of images (SPOT/VEGETATION images of Africa) stored on a remote machine, so internal compression saves network bandwidth and time: reading a 2 MB file instead of a 75 MB one makes a big difference!
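Why do the 5-class water images compress so much better than NDVI? Compression ratio depends on the entropy of the data. A quick illustration on synthetic data, using gzip (which implements the same Deflate algorithm as GDAL’s -co "compress=deflate" option):

```shell
# Compare Deflate on a "single-class" image vs. random noise.
# Both files are 1 MB of raw bytes.
head -c 1048576 /dev/zero | tr '\0' '\3' > classes.raw   # constant class value
head -c 1048576 /dev/urandom > noise.raw                 # incompressible noise
gzip -k classes.raw noise.raw
ls -l classes.raw.gz noise.raw.gz   # the class image shrinks to ~1 KB,
                                    # the noise barely compresses at all
```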

Creating dummy (empty) files

For testing shell tips (Linux and Cygwin) it is often handy to

  • work in a test directory
  • create some (tons of!) files

You can create directories with mkdir dirname. Then use touch to create (empty) files:

    touch a b c d

will create files a, b, c and d.

To create 200 files whose names start with file_, followed by a number, and end with .img, do:

    mkdir source
    cd source
    for ((num=0;num<200;num+=1)); do touch file_${num}.img ; done

Now you can test your shell tips on these files.
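The same 200 files can also be created without an explicit loop, using bash brace expansion (the directory name test_dir is just an example):

```shell
# Brace expansion generates all 200 names in a single touch call
mkdir -p test_dir
touch test_dir/file_{0..199}.img
ls test_dir | wc -l    # → 200
```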