Monday, January 24, 2011

old web site migration

I have a very old web site on the server that is being replaced, and it has not been maintained for years. It was mostly an experimentation space, and I doubt it was used. The migration strategy I settled on is to copy pieces over, or mark them to return HTTP 410 Gone, as I find 404 errors in the logs.
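
Marking a retired page as gone can be done in the Apache configuration with mod_alias. A minimal sketch, using hypothetical paths:

```apache
# return 410 Gone for a single retired page
Redirect gone /experiments/old-demo.html

# or retire a whole subtree at once
RedirectMatch gone ^/experiments/.*
```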

A quick Perl script that extracts error responses from the access log follows.

#!/usr/bin/perl -w
use strict;

while(<>) {
    # combined log format: host ident user [date] "request" status bytes "referer" "agent"
    # the byte count can be "-" for responses with no body, so match \S+ rather than \d+
    my($status) = m{^\S+ \S+ \S+ \[.*?\] ".*?" (\d+) \S+ ".*?" ".*?"};
    
    if(not defined $status) {
        print "failed: $_";
        next;
    }
    
    # print client (4xx) and server (5xx) error responses
    if($status =~ m{^[45]}) {
        print $_;
    }
}

I probably haven't updated that site in over two years. I have also considered dropping the web site completely, which may still happen after watching the access logs for a while.

Sunday, January 16, 2011

Backup MySQL to a file per database

The objective is to make a backup of a MySQL database server, and end up with a file for each database, named based on the database.

This is part of a server migration where not all of the databases will be created on the target host, and some will be renamed as they are moved to the new host.

It turned out that a single shell pipeline does the task:
mysql --user=root --password=password --batch --skip-column-names --execute 'show databases' |
while read x ; do
 echo "dumping $x..."
 mysqldump --user=root --password=password --all "$x" > "$x".sql
done
The script was run in the target directory for the backup files.
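
One caveat: 'show databases' also lists MySQL's internal databases, which usually should not be restored onto another host. A variant of the loop that skips them (the exact list depends on the server version):

```shell
mysql --user=root --password=password --batch --skip-column-names --execute 'show databases' |
grep -vE '^(information_schema|performance_schema|mysql)$' |   # skip internal databases
while read x ; do
 echo "dumping $x..."
 mysqldump --user=root --password=password --all "$x" > "$x".sql
done
```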

The backup run also revealed that one of the databases had some corruption, which was fixed with a quick invocation of the mysqlrepair command.

DNS Server basic setup

The objective is to set up a name server on an Ubuntu server that serves some domains to the Internet. The domains being served are too complex to be managed by the provider of the domain name, so they are hosted on a home server with a dynamic IP.

The environment is Ubuntu 10.10 (Maverick Meerkat) Desktop freshly installed and updates done.

In the process of offloading as much DNS responsibility as possible to external services, I found that MX and CNAME records clash: a CNAME may not coexist with other records, so if a domain has MX records, the apex of the domain cannot have a CNAME. Instead I used an A record that points at the yi.org URL redirector server; in the future I may update the A record dynamically.

Given that, the following are the minimal steps required to configure the name server.

Install the name server software:
sudo apt-get install bind9

Set up the zone file. The records at the apex of the zone should also include the records provided by the external services.
/etc/bind/db.happy.yi.org:
$TTL 604800
@ 3600 IN SOA happy.yi.org. happy.happy.yi.org. (
 2011011601 ; serial
 604800 ; refresh
 86400 ; retry
 2419200 ; expire
 3600 ) ; default ttl
@ 86400 IN NS sunriseyoga.dyndns.org.
@ 3600 IN A 173.203.238.64
@ 86400 IN MX 10 ASPMX.L.GOOGLE.COM.
@ 86400 IN MX 20 ALT1.ASPMX.L.GOOGLE.COM.
www 86400 IN CNAME sunriseyoga.dyndns.org.
vnc 86400 IN CNAME sunriseyoga.dyndns.org.
mail 86400 CNAME ghs.google.com.
pages 86400 CNAME ghs.google.com.
docs 86400 CNAME ghs.google.com.
sites 86400 CNAME ghs.google.com.
site 86400 CNAME ghs.google.com.
app 86400 CNAME ghs.google.com.
blog 86400 CNAME ghs.google.com.
feather-wiki 86400 CNAME ghs.google.com.
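
Before telling the name server to load the zone, the file can be syntax-checked with named-checkzone (shipped with BIND9; on Ubuntu it is in the bind9utils package). This assumes the file path used above:

```shell
# verify the zone file parses and the records are sane
named-checkzone happy.yi.org /etc/bind/db.happy.yi.org
```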

Tell the name server to load the zone by adding the following line to /etc/bind/named.conf.local:
zone "happy.yi.org" { type master; file "/etc/bind/db.happy.yi.org"; };

Reload the name server:
sudo /etc/init.d/bind9 reload

Now the domain is being served, and it will be reachable from the Internet once the NS records for it point at this server.

If you want to know more, read the Ubuntu BIND9 Server HOWTO.

Wednesday, January 12, 2011

encoding mono video for windows

I was trying to figure out why one of the computers at school would not play the sound in a media file. The file played fine on my laptop and on the instructor's office computer, but not on the classroom computer.

After trying various players, and converting the media file to various formats, it still would not play with sound. I then tried other media files that I had on my laptop, and they appeared to play fine. Eventually I noticed a light crackle after a fairly low quality conversion, which was a hint that it might have to do with audio channels. At that point I tried mono output in VLC and the sound came out clear.

After making a tree of the files that failed to play in the same way, I converted the whole batch of videos using the following bit of shell code. The codecs I used are ones that stock Windows understands.

# low quality mono (256kb)
#ABR=64kb
#VBR=192kb
# medium quality mono (512kb)
#ABR=64kb
#VBR=448kb
# high quality mono (1024kb)
ABR=64kb
VBR=960kb

find . -type d -print |
while read x ; do
  mkdir -p ../out/"$x"
done

find . -type f |
while read file ; do
  # < /dev/null keeps ffmpeg from consuming the loop's stdin
  ffmpeg -i "$file" -acodec wmav2 -vcodec msmpeg4v2 -ab $ABR -b $VBR -ac 1 "../out/$file.avi" < /dev/null
done
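
One quirk of the loop above is that it appends .avi after the original extension (lecture.mpg becomes lecture.mpg.avi). If that matters, the output name can replace the extension instead, using shell parameter expansion; this assumes every file name has an extension and that directory names contain no dots:

```shell
file="./week1/lecture.mpg"
out="../out/${file%.*}.avi"   # strip the last extension, then add .avi
echo "$out"                   # prints ../out/./week1/lecture.avi
```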

Friday, January 7, 2011

Public file share

The objective is some basic network attached storage (NAS), or public file share, where any attached computer can create, read, update, and delete any file without authentication. I would not consider this secure, since anyone who can attach to the network can do whatever they want to the file space; however, security is not the objective at this time. Also, if a user accesses the same shared file space locally on the server, it should behave the same as when accessed over the network.

The environment is Ubuntu Desktop 10.10 (Maverick Meerkat) freshly installed and updates done.

Given that, the following are the minimal steps and configuration required to achieve the objective.

Make a public file space based on "HowTo: Create shared directory for local users (with bindfs)"; this works much better than access control lists do.

Install bindfs:
sudo apt-get install bindfs

Configure the public space to be set up at startup.
/etc/init/bind-public.conf:
description "Remount public with different permissions"

start on stopped mountall

pre-start exec install --owner=nobody --group=nogroup --mode=0777 \
--directory /export/public

exec bindfs -f --owner=nobody --group=nogroup --perms=a=rwD \
--create-for-user=nobody --create-for-group=nogroup \
--create-with-perms=a=rwD --chown-ignore --chgrp-ignore --chmod-ignore \
/export/public /export/public

And make the public space active:
sudo initctl start bind-public

Now to make the space available over the network using Samba.

Install Samba:
sudo apt-get install samba

And here is a minimal Samba configuration to do the job.
/etc/samba/smb.conf:
[global]
       map to guest = Bad User

[public]
       path = /export/public
       guest ok = yes
       read only = no

It is not necessary to restart Samba for the changes to take effect.

At this point the objective is achieved for remote connections, and any local methods for accessing the directory.
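
A quick way to confirm guest access from the network side is smbclient, which ships with the Samba client tools:

```shell
# list the shares anonymously, then connect as guest and list files
smbclient -N -L localhost
smbclient -N //localhost/public -c 'ls'
```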

For restricted access, configure Samba to require a log-on, or only allow particular users to access the public share.
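
As a sketch of the restricted variant, assuming local Unix accounts that have been given Samba passwords with smbpasswd, the share definition might become:

```ini
[public]
       path = /export/public
       ; only these (hypothetical) accounts may connect
       valid users = alice bob
       guest ok = no
       read only = no
```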