Tips for Zoe
I'm a longtime user of Zoe as an email
client, but I found out a couple of new things today after breaking it
with an upgrade.
Normally an upgrade is trivial, as
the Zoe wiki says:
"Copy the new binaries over your old installation. That's all".
I did that and I was getting a Java error on trying to start Zoe up;
being a Java-dunce I didn't know what I'd cocked up, so I got the latest
JDK from Sun on the suspicion that my Java install was elderly and
possibly crufty anyway. That didn't fix it, so I had a harder look at the
error: a java.lang.NoSuchMethodError.
Since Lucene was one of the things
that had changed in the upgrade, I had a good look at where I'd
installed the new binaries, and sure enough there were also 3 or 4 old
Lucene-related jar files still there. I deleted all the jar files from this
directory, copied the new binaries in again, and all was well!
For some reason I also had to reset the authentication, but that
was covered on the Zoe general list.
All in all, it's a nice app, and the docs - distributed as they may be -
help you fix 99% of the problems easily.
[Sat, 29 Jan 2005 13:47]
Jabber Client Wanted
I'm on the look-out for another good
client that runs on
I've been using
for quite a while and it's a nice product, but it's got a few minor
niggles and the latest version has dropped Jabber support altogether, so
I'm stuck with an old version for now. Kudos however to Agile for
providing a pretty good product for free and to their tech director for
replying to my query about Jabber support in the latest version in under
30 mins. Brickbats for dropping IM's premier protocol though.
Suggestions for an alternative?
[Fri, 28 Jan 2005 12:59]
Russ hits 21
On an otherwise
at least one good thing has happened. Young (almost)
officially reaches 21 (in hex anyway). Happy birthday Russ, have a beer or
two for me.
[Thu, 20 Jan 2005 17:44]
S60 Getting Started and Moving On
I'm a bit split on this one:
my wiki page about
Getting Started with Series 60
has started to take on a life of its own since I
first mentioned it here
back in the summer.
Since then I've probably doubled the number of useful links on the main
page, and added a raft of (sparser) pages about individual phones and
related information; there's even a stub-like
Getting Started With UIQ
page. Given the amount of traffic I'm getting from search engines, I
assume this is proving to be a useful resource, and I'd like it to
remain so, but I really don't have the time available to do full justice
to keeping up with such a wide range of potential topics. Especially when
Rui is providing similar, but more extensive, resources.
So the big question is, how do I go forward? There are three options as I see it:
- 1 - Turn it into a content competition with Rui
- 2 - Keep the existing skeleton-like information and go no further
- 3 - Keep the existing information, expand it minimally where useful, and
try to put fuller content in places where more people will see it
I'm aiming for option 3. Option 1 is futile; it's not a war, I'm
more than happy to link to anyone with useful content, and I really don't
have the time to catch up with Rui, nor his access to new phones. Option
2 is rather pointless; in time I'd just have a sea of dead links. Option 3
seems the best to me - using the wiki as an index to all the best things
I can find - and with the amount of Google juice it seems to be getting,
it should help to lead others to the nuggets too. As for fuller content,
I've spent the last year or so spreading myself too thin, but I don't
want to give up any of the places I write yet (better time management
ahead). I am aiming to get a lot more of the factual content onto
All About Symbian, with
musings and analysis spread over
All About Mobiles,
Mobitopia and this site.
Wikipedia in particular is light on good, up-to-date mobile content,
so that's certainly worthy of everyone's care and attention.
[Sun, 16 Jan 2005 22:00]
Blocking, One Step Forwards...
...and one step back.
The blocking ideas I wrote about recently
sounded great at the time, but the "deny from sbl-xbl.spamhaus.org"
line really is just a waste of time. If I'd spent more than a few seconds
working out how
DNS Blackhole Lists
actually work I'd have understood why; fortunately I've now got a nice simple
Pythonic solution that I'll be publishing shortly.
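For the curious: Apache's "deny from" with a hostname compares the client's reverse-DNS name, so it never actually queries the blacklist. A real DNSBL check looks up a name built from the client's reversed octets. The core of it looks something like this (a minimal sketch; the function names are mine, not necessarily what I'll end up publishing):

```python
import socket

SBL_XBL = "sbl-xbl.spamhaus.org"  # the blacklist zone mentioned above

def dnsbl_name(ip, zone=SBL_XBL):
    """Build the DNSBL query name: reverse the octets, append the zone."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip, zone=SBL_XBL):
    """True if the address is in the blacklist (the query name resolves)."""
    try:
        socket.gethostbyname(dnsbl_name(ip, zone))
        return True
    except socket.gaierror:  # NXDOMAIN: the address isn't listed
        return False
```

So checking 192.0.2.1 means resolving 1.2.0.192.sbl-xbl.spamhaus.org; if that name has an A record, the address is listed.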
In the meantime if you've been blocked from reading this site directly,
it'll be because you're a spammer (mail, comment or referrer) or just dead
unlucky; if it's the latter, contact me and we'll see if I can help.
[Thu, 13 Jan 2005 22:02]
Using Blackholes to Block Spammers
Nothing earth shattering here, but in a simple move to reduce
the level of spam I've been getting on this site,
I've blocked a few offenders with Apache deny directives.
It's a "one strike and you're out" approach: any ip
address that grabs files forbidden by my robots.txt, or engages
in comment, wiki, or referrer spam, will probably be banned; it's my
site, I can be as irrational as I like! My inertia and can't-be-arsed
factor will affect the results here...
For those of you who also have high can't-be-arsed factors, here's
a potted guide to using Apache's allow and deny
directives in an .htaccess file for this purpose.
- Open .htaccess in an editor
- Add something like the following lines to block ip address 184.108.40.206
order allow,deny
allow from all
deny from 184.108.40.206
- Save .htaccess file
For testing, try entering your ip address in the deny line; you should
get 403 errors when you try to view your pages. To block further ip
addresses just add further "deny from n.n.n.n" lines.
For added hilarity and possible unwanted consequences I've also
added "deny from sbl-xbl.spamhaus.org" which should
block all ip addresses in those
blacklists. I have no idea whether this really works yet; I'll have to
track down all the 403's in my access logs and see if I can establish
why they appeared. I'm not a huge fan of blacklists, but I've noticed that
a number of my regular unwanted customers already appear in this blacklist
so it appears to be worth trying out.
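When I do go digging, the log-grovelling is a few lines of Python. A rough sketch (the regex and function name are mine, and it assumes the standard common/combined log format):

```python
import re

# Matches the client address and status code fields of Apache's
# common/combined log format.
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3})')

def forbidden_ips(lines):
    """Return the set of client addresses that received a 403."""
    ips = set()
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group(2) == "403":
            ips.add(m.group(1))
    return ips
```

Feed it the access log and it spits out the addresses worth cross-checking against the blacklist.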
In summary, what I've done so far is very trivial to implement, but it
requires some manual updating. I'm going to observe how well it works in
practice for the next few weeks before automating the process further or
abandoning the experiment. I'd like to think it proves to be effective.
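If the experiment survives, the automation step is mostly string-pasting; something like this hypothetical helper could regenerate the deny block from whatever list of addresses accumulates:

```python
def deny_block(banned_ips):
    """Render an allow-all-except .htaccess fragment from a set of addresses."""
    lines = ["order allow,deny", "allow from all"]
    lines += ["deny from %s" % ip for ip in sorted(banned_ips)]
    return "\n".join(lines)
```

Write the result into .htaccess and Apache picks it up on the next request, no restart needed.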
I don't think it'll be too resource-intensive: although it appears
that every hit will cause a dns lookup, dns actually does a lot of
caching and I suspect my local dns server will handle 90% or more of
the lookups.
[Wed, 05 Jan 2005 21:18]