Wednesday 28 March 2007

mindterm - ssh with java applet

ssh is now almost the default way for remote login and management of a Unix/Linux box. Thanks to the OpenBSD people for their great work and the great mind to open-source it, OpenSSH is now shipped with almost all linux distros (I can't think of any without it) and builds on most Unix flavors. It is even available on M$ Windoze: among the open-sourced offerings, there is the OpenSSH server within the full cygwin environment, or some stripped-down versions, like CopSSH, ...; recently there is also a MinGW-based offering which doesn't require cygwin1.dll (it is a nightmare if you have multiple different versions of cygwin1.dll on your windows box). There is a freeware product called freeSSHd as well, but it is not open-sourced.

Putting the availability of an ssh server aside, you also need an ssh client program to access it. On Unix/Linux, this is a non-issue. On windows there is the open-sourced putty, as well as the ssh client program that comes with the openssh package (either cygwin1.dll based or MinGW+MSYS based).

But what if there is no ssh client installed on the system and you don’t have permission to install something on the box?

Well, mindterm comes to the rescue. It is an ssh client packaged as a java applet that you can download off a website.

Say you access a webpage which serves the mindterm applet (either in a popup or an embedded mode); with a java-plugin-enabled web browser, you can then ssh into the box without any ssh client program pre-existing on your local machine.
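For illustration only, the embedding HTML might look roughly like this; the jar name, applet class and parameter names below are assumptions, so check the MindTerm documentation shipped with your version for the exact values:

<applet code="com.mindbright.application.MindTerm.class"
        archive="mindterm.jar" width="580" height="400">
  <!-- parameter names are assumptions; consult the MindTerm docs for the
       real ones used to pass the target host and port -->
  <param name="server" value="myserver.example.org">
  <param name="port" value="22">
</applet>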

Notes:

1) mindterm is not open-sourced. The original version did have its source publicly accessible, but that version only supports ssh protocol version 1.


2) the ssh port, i.e., 22/tcp, still needs to be open on the server. You may access the applet over the http port (80/tcp by default), but the applet itself still connects from the client machine to port 22/tcp on the server; port 22 is not tunnelled inside the port-80 traffic. If you configure your ssh server to listen on a non-standard port, you need to change the mindterm settings accordingly.


Sunday 25 March 2007

iptables: defeat ssh brute-force attacks

In yesterday's post I described a method to do port-knocking protection with iptables only. Near the end of the post, I mentioned that using the same iptables recent module, one can effectively defeat ssh brute-force attacks. At the time of writing I had in mind a better implementation than the sample described in this link.

In short, the critical part of the iptables rules for this purpose would be:



iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 --rttl -j DROP
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set -j ACCEPT


The second command records the source IP of each new incoming ssh connection in the recent table and accepts it. The first command, which sits earlier in the chain and is therefore checked first, drops the packet if more than 3 new ssh connection attempts have been seen from the same IP within the last 60 seconds, as that is taken to be an ssh brute-force attack.

My idea for improving the matching rules is to add multiple levels of matching: more than 3 times within the last 60 seconds as the first level, and also blocking ssh login attempts that occur more than 5 times within the last 2 minutes as the second level. This second level drops packets that arrive slightly less frequently than the first-level attacks but keep coming persistently, because normal ssh logins are unlikely to look like this. Similarly we can add even more levels, e.g., more than 10 times within the last 5 minutes, more than 15 times within the last 10 minutes, and so on. A minimal sketch of this idea is shown below.
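Here is a sketch of the multi-level idea using two separate recent lists (the list names SSH1 and SSH2 are arbitrary, and the thresholds mirror the numbers above):

# level 1: more than 3 new connections within the last 60 seconds
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name SSH1 --update --seconds 60 --hitcount 4 --rttl -j DROP
# level 2: more than 5 new connections within the last 2 minutes
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name SSH2 --update --seconds 120 --hitcount 6 --rttl -j DROP
# record the attempt in both lists, then accept
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name SSH1 --set
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name SSH2 --set -j ACCEPT

Further levels (10 in 5 minutes, 15 in 10 minutes, ...) can be added the same way, each with its own list name.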

After a bit of googling, I found this has already been mentioned by someone, with a clearer sample here:

Saturday 24 March 2007

port-knocking with iptables only

Port-knocking is a protection method where a specific port is opened only after you attempt to connect to some other ports in the right order, just like opening a door with the right sequence of knocks. It adds another layer of protection to a server. It is useful when only authorised access is allowed, for example on a firewall or a CVS server where ssh access is granted to a limited number of users. It is not suitable for servers running public services like a web server, mail server, etc.

Most implementations of port-knocking protection involve a dedicated daemon that keeps monitoring the system logs in order to detect the knocking sequence.

Here I describe a method to implement port-knocking protection, inspired by A. P. Lawrence's article. It uses iptables only. On linux, iptables is used for firewalling (kernel 2.2 and older use ipchains), so with this implementation no separate dedicated daemon needs to run.

The key feature of iptables used here is the match module called "recent", which was introduced in later versions of iptables. With some older linux distros, you may need to update the iptables package. More information about the iptables recent module can be found here.

The advantage of this implementation is that it is elegant (no additional software to install) and lightweight (no additional daemon running). The limitation is that it is only suitable for a fixed mapping. But generally this is good enough; it is especially effective for firewalling out ssh worms, script kiddies, port scanning and the like.

Here is a sample script:

#!/bin/bash
# $IPTABLES rules to be run by /etc/rc.d/rc.local
#
# 08 Feb, 2005 -- implement an elegant port-knocking mechanism to restrict ssh access for this machine
#
#### where is iptables binary?
IPTABLES=/sbin/iptables
#### the external (internet-facing) interface, used by the anti-spoofing rules below
EXT_IF=eth0
#
#### define the knocking port set here
# I take my telephone number and break it down into three pieces for easy memory.
PK_PORT0=1223 # 1st knocking port
PK_PORT1=569 # 2nd knocking port
PK_PORT2=715 # 3rd knocking port
#
#### Set some sensible kernel params that may already be there
#### Assume the necessary iptables modules are loaded.
/bin/echo "0" > /proc/sys/net/ipv4/icmp_echo_ignore_all
/bin/echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
/bin/echo "0" > /proc/sys/net/ipv4/conf/all/accept_source_route
/bin/echo "0" > /proc/sys/net/ipv4/conf/all/accept_redirects
/bin/echo "1" > /proc/sys/net/ipv4/icmp_ignore_bogus_error_responses
/bin/echo "1" > /proc/sys/net/ipv4/conf/all/log_martians
/bin/echo "1" > /proc/sys/net/ipv4/ip_forward
#
#### Flushing tables
$IPTABLES -F
$IPTABLES -t nat -F
$IPTABLES -X
$IPTABLES -t nat -X
#
#### Set all policies
$IPTABLES -P INPUT DROP
$IPTABLES -P FORWARD DROP
$IPTABLES -P OUTPUT DROP
#
#### Being paranoid to avoid intrusion during the execution of this script
$IPTABLES -I INPUT 1 -i ! lo -j DROP
$IPTABLES -I FORWARD 1 -i ! lo -j DROP
$IPTABLES -I OUTPUT 1 -o ! lo -j DROP
$IPTABLES -A INPUT -i lo -j ACCEPT
$IPTABLES -A OUTPUT -o lo -j ACCEPT
#
#### Filter out some obviously spoofed packets from the internet
$IPTABLES -A INPUT -s 10.0.0.0/8 -i $EXT_IF -j DROP
$IPTABLES -A INPUT -s 127.0.0.0/8 -i $EXT_IF -j DROP
$IPTABLES -A INPUT -s 172.16.0.0/12 -i $EXT_IF -j DROP
$IPTABLES -A INPUT -s 192.168.0.0/16 -i $EXT_IF -j DROP
$IPTABLES -A FORWARD -s 10.0.0.0/8 -i $EXT_IF -j DROP
$IPTABLES -A FORWARD -s 127.0.0.0/8 -i $EXT_IF -j DROP
$IPTABLES -A FORWARD -s 172.16.0.0/12 -i $EXT_IF -j DROP
$IPTABLES -A FORWARD -s 192.168.0.0/16 -i $EXT_IF -j DROP
#
## allow all outgoing packets
$IPTABLES -A OUTPUT -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT
$IPTABLES -A OUTPUT -o eth0 -j ACCEPT
#
#### allow some ICMP packets for friendly probes
#### it also gives a hacker an indication of the machine's existence though, :)
$IPTABLES -N icmp-packets
$IPTABLES -A icmp-packets -p icmp --icmp-type redirect -j DROP
$IPTABLES -A icmp-packets -p icmp --icmp-type echo-request -j ACCEPT
$IPTABLES -A icmp-packets -p icmp --icmp-type echo-reply -j ACCEPT
$IPTABLES -A icmp-packets -p icmp --icmp-type destination-unreachable -j ACCEPT
$IPTABLES -A icmp-packets -p icmp --icmp-type source-quench -j ACCEPT
$IPTABLES -A icmp-packets -p icmp --icmp-type time-exceeded -j ACCEPT
$IPTABLES -A icmp-packets -p icmp --icmp-type parameter-problem -j ACCEPT
#
#### SYN flood protection; SYN floods are typical of DDoS attacks
$IPTABLES -N syn-flood
$IPTABLES -A syn-flood -m limit --limit 1/s --limit-burst 4 -j RETURN
$IPTABLES -A syn-flood -j DROP
#
#### Log packet fragments just to see if we get any, and deny them too
#### This may not be necessary, thus it is commented out.
#$IPTABLES -A INPUT -i eth0 -f -j DROP
#
#### These chains are for the port-knocking protection
$IPTABLES -N port-knocking
$IPTABLES -N knocking-okey
$IPTABLES -N knocking-oops
#### ====== actual contents of port knocking chains =======
#### Using module recent to implement a simple and elegant port knocking mechanism
#### First we make sure the port knocking order is correct.
#### Note: --update option updates the hitcount number, while --rcheck does not;
#### Neither of them changes the last-seen time.
$IPTABLES -A port-knocking -p tcp --dport $PK_PORT0 -m recent --rcheck --hitcount 1 -j knocking-oops
$IPTABLES -A port-knocking -p tcp --dport $PK_PORT0 -m recent --set -j REJECT --reject-with host-prohib
$IPTABLES -A port-knocking -p tcp --dport $PK_PORT1 -m recent --rcheck --hitcount 2 -j knocking-oops
$IPTABLES -A port-knocking -p tcp --dport $PK_PORT1 -m recent --update --hitcount 1 -j REJECT --reject-with host-prohib
$IPTABLES -A port-knocking -p tcp --dport $PK_PORT2 -m recent --rcheck --hitcount 3 -j knocking-oops
$IPTABLES -A port-knocking -p tcp --dport $PK_PORT2 -m recent --update --hitcount 2 -j REJECT --reject-with host-prohib
#### Now we need to make sure the port knocking sequence is continuous.
#### If some random port is accessed during the knocking process, then the recent table will be cleared,
#### so that one has to do the knocking all over again.
$IPTABLES -A port-knocking -p tcp -m recent --remove
#$IPTABLES -A port-knocking -p tcp -m recent --remove -j REJECT
#### chain knocking-okey is to accept connection and clear the record
$IPTABLES -A knocking-okey -m state --state NEW -p tcp -m recent --remove -j ACCEPT
#### chain knocking-oops is to drop or reject connection and clear the record
$IPTABLES -A knocking-oops -m state --state NEW -p tcp -m recent --remove
#$IPTABLES -A knocking-oops -m state --state NEW -p tcp -m recent --remove -j REJECT
#### ======= port-knocking handling bulk end =============
#
#### Now the real part
$IPTABLES -A INPUT -i eth0 -p icmp -j icmp-packets
$IPTABLES -A INPUT -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT
$IPTABLES -A INPUT -i eth0 -p tcp --syn -j syn-flood
#$IPTABLES -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
#### Instead of direct accept, ssh login attempts are first filtered with port knocking, thus the above line is commented out
$IPTABLES -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW -m recent --rcheck --seconds 20 --hitcount 3 -j knocking-okey
$IPTABLES -A INPUT -i eth0 -p tcp -j port-knocking
#
#### Now delete the blocking rules put in at the beginning
$IPTABLES -D INPUT 1
$IPTABLES -D FORWARD 1
$IPTABLES -D OUTPUT 1

In the above script, the port-knocking chain makes sure the pre-defined ports are knocked continuously and in the correct order; the knocking-okey chain accepts the packet and clears the recent table; the knocking-oops chain drops or rejects the packet and clears the recent table. It is more secure to drop a packet than to reject it, because an attacker then sees no response to the packet at all; but it makes knocking slower, as you will usually wait for a timeout error before sending the next packet (to knock the next port).
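To actually use it, the client just hits the three ports in order and then opens the real ssh connection. A minimal client-side knock helper, sketched under the assumption that netcat is installed (the host name is a placeholder and the ports must match PK_PORT0..PK_PORT2 above):

#!/bin/bash
# knock the three ports in order, then open the real ssh connection
HOST=myserver
for p in 1223 569 715; do
    # each knock is just a TCP connection attempt; the server rejects it,
    # which is fine, as only the attempt itself is recorded
    nc -w 1 "$HOST" "$p" </dev/null >/dev/null 2>&1
done
# all three knocks must land within the 20-second window checked above
ssh "$HOST"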

Additional note: it is very easy to use the recent module of iptables to block too-frequent ssh login requests, which generally indicate a malicious brute-force attack. There is an explanation and a sample here.

Friday 23 March 2007

samba for time synchronization

A somewhat hidden feature of Samba is that you can use it as a local time server for your Windows clients. By default, a Windows client will talk to a Micro$oft time server if time synchronization is enabled. With the following setting in the samba configuration file, you can instead let the clients on your LAN talk to the samba server:

time server = yes
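On the client side, a Windows box can then be told to synchronize against the samba server manually (the server name is a placeholder):

net time \\sambaserver /set /yes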



samba: use of netbios name and aliases

In the samba configuration file, /etc/samba/smb.conf, I generally add these options:

netbios name = linux
netbios aliases = debian ubuntu

Then I can have different virtual samba servers, just like virtual web servers with apache. Isn’t it cool?

Using the apache analogy, the "netbios name =" option sets the default samba host, whilst the "netbios aliases =" option sets up additional virtual hosts. samba can have different controls for these virtual hosts, as sketched below.
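For example, one way to give each alias its own settings is the include trick with the %L variable (both are standard samba features; the file names here are made up):

[global]
    netbios name = linux
    netbios aliases = debian ubuntu
    # %L expands to the NetBIOS name the client actually used,
    # so each virtual server can pull in its own share definitions,
    # e.g. /etc/samba/smb.conf.debian, /etc/samba/smb.conf.ubuntu
    include = /etc/samba/smb.conf.%L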

In a corporate environment, the best strategy is to bind a samba share service to a service name (e.g., file-server, public-library, etc.), not to the name of the machine that actually serves the files, so that when the machine is replaced or goes away, the file-sharing service can remain the same.

samba: secure your home shares

Usually I add the following two options for home share on a samba server, to tighten up the security:

[homes]
...
valid users = %S
path = /home/%S

The first option ensures that, say, user1 only sees his own home directory as the home share (he can still see other public shares), not the home directory of user2.

The second option forces the home directory shared for, say, user1 to be /home/user1. For example, root's home directory on many linux distros is /root, so it cannot be accessed as the samba home share.
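For reference, a fuller [homes] section along these lines might look like the sketch below; the extra parameters are common defaults, not something from the original snippet:

[homes]
    comment = Home Directories
    valid users = %S
    path = /home/%S
    # keep the generic "homes" entry out of browse lists
    browseable = no
    writable = yes
    create mask = 0600
    directory mask = 0700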

Thursday 22 March 2007

apache: mod_rewrite/mod_proxy vs mod_proxy_html

With the native apache modules mod_rewrite and mod_proxy, we can change strings in http headers and thus achieve the so-called proxy mechanism on the apache server. Sometimes, however, changing the http request headers is not enough.

For example, I once had to archive an existing ZWiki site and re-host it on another server. On the new server, I thought it would be nice to serve the ZWiki behind the apache server and let users access it under a new virtual-host address.

The old ZWiki archive contains links, some of which can be changed with either mod_rewrite or mod_proxy; others, however, cannot be changed with the native apache modules, because those links sit in the raw html pages and the above modules only manipulate http headers. What can I do?

After a bit of googling, I found a nice third-party module, mod_proxy_html, that comes to the rescue. It does exactly what its name says: it changes the served html pages on the fly.

The compilation and linking against apache are very easy once you follow the documentation. The configuration is also a breeze.
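A minimal sketch of such a setup (hostnames, paths and the backend port are made up; ProxyPass/ProxyPassReverse handle the headers, while mod_proxy_html's output filter rewrites the links inside the HTML body):

# load the third-party module (the .so path varies by build)
LoadModule proxy_html_module modules/mod_proxy_html.so

ProxyPass        /wiki/ http://localhost:8080/wiki/
ProxyPassReverse /wiki/ http://localhost:8080/wiki/
# run the proxied pages through mod_proxy_html
SetOutputFilter  proxy-html
ProxyHTMLURLMap  http://old.server.org/wiki /wiki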

apache settings for svnparentpath

Subversion http://subversion.tigris.org/ is a wonderful open-sourced version control system. With the mighty apache http://www.apache.org/, you can do very fine-grained access control.

Here I record a small trick to deal with a problem when I set up apache+subversion.

For apache to serve a subversion repository, you can use either the SVNPath or the SVNParentPath directive (you have to make sure WebDAV is enabled). The former explicitly specifies the location of a single repository; the latter, as the name hints, specifies the parent path of several repositories. This is quite handy when you want to host multiple repositories: you just put all of them in the same directory and apache will pick them up automatically. So you don't need to change the apache configuration every time you add/delete a repository, if you use SVNParentPath over SVNPath.

But there is a minor problem here. Say you decide to put repositories repo1, repo2, repo3, ... under the /var/svn/ directory, and you set apache to serve them as http://www.mysite.org/svn/repo1/, ..., which is all fine, until you type http://www.mysite.org/svn/ into your browser address bar. Err, what do you get? An ugly DAV-SVN error message. The reason is that /var/svn itself is not a repository; only its subdirectories are.

So what can you do to get rid of the scary error message?

The apache mod_rewrite comes to the rescue!

You have at least two choices here. Both use mod_rewrite to make apache serve the browser with a different url: (1) rewrite the request to another page where all available repositories are listed (you'll update that simple page whenever you add/remove a repository, of course); (2) redirect the request to a default repository you have chosen (this works only as long as the default repository is not removed/renamed, of course). You can choose whichever you like, but I found the second choice suitable for apache virtual hosts where you don't want a request escaping the virtual-host scope.

Here is the sample part of /etc/httpd/conf/httpd.conf for the first option (you browse http://www.mysite.org/svn/ and get a proxy page, http://www.mysite.org/svnrepo.html, which lists all available repositories):

# subversion settings
<Location /svn>
    DAV svn
    # Set the parent path for all repositories.
    SVNParentPath /var/svn
    # Turn off all path-based authorization thus increase speed (default is on).
    SVNPathAuthz Off
    # For per-directory access control policy
    #AuthzSVNAccessFile /var/svn/httpaccess
    # Limit write permission to the list of valid users.
    <LimitExcept GET PROPFIND OPTIONS REPORT>
        AuthType Basic
        AuthName "My Subversion Repository"
        AuthUserFile /var/svn/httpauth
        Require valid-user
    </LimitExcept>
</Location>

# Use mod_rewrite to serve a proxy page if unspecified.
# Otherwise requests on /svn/ receive a DAV-SVN error as the directory
# itself is not a repository - its subdirectories are.
RewriteEngine on
RewriteCond %{REQUEST_URI} ^/svn/$
# can rewrite to a full url, or better, only rewrite to the requested uri
# Also note the P or proxy flag here for proxying.
#RewriteRule /svn/ http://svn.mysite.org/svnrepo.html [proxy]
RewriteRule /svn/ /svnrepo.html [proxy]

and here is a sample for the second option (you browse http://svn.mysite.org/ and get redirected to the default repository, http://svn.mysite.org/software/):

# subversion settings
<VirtualHost *:80>
    # DocumentRoot has actually no effect here.
    # In fact it should be omitted to avoid confusion.
    #DocumentRoot /var/svn
    ServerName svn.mysite.org

    <Location />
        DAV svn
        # Set the parent path for all repositories.
        SVNParentPath /var/svn
        # Turn off all path-based authorization thus increase speed (default is on).
        SVNPathAuthz Off
        # For per-directory access control policy
        #AuthzSVNAccessFile /var/svn/httpaccess
        # Limit write permission to the list of valid users.
        <LimitExcept GET PROPFIND OPTIONS REPORT>
            AuthType Basic
            AuthName "My Subversion Repository"
            AuthUserFile /var/svn/httpauth
            Require valid-user
        </LimitExcept>
    </Location>

    # Use mod_rewrite to set the default repository if unspecified.
    # Otherwise requests on / receive a DAV-SVN error as the directory
    # itself is not a repository - its subdirectories are.
    RewriteEngine on
    RewriteCond %{REQUEST_URI} ^/$
    # can rewrite to a full url, or better, only rewrite to the requested uri
    #RewriteRule / http://svn.mysite.org/software/ [redirect=permanent]
    RewriteRule / /software/ [redirect=permanent]
</VirtualHost>

Note:

  1. In the above samples, basic apache authentication is used. Anonymous users can read the repositories but only authenticated users have write access. The authentication info is stored in /var/svn/httpauth, which is managed by the htpasswd utility.
  2. The above two samples assume you have the mod_dav and mod_dav_svn modules loaded appropriately (for example with LoadModule lines like those sketched below). In most cases the modules are enabled from a separate apache configuration file, say /etc/httpd/conf.d/subversion.conf, whilst I recommend putting the above configurations in the apache master configuration file, /etc/httpd/conf/httpd.conf.
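For reference, the module-loading lines usually look something like this (the .so paths vary between distros):

LoadModule dav_module       modules/mod_dav.so
LoadModule dav_svn_module   modules/mod_dav_svn.so
# only needed if AuthzSVNAccessFile is used
LoadModule authz_svn_module modules/mod_authz_svn.so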

Update:

  1. Since subversion v1.3 (specifically, mod_dav_svn), the Apache httpd-based server can display (in a web browser) the collection of repositories exported by the SVNParentPath directive: simply set 'SVNListParentPath on' in the apache configuration file. Therefore, the hack described on this page is now largely irrelevant. ;)

re-root subversion with svn+ssh schema

It is simple to re-root a subversion repository accessed with the svn+ssh scheme.

The easiest way is to wrap the svnserve command on the server. For example, if you have repo1, repo2, ... in the /var/svn directory, you usually access them as svn+ssh://myserver/var/svn/repo1/, svn+ssh://myserver/var/svn/repo2/, ...

Now if you want to access them with shortened URLs, like svn+ssh://myserver/repo1/, svn+ssh://myserver/repo2/, ... What can you do?

You can compose a simple script like this:

#!/bin/sh
# /usr/local/bin/svnserve
# This script wraps the real svnserve so that svn repositories are re-rooted.
# Please make sure this script comes before the system command, /usr/bin/svnserve,
# in the search path. Also please note the proper quoting of command-line options.
#alias svnserve='/usr/bin/svnserve -r /var/svn "$@"'
exec /usr/bin/svnserve -r /var/svn "$@"

Put it in /usr/local/bin/ and also name it svnserve. This works because an OpenSSH server built with the default compilation options puts /usr/local/bin before /usr/bin in the search path for executables.

The above method works when you have a system account on the server.

OpenSSH also lets you bind a forced command to each public key in the authorized_keys file. So you can let multiple users access a subversion repository using the same system account but with different identities! Say you decide to use a system account, svn, for this. Then you need to hack ~svn/.ssh/authorized_keys (and possibly ~svn/.ssh/config), where ~svn refers to the home directory of the svn account as usual; in our case it would be /var/svn.

I have got the second method to work, but it is slightly more involved. It involves defining a forced command for each public key. Check the sshd(8) manpage for the syntax of the ~/.ssh/authorized_keys file.
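A sketch of what such forced-command entries in ~svn/.ssh/authorized_keys could look like (user names and key material are placeholders; svnserve's -t, -r and --tunnel-user options are what make the per-user identity work):

# each user's public key is tied to a forced svnserve command; the identity
# recorded in the repository comes from --tunnel-user, not the ssh account
command="/usr/bin/svnserve -t -r /var/svn --tunnel-user=alice",no-port-forwarding,no-pty,no-X11-forwarding ssh-rsa AAAA... alice@example.org
command="/usr/bin/svnserve -t -r /var/svn --tunnel-user=bob",no-port-forwarding,no-pty,no-X11-forwarding ssh-rsa AAAA... bob@example.org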

Oh, did I mention that for this to work you have to use key-based authentication for ssh? You'd better use key-based authentication for svn+ssh access anyway; otherwise you'll get bored typing your password again and again, as subversion doesn't accept pushed client authentication. According to the subversion design documents, this is on purpose: it pulls authentication info from the client whenever needed, for better security.


Additionally, on the client side you can define your own scheme with this svn+ approach. The following example defines an essh command which is actually a wrapper around the ssh command:

#!/bin/sh
# ~/bin/essh
# a wrapper script for the svn+essh scheme that fixes the svnserve root directory
# register it as the 'essh' tunnel in the [tunnels] section of ~/.subversion/config
/usr/bin/ssh "$1" /usr/bin/svnserve -r /var/svn/ -t

With this command defined, you can now access the repository sitting at /var/svn/software/ with the URL below:

svn+essh://myserver/software/
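For the svn+essh:// scheme to be recognized, the client also needs a tunnel definition in its subversion runtime configuration; a minimal sketch (the script path is an assumption) is:

# ~/.subversion/config
[tunnels]
essh = /home/youruser/bin/essh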

Obviously you can put more options into the wrapper script. For example, change the last line to:

/usr/bin/ssh -l svn $1 /usr/bin/svnserve -r /var/svn/software/ -t

or

/usr/bin/ssh $1 /usr/bin/svnserve -r /var/svn/software/ -t --tunnel-user svn

and then you access the same repository with the URL svn+essh://myserver/ and always log in as user svn.

http, ssl/tls and virtual hosts

Traditionally http over ssl/tls has problems with name-based virtual hosts. There are several ssl certificates, each of which is bound to one of the virtual hosts. Because the http requests sent over ssl/tls are encrypted, the server cannot figure out which certificate to use for the client-server handshake. For IP-based virtual hosts, this is not an issue, as each certificate can be bound to a unique IP.

Mostly, http over ssl/tls is implemented on a different port (443/tcp) rather than the standard http port (80/tcp). The connection is encrypted from the beginning. For this we even use a different URL prefix, i.e., https instead of http. This situation is similar to smtps (smtp over ssl) versus plain smtp. However, smtps is now obsolete, and the standard way to get an encrypted smtp connection is to upgrade a non-encrypted smtp session to TLS. This way, both plain and encrypted smtp connections can be listened for on the same port (25/tcp).

There is an internet standard that defines how to similarly upgrade a plain http session to TLS. See RFC 2817.

If this were widely implemented, the above-mentioned problem with name-based virtual hosts over ssl/tls would be naturally solved, because the requested server name is in the plain-text http header and the encryption only starts after the connection is established.

Unfortunately this RFC isn't widely implemented. The number one http server, apache, implemented it in v2.1; the first stable version supporting it is v2.2, which was released in December 2005. I don't care about IIS, but as in many other cases (jpeg2000, alpha rendering in png and CSS2 immediately come to mind), Micro$oft holds back technology advances again because of its dominant market share on the client side: IE doesn't support it. Even my beloved mozilla firefox doesn't support this feature (they planned to support RFC 2817 in v3, see their roadmap).

There is another way around this issue: the server name indication (SNI) extension for TLS. This is one of the various extensions specified in another internet standard, RFC 3546.

The TLS SNI extension allows a client to tell the server which server name it is contacting, in the extended client hello. The traffic is still encrypted from the beginning as with the usual https protocol, but the server knows which virtual host to serve after the ssl handshake.

Note that Firefox 2 already supports RFC 3546. Check here. Also, starting with IE 7, Micro$oft supports the TLS SNI extension.

On the server side, although apache v2.2 has native support for RFC 2817, a third-party module, mod_gnutls, has to be used for RFC 3546 support. This module adds RFC 3546 support to both apache v2.0 and apache v2.2.
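For illustration, name-based TLS virtual hosts with mod_gnutls might be configured roughly like this (directive names are taken from the mod_gnutls documentation; hostnames, certificate paths and the listening setup are made up):

Listen 443
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName www.site-one.org
    GnuTLSEnable on
    GnuTLSCertificateFile /etc/apache2/ssl/site-one.crt
    GnuTLSKeyFile         /etc/apache2/ssl/site-one.key
</VirtualHost>

<VirtualHost *:443>
    ServerName www.site-two.org
    GnuTLSEnable on
    GnuTLSCertificateFile /etc/apache2/ssl/site-two.crt
    GnuTLSKeyFile         /etc/apache2/ssl/site-two.key
</VirtualHost>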

Therefore, for now, we'll have to use apache (v2.0 or v2.2) plus mod_gnutls on the server side, and mozilla firefox v2 or M$ IE 7 on the client side, for SNI-enabled name-based virtual hosts over tls. After firefox v3 is released, we can use apache v2.2 with firefox v3, and enjoy plain and encrypted http connections on the same port! Can't wait for that day.

Wednesday 21 March 2007

su vs sudo

Both su (substitute user) and sudo can be used to perform operations as a user other than the currently logged-in user. Although any other user can be chosen, mostly the root user is picked, so that administration tasks can be carried out.

To run a single command, you use su -c 'command' or sudo command. By default the command runs as root; if you want to run it as another user, specify the username on the command line, i.e., su -c 'command' username or sudo -u username command.

To switch to another user and obtain a shell, you use su username or sudo -s -u username. Again, if no username is specified, root is the target account by default. Additionally, su -l username (or su - username) and sudo -H -s -u username also pick up the HOME environment variable of the target user.

As described above, the functionality provided by su and sudo is very similar. So what is the difference?

The key difference is that with su you have to know the password of the target user; not so with sudo. Because of this, by default all normal users can use su (as long as they know the target user's password). The use of sudo is controlled by /etc/sudoers, which can be set to require a user to provide his/her own password to run sudo, or to grant a user the use of sudo without any authentication. What's more, sudo can limit a user to running only a certain set of commands, as the sample below shows.
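For instance, a couple of hypothetical /etc/sudoers entries (always edited with visudo; the user names and the command are made up):

# alice may run any command as root, after entering her own password
alice   ALL=(ALL) ALL
# bob may only restart the web server, and needs no password for it
bob     ALL=(root) NOPASSWD: /etc/init.d/httpd restart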


I think sudo is better because it doesn't give out the root password and offers finer control over which commands can be run. In fact, ubuntu by default ships with a disabled root account and grants the first normal user account sudo access.


Tuesday 20 March 2007

My favorite system administration tools

Here I list the top-10 utilities that I can't live without. Well, maybe not exactly 10, but anyway...

  • bash. this should be a necessity, shouldn't it? I also use tcsh and zsh, but I always feel at home with bash
  • ssh, specifically the open-sourced version, openssh. ssh is really a godsend
  • keychain/fsh. helper tools for ssh. keychain is handy for passphrase cache management for ssh (and gnupg). The "f" in fsh stands for fast. With newer versions of openssh, fsh is not necessary any more.
  • netcat. you can view it as a net version of cat, a good cat, really
  • vim. I wish I could practice more with emacs-nox, but right now I'm used to vim
  • wget/lftp. sometimes I use ncftp
  • elinks. I prefer it to lynx
  • fetchmail/procmail. retrieve and process emails
  • mutt/slrn. manage email/news with console
  • screen. with gnu screen, you can have multiple consoles in a single terminal, and you can do a text version of vnc
  • gnupg. to do encryption/signature stuff
  • expect. to do interactive operation in a script
  • nx. that is so much faster than vnc

And here I list several technologies that I like; they all involve a server/client or p2p model:

  • web system. apache httpd server and various clients
  • email system. open-sourced smtp/imap4/pop3 servers and clients
  • openssh system. both the server and the client, including scp/sftp and others
  • jabber im system.
  • voip.

I am not very keen on ftp and news systems. I think they are suitable in some areas, but they are becoming less and less important.