Recently I did an entry on SCP (Secure CoPy), which uses SSH to copy a single file over an encrypted tunnel to a remote server, or to copy a remote file to a local directory. This works great for a single file, but what if you want to copy an entire directory?
Well, one way is to tar up the directory, copy the archive to the remote server (using scp, perhaps?), log in to the remote server via SSH (you aren't using telnet any more, right?), and then untar the archive on the remote side. Pretty simple, but since we are geeks, we try to do things as efficiently as possible (even if there are better solutions).
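For comparison, here is that multi-step version as a runnable local sketch: /tmp directories stand in for the remote host, so in real use step 2 would be an scp to remoteuser@remotehost and step 3 would run over ssh.

```shell
#!/bin/sh
set -e

# Stand-ins for the local source and the "remote" destination.
rm -rf /tmp/demo-src /tmp/demo-remote
mkdir -p /tmp/demo-src /tmp/demo-remote
echo "hello" > /tmp/demo-src/file.txt

# 1. Tar up the directory.
tar -zcf /tmp/project.tar.gz -C /tmp/demo-src .

# 2. Copy the archive (in real life: scp /tmp/project.tar.gz remoteuser@remotehost:).
cp /tmp/project.tar.gz /tmp/demo-remote/

# 3. "Log in" and untar on the remote side (in real life: over ssh).
tar -C /tmp/demo-remote -zxf /tmp/demo-remote/project.tar.gz

cat /tmp/demo-remote/file.txt   # → hello
```

Three commands, plus a temporary archive left lying around on both ends, which is exactly the bookkeeping the one-liner below avoids.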
Enter tar over SSH.
Tar has the great ability to write to stdout or read from stdin when you give "-" (a dash) as the filename on the command line. Using that, we can string together a pipeline that sends the data straight to a remote server. Let's explore how:
tar -zcf - ./ | ssh remoteuser@remotehost tar -C /path/to/remote/dir -zxf -
What this does is pretty simple: it creates a compressed tar archive of the current directory (./) and writes it to stdout (-). We catch that output with the pipe (|) and call ssh to connect to the remote server, where we execute tar as the remote command. The remote tar changes to the directory /path/to/remote/dir (-C) and then decompresses and extracts from stdin (-zxf -).
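You can see the same mechanics without a remote server at all: drop the ssh hop and pipe one tar straight into another locally. This sketch uses throwaway /tmp directories as stand-ins for the source and destination.

```shell
#!/bin/sh
set -e

rm -rf /tmp/pipe-src /tmp/pipe-dest
mkdir -p /tmp/pipe-src /tmp/pipe-dest
echo "over the pipe" > /tmp/pipe-src/note.txt

# First tar writes the compressed archive to stdout (-);
# second tar extracts it from stdin (-) into the destination.
( cd /tmp/pipe-src && tar -zcf - ./ ) | tar -C /tmp/pipe-dest -zxf -

cat /tmp/pipe-dest/note.txt   # → over the pipe
```

The ssh version is this exact pipeline with the second tar running on the far end; no intermediate archive ever touches the disk.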
About the only caveat of this method is that tar must be in remoteuser's path; otherwise you would have to specify the fully qualified path to the tar binary.
It's a great way to transfer a bunch of files securely, and tar keeps permissions intact along the way (pass -p on extraction to restore them exactly; ownership carries over when you extract as root).
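A quick local check of the permissions claim (again using /tmp stand-ins): with -p on the extracting side, tar restores the recorded mode bits rather than letting the umask trim them. Note that stat -c is GNU coreutils syntax; BSD/macOS stat uses -f instead.

```shell
#!/bin/sh
set -e

rm -rf /tmp/perm-src /tmp/perm-dest
mkdir -p /tmp/perm-src /tmp/perm-dest
echo "echo hi" > /tmp/perm-src/script.sh
chmod 770 /tmp/perm-src/script.sh

# -p on the extracting tar restores the archived permissions verbatim.
( cd /tmp/perm-src && tar -zcf - ./ ) | tar -C /tmp/perm-dest -zxpf -

stat -c '%a' /tmp/perm-dest/script.sh   # → 770
```

Without -p (and running as a regular user), a umask of 022 would have stripped the group-write bit and left the file at 750.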