Running local script remotely (with arguments)

So, we have a script that we want or need to run on lots of remote servers, but we don't want to copy it to every server, to avoid a maintenance nightmare. (Let's not focus on why we may find ourselves in that situation; it may be because there is no choice, or for whatever other reason. Agreed, it's undesirable. That's life.)
Another use case is as follows: we need to develop a script that will run on a machine where the editing tools aren't as good as those we are used to, so we want to develop on our familiar local machine but, every now and then, run the script on the target machine (presumably to test it).

If the code to run is short and simple, we can just put it inline:

$ ssh user@remote 'our code here'

However, this quickly becomes difficult to type and prone to errors, and proper quoting can be a nightmare too. So let's assume that the script is stored in a file.

Method 1: stdin

Well, ssh runs a shell on the remote host, so why not do

$ ssh user@remote < local.sh

Sure, that works and looks easy, right? But things start to change if we need to pass arguments to the script.

Let's use the following example script (real ones, of course, will be much more complex):

# local.sh
printf 'Argument is __%s__\n' "$@"

This code is representative of the task, because it lets us check that arguments are seen correctly by the script even when it runs remotely. This is the critical part; if that works, we don't have to worry about the rest of the code; that will mostly "just work" (with some caveats, noted at the end).

So, since the remote shell is reading stdin anyway, this should also work:

$ ssh user@remote 'bash' < local.sh
Argument is ____

And so should this (but doesn't):

$ ssh user@remote 'bash /dev/stdin' < local.sh
bash: /dev/stdin: No such device or address

What's happening here? Let's try to find out:

$ ssh user@remote 'ls -l /dev/stdin' < local.sh
lrwxrwxrwx 1 root root 15 2011-07-08 10:45 /dev/stdin -> /proc/self/fd/0
$ ssh user@remote 'ls -l /proc/self/fd/0' < local.sh
lrwx------ 1 user users 64 2011-08-10 11:04 /proc/self/fd/0 -> socket:[28463861]

So /dev/stdin exists, but it points to a UNIX socket (this is part of how ssh sets things up when connecting). Why does opening it fail?

$ ssh user@remote 'strace bash /dev/stdin' < local.sh
...
open("/dev/stdin", O_RDONLY)            = -1 ENXIO (No such device or address)
...

Some research shows that Linux disallows open()ing a socket, and returns ENXIO when that is attempted. (And yes, "bash < /dev/stdin" fails equally.) Can we work around that? Let's see if cat works:

$ ssh user@remote cat < local.sh 
# local.sh
printf 'Argument is __%s__\n' "$@"

Predictably, it works since cat just reads its stdin (which is set up before it is run) without explicitly attempting to open() it. So we can use this to accomplish our goal:

$ ssh user@remote 'cat | bash /dev/stdin' < local.sh
Argument is ____

This trick turns bash's stdin into a pipe (rather than a socket), which it can thus open successfully.
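Incidentally, the ENXIO behaviour can be reproduced locally, without involving ssh at all; here's a quick check (python3 is assumed, purely as a convenient way to create a socket file):

```shell
# Create a UNIX socket file and try to open() it: the open fails with
# ENXIO ("No such device or address"), just like bash /dev/stdin over ssh.
sock=$(mktemp -u)   # -u only generates a name; python creates the socket
python3 -c "import socket; socket.socket(socket.AF_UNIX).bind('$sock')"
cat "$sock"         # fails: cat cannot open() a socket
rm -f "$sock"
```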

Update 05/04/2013: on recent enough versions of ssh, stdin is a pipe and not a socket, so the simpler version

$ ssh user@remote 'bash /dev/stdin' < local.sh

does actually work, and can be used in place of the more complicated cat | bash /dev/stdin in the following examples.

Now, this may just look like a fancy way of rewriting the original attempt, but it has an important advantage: since, to the remote shell, /dev/stdin looks like the name of the script to run, we can specify arguments after it, as follows:

$ ssh user@remote 'cat | bash /dev/stdin arg1 arg2 arg3' < local.sh
Argument is __arg1__
Argument is __arg2__
Argument is __arg3__

(Thanks to Stéphane Chazelas and Marcel Bruinsma for suggesting the above ideas during an old discussion on comp.unix.shell).

We're almost done. So far, we are hardcoding the arguments in the single-quoted string; it would be nice to have a way of putting variables there. We can create a wrapper script that does the hard work for us:

#!/bin/bash
# runremote.sh
# usage: runremote.sh remoteuser remotehost arg1 arg2 ...

realscript=local.sh
user=$1
host=$2
shift 2

ssh "$user@$host" 'cat | bash /dev/stdin' "$@" < "$realscript"

In the wrapper, "$@" expands to the arguments it was given, each as a separate word. Let's run it:

$ runremote.sh user remote arg1 arg2 arg3
Argument is __arg1__
Argument is __arg2__
Argument is __arg3__
$ runremote.sh user remote arg1 "arg2 with spaces" arg3
Argument is __arg1__
Argument is __arg2__
Argument is __with__
Argument is __spaces__
Argument is __arg3__

Ok, so it's not perfect yet. The problem is that with ssh, the supplied command string is (re)evaluated by the remote shell, and that turns what is meant to be the single argument "arg2 with spaces" into three separate arguments. For the same reason, there may also be problems with other characters that are special to the shell, like globbing characters, escapes and quotes. So the wrapper script needs to escape the arguments it's given before putting them into the command string for the remote ssh. Since we want the wrapper to be transparent, and want to be able to supply arbitrarily complex arguments, the task can rapidly become an escaping nightmare, which is one of the things we wanted to avoid in the first place.
Fortunately, bash has just the right feature for this: the builtin printf command supports the %q specifier:

%q     causes printf to output the corresponding argument in a format that can be reused as shell input.

Let's try it:

$ printf '%q\n' "argument with space"
argument\ with\ space
$ printf '%q\n' "argument with 'single quotes'"
argument\ with\ \'single\ quotes\'
$ printf '%q\n' 'argument with "double quotes"'
argument\ with\ \"double\ quotes\"
$ printf '%q\n' 'argument with *? glob and $ other ` special { chars'
argument\ with\ \*\?\ glob\ and\ \$\ other\ \`\ special\ \{\ chars
$ foo=$(printf '%q\n' 'argument with *? glob and $ other ` special { chars')
$ echo "$foo"
argument\ with\ \*\?\ glob\ and\ \$\ other\ \`\ special\ \{\ chars
$ eval echo "$foo"
argument with *? glob and $ other ` special { chars

Looks good. Since the positional parameters can't be modified individually, we can use an array to store the escaped versions, then use the special "${array[@]}" construct to pass them (it behaves the same as "$@"). We can also generalize the wrapper to accept the name of the local script to run remotely, so:

#!/bin/bash
# runremote.sh
# usage: runremote.sh localscript remoteuser remotehost arg1 arg2 ...

realscript=$1
user=$2
host=$3
shift 3

declare -a args

count=0
for arg in "$@"; do
  args[count]=$(printf '%q' "$arg")
  count=$((count+1))
done

ssh "$user@$host" 'cat | bash /dev/stdin' "${args[@]}" < "$realscript"

Let's try it:

$ runremote.sh local.sh user remote 'arg1 with spaces and "quotes"' 'arg2 with *? glob and $ other ` special { chars'
Argument is __arg1 with spaces and "quotes"__
Argument is __arg2 with *? glob and $ other ` special { chars__

Now runremote.sh can be used to run a local script remotely with arbitrary arguments. Of course, quoting and/or escaping must still be done correctly locally if needed, so that the script sees the intended number of arguments.

Method 2: stdin, revisited

As a variation of the previous method, we could have the wrapper prepend some code to the local script so it magically finds its arguments already set, for example something like this:

#!/bin/bash
# runremote.sh
# usage: runremote.sh localscript remoteuser remotehost arg1 arg2 ...

realscript=$1
user=$2
host=$3
shift 3

# escape the arguments
declare -a args

count=0
for arg in "$@"; do
  args[count]=$(printf '%q' "$arg")
  count=$((count+1))
done

{
  printf '%s\n' "set -- ${args[*]}"
  cat "$realscript"
} | ssh "$user@$host" "cat | bash /dev/stdin"

Note the "${args[*]}" expansion, which should not normally be used, but here it's useful as it expands as a single argument (whereas "${args[@]}" would expand to multiple arguments, each of which would be formatted by printf's format specifier - not what we want here).
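For a quick local illustration of the difference between the two expansions (with made-up, pre-escaped array contents):

```shell
# "${args[*]}" joins the (escaped) arguments into a single word;
# "${args[@]}" keeps them separate, so printf applies its format
# once per resulting argument.
args=('one\ arg' 'two')
printf '%s\n' "set -- ${args[*]}"   # one line:  set -- one\ arg two
printf '%s\n' "set -- ${args[@]}"   # two lines: set -- one\ arg
                                    #            two
```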
Again this works:

$ runremote.sh local.sh user remote 'arg1 with spaces and "quotes"' 'arg2 with *? glob and $ other ` special { chars'
Argument is __arg1 with spaces and "quotes"__
Argument is __arg2 with *? glob and $ other ` special { chars__

Update 19/02/2012: thanks to Joseph's comment I realized that using /dev/stdin with this approach isn't needed at all, since we don't pass any argument directly on the command line. So the code could be changed to directly invoke the shell on the remote system:

#!/bin/bash
# runremote.sh (revised, not dependent upon /dev/stdin)
# usage: runremote.sh localscript remoteuser remotehost arg1 arg2 ...

realscript=$1
user=$2
host=$3
shift 3

# escape the arguments
declare -a args

count=0
for arg in "$@"; do
  args[count]=$(printf '%q' "$arg")
  count=$((count+1))
done

{
  printf '%s\n' "set -- ${args[*]}"
  cat "$realscript"
} | ssh "$user@$host" "bash -s"

This makes it possible to use this approach even on systems that don't have the special /dev/stdin file (or equivalent). (The -s switch seems to be optional with bash, as it will read from stdin anyway, but it may be required with other shells). Thanks Joseph!
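As a local sanity check of this stream-prepending technique, the same kind of stream can be piped into bash -s directly, without involving ssh:

```shell
# Feed a "set -- ..." line plus the script body to a shell reading
# stdin, exactly as the wrapper does over ssh (here locally).
{
  printf '%s\n' 'set -- foo\ bar baz'
  printf '%s\n' 'printf "Argument is __%s__\n" "$@"'
} | bash -s
# Argument is __foo bar__
# Argument is __baz__
```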

Method 3: copy-and-execute

This uses a different approach; the wrapper just copies the file to the remote machine, and runs it:

#!/bin/bash
# runremote.sh
# usage: runremote.sh localscript remoteuser remotehost arg1 arg2 ...

realscript=$1
user=$2
host=$3
shift 3

# escape the arguments
declare -a args

count=0
for arg in "$@"; do
  args[count]=$(printf '%q' "$arg")
  count=$((count+1))
done

scp -q "$realscript" "$user@$host:/some/where/"
ssh "$user@$host" bash "/some/where/$(basename "$realscript")" "${args[@]}"

This does work; however, in my opinion it is less desirable, because it leaves the file around on the remote machine. Ok, the wrapper could be changed to remove it after it's run, but it still looks less clean than the other methods (also, it needs a place to save the file remotely, and it makes multiple ssh connections every time: one to copy the file, one to run it, and optionally another to delete it).
However it does have the advantage that the script's standard input remains available (see Caveats below).
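For completeness, here's a minimal sketch of how the wrapper could clean up after itself, running and deleting the remote copy over a single second connection (the /tmp location, the temporary name and the helper's name are assumptions):

```shell
# Sketch: copy-and-execute with cleanup. The second connection runs the
# script, removes the remote copy and preserves the script's exit status.
runremote_copy() {
  local realscript=$1 user=$2 host=$3
  shift 3
  local args tmpname
  args=$(printf '%q ' "$@")                        # escape the arguments
  tmpname="runremote.$$.$(basename "$realscript")"
  scp -q "$realscript" "$user@$host:/tmp/$tmpname" &&
  ssh "$user@$host" "bash /tmp/$tmpname $args; rc=\$?; rm -f /tmp/$tmpname; exit \$rc"
}
```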

Caveats

The first thing to note is that the scripts we are going to run, even if they reside on the local machine, need to behave correctly on the remote machine: all the paths, commands invoked, temporary files and other references have to be valid on the remote machine, not the local one. Also, the features the script uses must be supported by the remote shell that we invoke. This may seem obvious, but it is easily overlooked, especially if the script is being developed on the local machine.

The second thing to consider is that, if the script is run through the remote shell's standard input, it can't use commands that read from unredirected standard input: if it did, those commands would swallow part or all of the script itself. So make sure that all such commands have their stdin suitably redirected, or alternatively use method three above ("copy-and-execute").
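This caveat is easy to demonstrate locally, with bash -s standing in for the remote shell:

```shell
# The "read" steals the next line of the script, which therefore
# becomes data instead of a command; only the last line executes.
bash -s <<'EOF'
read line
echo "the read grabbed: $line"
echo "only this line actually runs"
EOF
# only this line actually runs
```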

Update 26/08/2011: the method that uses /dev/stdin works fine with Perl too (mostly, the same caveats apply). So now the runremote.sh script can be made even more general by accepting an argument that specifies the command interpreter to run on the remote machine:

#!/bin/bash
# runremote.sh
# usage: runremote.sh localscript interpreter remoteuser remotehost arg1 arg2 ...

realscript=$1
interpreter=$2
user=$3
host=$4
shift 4

declare -a args

count=0
for arg in "$@"; do
  args[count]=$(printf '%q' "$arg")
  count=$((count+1))
done

ssh "$user@$host" "cat | ${interpreter} /dev/stdin" "${args[@]}" < "$realscript"

14 Comments

  1. Pavan says:

    Hey waldner,
    What if I had to run the multiple lines of code (not in a file but available to me) with Method 1 (stdin) and still manage to send arguments? The output should be read in JSON format.

    • waldner says:

      You could save the code into a temporary file and use it. Or this might also work (UNTESTED):

      ssh user@remote 'cat | bash /dev/stdin arg1 arg2 arg3' < <(echo 'your
      multiline
      script
      here')
      
  2. Dennis McRitchie says:

    Thanks very much for this helpful tutorial. Helped me out no end.

    Dennis

  3. Gaya says:

    Hello,

    This looks like an old post, but it was useful for me.
    I have the same requirement: to remotely run a local shell script with arguments.

    Your solution worked fine:
    ssh -q $server 'cat | bash /dev/stdin' "$@" < "$realscript"

    But I have to run the script remotely with sudo, and that doesn't work:

    ssh -q $server sudo -n su - oracle -c 'cat | bash /dev/stdin' "$@" < "$realscript"

    The script is not switching to the oracle user.

    Do you have any solutions?

    • waldner says:

      On reasonably new systems you don't need the "cat | bash" kludge anymore, you can use bash directly. If you do that, you avoid the ssh double-evaluation issues and it works, see this example:

      $ cat script.sh 
      #!/bin/bash
      echo "running remotely, I am $(whoami)"
      $ ssh root@server sudo -n su - normaluser -c "bash /dev/stdin" < script.sh 
      running remotely, I am normaluser
      

      On the other hand, with "cat | bash" it doesn't work:

      $ ssh root@server sudo -n su - normaluser -c "cat | bash /dev/stdin" < script.sh 
      running remotely, I am root
      

      This is because remotely the part in double quotes is reevaluated and the result is as if you had run

      sudo -n su - normaluser -c cat | bash ...

      so only "cat" runs as "normaluser". If you insist on using this "cat | bash" syntax (neither recommended nor needed these days, as said) you have to add another level of protection, eg

      $ ssh root@server sudo -n su - normaluser -c '"cat | bash /dev/stdin"' < script.sh 
      running remotely, I am normaluser
      

      so that the "cat | bash" part remains one argument even after the first round of evaluation.

  4. vijay says:

    try this command :

    ssh hostname 'ksh -s arg1' < script.sh

  5. Joseph says:

    Hey Waldner,

    I ended up going a different route with the AIX hosts (it also does not have /dev/fd/0, but instead you have to go to /proc/??/fd/0, which is not as desirable in my eyes).

    Instead, I did it this way:

    In the script that will be run on the remote host, I created a few variables such as this:

    userID="%%USERID%%"
    primary_group="%%PRIGROUP%%"

    Then, on the local host, I used sed to replace those %% values with the actual variable values:

    sed -e "s|%%USERID%%|ActualUserID|;s|%%PRIGROUP%%|ActualPrimaryGroup|" remote_script.sh | ssh RemoteHost "sh -s"

    (I used | as the field separator because my input will contain "/"s sometimes, but never |)

    It's not quite as clean as using actual parameters (for example, my script is used by our 1st level support people to add accounts to *nix boxes, and sometimes Secondary Groups aren't used), but it does the job pretty nicely!

    Please feel free to critique my method; I take constructive criticism well and always enjoy sharing my discoveries with others. Thanks again for this awesome guide, it has made my shell scripts threefold more efficient.

    • waldner says:

      If /proc/self/fd is available, that could be a perfect replacement for /dev/fd or /dev/stdin. Otherwise, you have to find your PID which makes the code a bit more complex (although the complexity can be hidden in the wrapper script).

      The approach you ended up using avoids the need for /dev/stdin or whatever altogether by having the script not require command line arguments. It's essentially a simple templating system.

      And now that you mention it, I see that it's essentially equivalent to my second approach, and I also realize that we don't need /dev/stdin for that, as the arguments are not passed on the command line to the remote process. So it can indeed be changed to use "sh -s", or "bash -s", or whatever. And thus, you could use this revised second approach, as it doesn't require you to put macros in the script. (But of course I'm a fan of "do what works best for you", so if what you did works fine and does the job, by all means go for it!)

      I'll add a note to update the second approach. Thanks for sharing!

  6. Joseph says:

    Sorry for bringing up an old thread, but I had a couple things.

    First, I just wanted to say thanks a ton for this guide; it's helped solve quite a few issues I had with making some scripts scalable to a large environment (800+ servers).

    Second, is there a way to apply this particular method to AIX? Without the /dev/stdin device file present, I can't seem to think of a way to make this work. Hopefully this isn't a stupid question, I'm a relatively new administrator with only two years of *nix experience. Any help on this would be much appreciated.

    • waldner says:

      I'm afraid I'm not familiar with AIX. Does it have a /dev/fd/0, or something like that? You might be able to use that as a replacement (if it exists). Otherwise, you may try the scp approach, although it's the one I like less.

  7. psena says:

    Great one, thanks!! But when I tried with the perl interpreter, nothing got outputted to the local host stdout, even though the script actually has something to echo out (e.g. the printf Args above, or a simple Hello in $realscript).

    • waldner says:

      It works for me using the runremote.sh script shown in the article and the following Perl script (script.pl):

      #!/usr/bin/perl

      print "hello world, the first argument is $ARGV[0]\n";

      $ runremote.sh script.pl perl user remotehost argument1
      hello world, the first argument is argument1

  8. Balou says:

    Very interesting and useful! Awesome