Need your recommendations for TCP Server/Client design

Team,

Let me begin by stating that I am still a Ruby novice, although I’ve written some simple apps (sudoku, TCP and UDP servers, and other mundane applications) with the input of the team.

I work with AIX and support more than a hundred servers in a complex, secured environment.
Although some vendors have packages to perform “distributed” remote support, they are not allowed in my environment.
At first I tried to design my own poor man’s distributed package using what is allowed, ssh (port 22), but this did not provide the flexibility to manage all the servers from one centralized location.

So I went ahead and designed a TCP client/server in Ruby that works as follows:

On every server I have a server process listening on a predefined port.
The server gets started from cron, and every 10 minutes cron checks to ensure that the server is still running.

Let’s say the client wants to execute a remote command, like creating a userid on all servers or just checking paging or memory consumption.
It sends a request to the server; the server executes the command and returns the output to the client.

So the client can run:

dshc -s hostname cmd
dshc -p full_path_of_a_file_with_list_of_servers
dshc -a cmd (this version uses a file /etc/servers with the list of all servers)

I also have another client named dshp with the same flags as above; it uses the same TCP server, listening on the same port.
The dshp program is used to push files to one, multiple, or all servers.

All the UNIX admins actually love the application. BTW, dshc and dshp are only executable by root.

However, although we are behind multiple firewalls (at least 6), a scanning tool detected the listener (the TCP server) and marked it as a security risk on a particular server.
I was asked, and of course I complied, to shut down the server on that host.
I was also asked to redesign the tool to add a bit more security, and then they would allow it. They suggested “handshaking” between the client and server, and that the initial communication, or perhaps all communication, should be encrypted. I was asked whether Ruby has encryption support.
So here is where I am looking for some recommendations.

Reading a new book I just acquired, I came across a package called GServer.
I was wondering whether it would be suitable for what I need.
Also, what type of encryption should I use?

They were talking about something like:

Client sends a connection request.
Server replies with the client’s hostname and the time.
Client sends back the time received from the server, together with the command it wants to execute on the remote server.
Server executes the command if it is “happy” with the reply from the client.

Of course all communication must be ciphered.
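
For concreteness, here is a minimal, unencrypted sketch of that handshake using GServer (the port number and class name are made up, and the traffic would still need to be wrapped in SSL or similar):

  require 'gserver'

  class DshServer < GServer
    def serve(sock)
      # Step 2: greet the client with its hostname and the current time.
      peer  = sock.peeraddr[2]
      stamp = Time.now.to_i.to_s
      sock.puts("#{peer} #{stamp}")

      # Steps 3-4: the client must echo the timestamp back along with its
      # command; only then is the command run and its output returned.
      echoed, cmd = sock.gets.to_s.chomp.split(' ', 2)
      if echoed == stamp && cmd
        sock.puts(`#{cmd}`)
      else
        sock.puts('handshake failed')
      end
    end
  end

  # hypothetical port; bind to all interfaces so remote clients can connect
  server = DshServer.new(4000, '0.0.0.0')
  server.start
  server.join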

Any suggestions will be greatly appreciated.

Thank you

Victor

On 23 May 2008, at 22:07, Victor R. wrote:

Client sends back the time received from the server, together with the command it wants to execute on the remote server.
Server executes the command if it is “happy” with the reply from the client.

Of course all communication must be ciphered.

Any suggestions will be greatly appreciated.

Go to the link in my sig and study the Semantic Networking
presentation. There’s an example in there of doing hybrid key crypto,
which is where you use a public key exchange to wrap a symmetric key
for establishing a connection and then use the symmetric key (with
much less processing overhead) to do the actual communication. On the
surface it’s probably overkill for your app, but the included source
code shows how GServer can be used for this kind of tool and as the
whole thing is based on OpenSSL it should be possible to conform with
any security policy you’re working under.
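
To make the hybrid idea concrete, here is a rough sketch with Ruby’s OpenSSL bindings (the file names are made up and error handling is omitted):

  require 'openssl'

  # Client side: generate a random AES session key and wrap it with the
  # server's RSA public key; only the holder of the private key can
  # unwrap it.
  server_pub  = OpenSSL::PKey::RSA.new(File.read('server_pub.pem'))
  cipher      = OpenSSL::Cipher::Cipher.new('aes-256-cbc')
  cipher.encrypt
  session_key = cipher.random_key
  iv          = cipher.random_iv
  wrapped_key = server_pub.public_encrypt(session_key)
  # send wrapped_key and iv across, then encrypt the actual traffic
  # with cipher.update(...) plus cipher.final

  # Server side: unwrap the session key with the RSA private key and set
  # up the matching AES cipher for the rest of the conversation.
  server_key  = OpenSSL::PKey::RSA.new(File.read('server_key.pem'))
  session_key = server_key.private_decrypt(wrapped_key)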

Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net

raise ArgumentError unless @reality.responds_to? :reason

On Fri, May 23, 2008 at 2:07 PM, Victor R. [email protected]
wrote:
[snip]

Server executes the command if it is “happy” with the reply from the client.

Of course all communication must be ciphered.

Any suggestions will be greatly appreciated.

GServer is great. I’d use SSL for encryption. Require the client
app to authenticate to each server via a password. Probably easiest
to check against the root password.

The easiest way to add SSL to any application is to run stunnel on each of your servers and have it proxy to your server listening on a port on the loopback interface. That way your server doesn’t even have to know SSL, and it’s easy to debug. Whatever you do, DO NOT design your own crypto solution; notice the Debian guys couldn’t even make a small “fix” without breaking ssh horribly.
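
For example (port numbers made up), with the Ruby server bound to 127.0.0.1:4000, the stunnel side needs little more than this:

  ; server side: /etc/stunnel/stunnel.conf
  cert = /etc/stunnel/stunnel.pem

  [dsh]
  accept  = 4433
  connect = 127.0.0.1:4000

  ; client side: forward a local port into the remote tunnel
  client = yes

  [dsh]
  accept  = 127.0.0.1:4000
  connect = remotehost:4433

The dsh clients then talk to 127.0.0.1:4000 as before, and everything that crosses the wire is SSL.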

On a side note, there are already free solutions for this sort of
thing… just search freshmeat.net.

Actually, ssh was my first choice, and we used it for a short period until it became impractical.
Here is what I would like to do, so you have a better understanding.
BTW, I’ve been playing with GServer this weekend, but I still don’t get what I need. I have problems on the receiving side: I don’t get all the data sent by the server.
sent by the server.
That being said, here is what we do and the trouble we ran into.

Some facts:

  1. I am a Ruby neophyte, but I don’t give up until I get what I need. My solutions are not always elegant, but they do the job!
  2. ssh IS permitted.
  3. We are fewer than 10 UNIX admins.
  4. We have over 100 AIX servers split among VLANs, each behind different firewalls.
  5. My second solution, the TCP server/client, worked very well. That is, until the security people discovered the listening port and the fact that my server, which was listening on EVERY host, would execute any cmd. True, the client only runs as root, which provides just a bit more security, as you first have to log in with your own ID and then su to root.
  6. root can only be used via su.
  7. The solution I am looking for is to be used only by the sys admins.
  8. My first solution used ssh, as it is fully allowed by the sec group. Since authenticating would be impractical when executing a cmd on over 100 servers, we created public/private keys, which were a pain below the waist to distribute for everyone. Also, since in many instances we needed to run root commands, that was a real problem: we would have to either set up keys for root or implement sudo. That’s why I decided to create my own poor man’s distributed remote command processor.

So, this is what I need to do.

Create an environment where a sys admin:

  1. Log in with her userid, as we do daily, and su to root.
  2. Execute a root cmd remotely on a server or multiple servers and receive the reply on the local server. We use one server as the main server, kind of a control workstation.
  3. The communication between the main (local) server and the remote server(s) must be “secured” (ssh, ssl, encryption, whatever).

That’s in a nutshell!

All suggestions are greatly appreciated.

Thank you

Victor

On Mon, May 26, 2008 at 8:49 AM, Robert K. [email protected] wrote:

2008/5/26 Victor R. [email protected]:

Actually, ssh was my first choice, and we used it for a short period until it became impractical.

From your posting it is not fully clear to me why it was “impractical”.

  6. root can only be used via su.

If ssh is allowed and several people should be allowed to become
“root” on all the machines, then you might as well allow root access
via ssh (probably with password auth disabled for improved security).

  7. The solution I am looking for is to be used only by the sys admins.
  8. My first solution used ssh, as it is fully allowed by the sec group. Since authenticating would be impractical when executing a cmd on over 100 servers, we created public/private keys, which were a pain below the waist to distribute for everyone.

Hm… Normally I would have expected home directories to be shared
via nfs in such a setup. Even if not, you could have automated this.

Also, since in many instances we needed to run root commands, that was a real problem: we would have to either set up keys for root or implement sudo. That’s why I decided to create my own poor man’s distributed remote command processor.

… which was identified as a security threat. :-) As always, there is the tradeoff between security and convenience.

So, this is what I need to do.

Create an environment where a sys admin:

  1. Log in with her userid, as we do daily, and su to root.
  2. Execute a root cmd remotely on a server or multiple servers and receive the reply on the local server. We use one server as the main server, kind of a control workstation.
  3. The communication between the main (local) server and the remote server(s) must be “secured” (ssh, ssl, encryption, whatever).

I’d still use ssh or stunnel. You could even use ssh’s port
forwarding feature to connect to your remote command processor.
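
For instance (port numbers and the login are made up, and host and cmd are assumed to be set already), a client wrapper could open a tunnel and then talk to the command processor over the loopback interface:

  require 'socket'

  # forward local port 14000 to the command processor on the remote host
  system('ssh', '-f', '-N', '-L', '14000:127.0.0.1:4000', "admin@#{host}")

  # the existing client code then connects locally; on the wire the
  # traffic is carried inside the ssh tunnel
  reply = TCPSocket.open('127.0.0.1', 14000) do |sock|
    sock.puts(cmd)
    sock.close_write
    sock.read
  end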

Kind regards

robert

On Mon, 26 May 2008, Victor R. wrote:

  2. ssh IS permitted.
  3. We are fewer than 10 UNIX admins.
  4. We have over 100 AIX servers split among VLANs, each behind different firewalls.

10 admins for only 100 servers? You’ve got it easy!

  8. My first solution used ssh, as it is fully allowed by the sec group. Since authenticating would be impractical when executing a cmd on over 100 servers, we created public/private keys, which were a pain below the waist to distribute for everyone. Also, since in many instances we needed to run root commands, that was a real problem: we would have to either set up keys for root or implement sudo. That’s why I decided to create my own poor man’s distributed remote command processor.

You only need to distribute keys the hard way once. After that, you can
use the existing account to distribute more keys. In my last job, this
was known as the “abuse matt” option since I was the first person to
have
keys everywhere. Using sudo is a very good idea, I highly recommend you
install and configure it.
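
In Ruby that bootstrap step might look something like this (the key path is an assumption; /etc/servers is the host list already mentioned):

  # push a new public key to every host, using the ssh access that
  # already exists
  pubkey = File.read(File.expand_path('~/.ssh/id_rsa.pub')).strip
  File.readlines('/etc/servers').each do |line|
    host = line.strip
    next if host.empty?
    system('ssh', host,
           "mkdir -p ~/.ssh && echo '#{pubkey}' >> ~/.ssh/authorized_keys")
  end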

  1. Log in with her userid, as we do daily, and su to root.
  2. Execute a root cmd remotely on a server or multiple servers and receive the reply on the local server. We use one server as the main server, kind of a control workstation.
  3. The communication between the main (local) server and the remote server(s) must be “secured” (ssh, ssl, encryption, whatever).

Take a look at gsh/ghosts. Written in perl, but it works very well.

– Matt
It’s not what I know that counts.
It’s what I can remember in time to use.

Thanks for the advice and info.

2008/5/24 Aaron T. [email protected]:

The easiest way to add SSL to any application is to run stunnel on each of your servers and have it proxy to your server listening on a port on the loopback interface. That way your server doesn’t even have to know SSL, and it’s easy to debug. Whatever you do, DO NOT design your own crypto solution; notice the Debian guys couldn’t even make a small “fix” without breaking ssh horribly.

Definitely do not cook your own!

On a side note, there are already free solutions for this sort of
thing… just search freshmeat.net.

Yet another alternative might be to just use ssh, i.e. replace your daemon with sshd and execute commands directly via ssh. This also allows for secure file transfers (scp). dshc and dshp then become wrappers around an ssh call. Note that with ssh-agent you don’t even have to enter passwords for all the servers.
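
A sketch of dshc reduced to such a wrapper (the root@ login and output format are my assumptions):

  #!/usr/bin/env ruby
  # run a command on every host in /etc/servers via ssh; with keys loaded
  # into ssh-agent there are no password prompts
  cmd = ARGV.join(' ')
  File.readlines('/etc/servers').each do |line|
    host = line.strip
    next if host.empty?
    puts "=== #{host} ==="
    system('ssh', "root@#{host}", cmd)
  end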

Kind regards

robert