What hardware should I buy to manage 100k connections per day?

I'm building a fotolog-style site for a company.
It's expected to handle about 100k connections per day and about 1,500
concurrent connections.
What hardware should I buy to handle those numbers of connections?
I'm planning to use Slackware + lighttpd + FastCGI, but perhaps we will
have to reuse some scripts written in PHP; in that case I would use
Slackware + Apache + mod_php + FastCGI.

Thank you for your time

Rodrigo D.

Iplan Networks: [email protected] | www.iplan.com.ar | 5031-6303
Personal: [email protected] | www.rorra.com.ar | 15-5695-6027

I think one will need more information to do that kind of calculation:
what kind of application it is, whether there will be some kind of
database, which database, how large it will be, what kind of queries,
etc.

I also think you have to run some benchmarks on the application
before you can actually know.

My advice is to build the application to be as easy to scale as possible
and start with a good base. Then there will be no problems when you
grow; just throw in another server and you can handle some more
traffic.


Mathias Stjernstrom

I agree with you; the problem is that the contractor wants to negotiate
the hardware with the hosting company before the development stage, so
he is asking me what hardware he should negotiate.
I know that the site should handle about 100k connections per day and
1,500 concurrent connections, that the database will be MySQL and the
web server lighttpd (although it could be Apache), and that the site
will be something like fotolog.net. So there will be a lot of interaction
with the DB, the DB will be huge (we expect about 10 million registered
users), there will be a lot of RMagick work to scale the images, and
there will also be a lot of static content and AdWords.
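
To give an idea of the per-upload work, something like this is roughly what
each photo would go through (just a sketch; resize_to_fit is a real RMagick
call, but the method name, sizes and paths here are placeholders I made up):

    require 'RMagick'   # require 'rmagick' on newer versions of the gem

    # Scale an uploaded photo down to a bounded thumbnail, keeping aspect ratio.
    def make_thumbnail(src_path, dst_path, max_side = 120)
      img   = Magick::Image.read(src_path).first
      thumb = img.resize_to_fit(max_side, max_side)
      thumb.write(dst_path)
    ensure
      # RMagick images hold large C-level pixel buffers; free them explicitly
      # rather than waiting for the GC, or memory use balloons under load.
      img.destroy!   if img
      thumb.destroy! if thumb
    end

Even as a sketch it makes the point that the CPU cost is per upload, not per
page view, which matters when sizing the boxes.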

Thank you

Rodrigo D.



Those contractors :wink:
I have no personal experience with that big/complex a Rails app… yet.
So I guess I have to leave it to the Rails gurus on the list.


Mathias Stjernstrom


I manage a Java Servlet-based app that sounds sort of similar, except
that we don't do any image manipulation on the server and we're smaller
(we upload/download about 10,000 images a day, 500 concurrent
connections max). We run that on a dual Xeon-class machine with 4 GB of
RAM with absolutely no problem. We normally have 90+ percent idle
time. We only really use the CPU when we are doing admin stuff like
rebuilding an index.

I would think that the image manipulation is what is going to
eat up your CPU, and the DB is going to eat up your RAM.

For scalability reasons you should logically separate your DB
server from your webserver/image-manipulation server. If feasible I
would go with three separate tiers of servers. Then if a given tier gives
out, you just add another computer to that tier.
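
To make the separation concrete: the app tier just points at the DB box over
the network instead of at localhost. In Rails that normally lives in
config/database.yml; here's the same thing inline as a rough sketch (the
hostname and credentials are made up):

    require 'active_record'

    # Rough sketch only -- in a real Rails app this goes in config/database.yml.
    ActiveRecord::Base.establish_connection(
      :adapter  => "mysql",     # "mysql2" on newer stacks
      :host     => "db.internal.example.com",
      :database => "fotolog_production",
      :username => "fotolog",
      :password => "secret"
    )

That way, moving the DB to a bigger box later is a config change, not a code
change.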

Greg

Greg F.
The Norcross Group
Forensics for the 21st Century

Have the developer negotiate the hardware required at launch, and have
the contractor use SwitchTower to autoconfigure new hardware in the
time-tested three-tier configuration (web server, application server, DB
server) so that more capacity can be brought in at the application tier
on demand.
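
For what it's worth, those tiers map directly onto SwitchTower roles in
config/deploy.rb. A minimal sketch (the hostnames and repository URL are
placeholders) looks roughly like this:

    # config/deploy.rb -- sketch only, hostnames are placeholders
    set :application, "fotolog"
    set :repository,  "svn+ssh://svn.example.com/#{application}/trunk"

    role :web, "web1.example.com"                        # lighttpd, static files
    role :app, "app1.example.com", "app2.example.com"    # FCGI listeners
    role :db,  "db1.example.com", :primary => true       # MySQL

Adding app-tier capacity later is then just a matter of adding another
hostname to the :app line and re-running the setup/deploy tasks.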

Lately, I've recommended to new clients to set up on multiple VPS systems
in this configuration. When things get hairy, the systems can easily be
scaled by plugging in new VPS instances, and even to multiple actual
systems in the future.

– Tom M.

Thank you for the info

Tom
> Lately, I've recommended to new clients to set up on multiple VPS systems
> in this configuration. When things get hairy, the systems can easily be
> scaled by plugging in new VPS instances, and even to multiple actual
> systems in the future.

How does a $99 dedicated server's performance compare to 1, 2, or 3+ VPS
instances?

Alain

On Feb 27, 2006, at 10:18 AM, Alain R. wrote:

How does a $99 dedicated server's performance compare to 1, 2, or 3+ VPS
instances?

Alain



I would go with a two- or three-server cluster of dedicated boxes.

You don't want to run an app with 100k hits/day and lots of RMagick
processing on a VPS, as memory will be an issue. Three servers: one for
web (lighttpd, static pages), one for app (FCGI listeners on a box by
themselves), and one for db (just your MySQL server). This is the way
SwitchTower does it and it works great. You can combine web and app
if you want to go with two servers, but three is better.

Just some perspective: I run a Rails site that gets 80,000 page
views/day. It all runs on a dual G5 Xserve with no problem on lighttpd/
FCGI, although it connects to a few legacy DBs on other machines, so
not much DB usage on the Xserve itself. It's really hard to tell how
much hardware you will need before knowing more about the app and how
it will run. So since they want to get the hardware beforehand, make
sure they get three servers. And get the biggest one for your DB.

Cheers-

-Ezra Z.
Yakima Herald-Republic
WebMaster

509-577-7732
[email protected]

OK, thank you.

I asked for two servers (AMD64, 4 GB RAM, and two 180 GB SATA disks in
RAID) and another server for the DB, so three servers: two for web and
app processing and the last one for the DB.
I also mentioned that I don't know how RMagick will affect the
processing (the GD library is known as a processor killer :P), so I said
that perhaps we will need another machine for image processing.


Rodrigo D.



On Feb 27, 2006, at 10:18 AM, Alain R. wrote:

Lately, I've recommended to new clients to set up on multiple VPS
systems in this configuration. When things get hairy, the systems can
easily be scaled by plugging in new VPS instances, and even to multiple
actual systems in the future.

How does a $99 dedicated server's performance compare to 1, 2, or 3+ VPS
instances?

Depends on the size (and cost) of the VPS instances. :slight_smile:

The big differences are having more than one, so that a single "server"
going down doesn't sink the boat, and the ability to bring up additional
capacity easily. With that in mind, it's important to make sure your VPS
instances are running on separate physical hosts to maximize the redundancy.

Everyone needs reliability, but a reliable dedicated box (to the extent
that such a thing even exists) will cost more than a few VPS instances.


– Tom M.

On Feb 27, 2006, at 12:02 PM, Ezra Z. wrote:

I would go with a two- or three-server cluster of dedicated boxes.
You don't want to run an app with 100k hits/day and lots of RMagick
processing on a VPS, as memory will be an issue.

How so, Ezra? VPSs are not one-size-fits-all, and with several VPS
instances you can run a small number of FCGI listeners per instance.

Three servers: one for web (lighttpd, static pages), one for app (FCGI
listeners on a box by themselves), and one for db (just your MySQL server).
This is the way SwitchTower does it and it works great. You can
combine web and app if you want to go with two servers, but three is
better.

I totally agree with the 3 tier setup, and it’s what I recommended.
There’s nothing about SwitchTower that demands 3 servers, or
dedicated servers.

With “just” three servers, as you’ve recommended, with one for web,
one for FCGI, and one for DB, you have a lot of hardware failure
exposure.

I’d rather have a higher number of instances, at a higher level of
utilization (since the instances are lower powered than a dedicated
box) and greater redundancy.

It's really hard to tell how much hardware you will need before
knowing more about the app and how it will run.

Bingo. Keep the configuration flexible, and make sure you have a way
to scale it in advance.

And get the biggest one for your db.

Agreed. In particular, the fastest disks in the DB.


– Tom M.

Tom-

I agree VPS servers can work well for distributing the load. But

when you can get a dedicated server for $69 with a nice AMD processor
and 512 MB, or $79 for 1 GB of RAM, how does that compare to a $69 VPS? Or
even two $30 VPSs? VPSs are nice for being able to snapshot the whole
system, but the memory constraints are not cost-effective as far as
what I have found so far. Just my observations.

Here is an alpha version of a server diagram for two dedicated

servers. One is a stage and one is production. They both mirror each
other, but the stage is less busy, so you can run remote FCGIs on it
once the production box needs more room. And the stage box is there
as a fallback if the production box fails for some reason. It's just a
sketch, but it's working well so far:

http://brainspl.at/rails2servers.png

Cheers-
-Ezra


Tom
>> How does a $99 dedicated server's performance compare to 1, 2, or 3+ VPS
>> instances?
>
> Depends on the size (and cost) of the VPS instances. :slight_smile:

Try again :slight_smile:
One VPS vs one $99 dedicated server.
Two VPS vs two $99 dedicated servers.

By definition, with the VPS you'd be sharing CPU and memory with other
VPS instances (dozens?), and there would be paging/context-switching
overhead. And VPS plans always advertise very small memory sizes
(RimuHosting plans start at 96 MB, for example). What can you do with
96 MB, or even 192 MB?

I understand a VPS is great for budget-challenged projects with modest CPU
needs, but what is it worth once your need for power increases?

Part 2:
Once again: we need a way to measure and compare hosting solutions'
performance: VPS vs shared vs dedicated, RAM size, FCGI vs SCGI vs CGI,
etc…

Alain

On Feb 27, 2006, at 12:40 PM, Ezra Z. wrote:

I agree VPS servers can work well for distributing the load. But
when you can get a dedicated server for $69 with a nice AMD
processor and 512 MB, or $79 for 1 GB of RAM, how does that compare to a
$69 VPS? Or even two $30 VPSs? VPSs are nice for being able to
snapshot the whole system, but the memory constraints are not cost-
effective as far as what I have found so far. Just my observations.

http://www.westhost.com/vps.html

Note: I know nothing about these folks, and I’M NOT RECOMMENDING THEM!

However, there are a lot of VPSs under $10/month to choose from.

So, I think it’s an open question…

Do you want one dedicated server, or 10 VPSs?

Here is an alpha version of a server diagram for two dedicated
servers. One is a stage and one is production. They both mirror
each other, but the stage is less busy, so you can run remote FCGIs
on it once the production box needs more room. And the stage box is
there as a fallback if the production box fails for some reason.
It's just a sketch, but it's working well so far:

http://brainspl.at/rails2servers.png

That looks good, Ezra, though I’m not a fan of non-symmetrical
solutions.

One thing I really need to educate myself on is DB clusters. I’d love
a bunch of boxes sharing the DB load, and I know that this is a hard
problem.

That said, I know enough to know that two-phase commits are a large
part of that difficulty, and that PostgreSQL has them in 8.1…
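
For the curious, here's a minimal sketch of what driving PostgreSQL 8.1's
two-phase commit from Ruby might look like (the hostnames, table, and the use
of the pg gem are all just illustrative; the server also has to allow prepared
transactions via max_prepared_transactions):

    require 'pg'   # assumption: the pg gem; the old 'postgres' driver is similar

    # Sketch: apply the "same" logical change to two databases atomically.
    a = PG.connect(:host => 'db-a.example.com', :dbname => 'fotolog', :user => 'fotolog')
    b = PG.connect(:host => 'db-b.example.com', :dbname => 'fotolog', :user => 'fotolog')

    [a, b].each { |c| c.exec("BEGIN") }
    a.exec("UPDATE counters SET hits = hits + 1 WHERE id = 1")
    b.exec("UPDATE counters SET hits = hits + 1 WHERE id = 1")

    # Phase 1: each side promises it can commit, even across a crash.
    a.exec("PREPARE TRANSACTION 'tx_42'")
    b.exec("PREPARE TRANSACTION 'tx_42'")

    # Phase 2: only after both prepares succeed do we commit for real.
    # (A real transaction manager would handle failures between the phases.)
    a.exec("COMMIT PREPARED 'tx_42'")
    b.exec("COMMIT PREPARED 'tx_42'")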

Here’s a quick sketch of how I’d like to use VPSs or dedicated servers
where that makes sense.

http://homepage.mac.com/tmornini/Logical_Application_Cluster.pdf


– Tom M.

On Feb 27, 2006, at 12:57 PM, Alain R. wrote:

>> How does a $99 dedicated server's performance compare to 1, 2, or 3+ VPS
>> instances?
>
> Depends on the size (and cost) of the VPS instances. :slight_smile:

Try again :slight_smile:
One VPS vs one $99 dedicated server.
Two VPS vs two $99 dedicated servers.

Sorry, Alain, I still cannot compare without (Part 2!) and a specific
VPS setup.

By definition, with the VPS you'd be sharing CPU and memory with
other VPS instances (dozens?), and there would be paging/context-switching
overhead. And VPS plans always advertise very small memory sizes
(RimuHosting plans start at 96 MB, for example). What can you do with
96 MB, or even 192 MB?

In 96 MB you can run lighty or an FCGI or Mongrel instance. In 192 MB you
can run MySQL or PostgreSQL.

What is clear to me is this:

1. A lot of folks with dedicated servers use FAR less than 10% of capacity.
2. If a VPS is only 10% as powerful as, but costs 90% less than, a dedicated
   server, then a VPS is a big win for people in situation #1 (see the quick
   arithmetic after this list).
3. It's a waste to pay for capacity you don't need.
4. It's a shame to design a system that has no real scaling plan. I'm not
   suggesting that the VPS solution automatically provides a plan, but since
   you know up front that you'll LIKELY hit capacity limits (since the VPS is
   far less powerful), you'd better design a plan to scale in advance.
5. A dedicated server, unless they're configured as I've recommended the VPSs
   be configured, is FAR LESS RELIABLE than several VPSs running on different
   physical hosts. And unless #2 is proven false, the VPS solution gives you
   much finer-grained control over scalability, and similar fine-grained
   control over expenses.
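
The quick arithmetic behind #2 (the numbers are illustrative, not real quotes):

    # Back-of-the-envelope cost per unit of capacity.
    dedicated_price    = 99.0   # $/month
    dedicated_capacity = 100.0  # arbitrary "capacity units"

    vps_price    = dedicated_price * 0.10      # "costs 90% less"
    vps_capacity = dedicated_capacity * 0.10   # "only 10% as powerful"

    puts dedicated_price / dedicated_capacity  # => 0.99 $/unit
    puts vps_price / vps_capacity              # => 0.99 $/unit

    # Cost per unit is a wash; the win is only buying the capacity you
    # actually use today, and adding it one small unit at a time.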

I understand VPS is great for budget-challenged projects with
modest CPU needs, but what is it worth once your need for power
increases?

http://homepage.mac.com/tmornini/Logical_Application_Cluster.pdf

Once and once and once again, we need a way to measure and compare
hosting solutions performance: VPS vs shared vs dedicated, RAM
size, fcgi vs scgi vs cgi, etc…

Yep.


– Tom M.

On Feb 27, 2006, at 1:47 PM, Jeremy K. wrote:

That said, I know enough to know that two-phase commits are a large
part of that difficulty, and that PostgreSQL has them in 8.1…

Two-phase commits are necessary for transactions spanning multiple
databases. Most folks can safely ignore this scenario.

And transactions spanning multiple databases are necessary for a true
online read/write cluster, which is what I'm really after. :slight_smile:


– Tom M.


On Feb 27, 2006, at 1:54 PM, Tom M. wrote:

And transactions spanning multiple databases are necessary for a
true online read/write cluster, which is what I'm really after. :slight_smile:

In a master/slave cluster only the master is mutable so a single
transaction is sufficient (thankfully :slight_smile:

jeremy

On Feb 27, 2006, at 2:23 PM, Jeremy K. wrote:

On Feb 27, 2006, at 1:54 PM, Tom M. wrote:

And transactions spanning multiple databases are necessary for a
true online read/write cluster, which is what I'm really after. :slight_smile:

In a master/slave cluster only the master is mutable so a single
transaction is sufficient (thankfully :slight_smile:

Oh, I understand that, but I don’t want master/slave…

I want peer/peer, fully clustered and load balanced.


– Tom M.


On Feb 27, 2006, at 3:32 PM, Tom M. wrote:

I want peer/peer, fully clustered and load balanced.

This is getting quite off-topic… However! In a master/master setup
like ndb or pgcluster you'd also work with a single connection chosen
from equal peers (rather than from readers/writers), so a single
transaction is still sufficient.

Essentially, you’ll never see two-phase commits unless you’re
integrating with a legacy database or working with a transactional
message queue. And thank your lucky stars it’s so: we haven’t even
touched on the distributed transaction manager necessary to
coordinate it all! Yech.

Best,
jeremy