VirtualBox Linux Host & Guests

Not directly on topic - but since I think there are people here who use
VirtualBox with Nginx I’ll ask. At first I thought I had nginx
configuration problems - but even if I do, I know my core issue is
VirtualBox/Linux networking.

I have a Linux host (AMD Opteron, 6-core, 16GB) with multiple VirtualBox
VMs. After tests and experiments, I’m now using the Intel PRO/1000 T Server
network interface for the guests (I WAS using virtio, but
performance was horrible). The guests are bridged. I’m using Ubuntu
“Precise” for both host & guests, and the Guest Additions are installed on
all guests.
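
For reference, a setup along those lines can be expressed with VBoxManage
on the host; the VM name "web1" and host interface "eth0" below are
placeholders, and the VM must be powered off for modifyvm to take effect:

    # Bridge the guest's first NIC to the host's eth0 (placeholder names)
    VBoxManage modifyvm "web1" --nic1 bridged --bridgeadapter1 eth0
    # 82543GC is the Intel PRO/1000 T Server model; 82545EM is the MT Server
    VBoxManage modifyvm "web1" --nictype1 82543GC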

My first question: is anyone using the virtio interface successfully,
and have you measured its performance to confirm it’s actually working well?
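
For concreteness, a plain iperf run between guest and host is one way to
get such a number (nothing VirtualBox-specific; the address below is a
placeholder for the host’s IP):

    # On the host: start an iperf (version 2) server
    iperf -s
    # In the guest: measure TCP throughput to the host for 30 seconds
    iperf -c 192.168.1.10 -t 30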

But the real question: what adjustments, if any, have you made, either to
VirtualBox or to Linux, to achieve optimum network performance? Having
switched to the Intel interface my Linux guests have gotten much better,
but they’re still not where I think they should be (to me, the
virtualization should be transparent: guests should have the same speed
as the host). I have Windows guests that are working perfectly; it’s
only the Linux guests that have issues.


Daniel

Stick to ‘Intel PRO/1000 MT Server (82545EM)’ for every guest that can
use it; even debian/nginx (6.0.6) blasts with it. I was using virtio for
some time, which does do what it’s supposed to do (lower CPU use and less
virtualization overhead), but the performance s*cks.
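
Switching an existing guest over to that model is a single VBoxManage call
on the host (the VM name is a placeholder, and the VM has to be powered
off first):

    # Emulate the Intel PRO/1000 MT Server (82545EM) on the guest's first NIC
    VBoxManage modifyvm "web1" --nictype1 82545EM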


On 1/29/2013 11:21 AM, itpp2012 wrote:

Stick to ‘Intel PRO/1000 MT Server (82545EM)’ for every guest that can
use it; even debian/nginx (6.0.6) blasts with it. I was using virtio for
some time, which does do what it’s supposed to do (lower CPU use and less
virtualization overhead), but the performance s*cks.

Regarding the virtio - that’s exactly what I found.

Is there a reason to use the ‘MT’ vs the ‘T’ if I only need one
interface?


Daniel

On 1/29/2013 1:47 PM, itpp2012 wrote:

Daniel L. Miller Wrote:

Is there a reason to use the ‘MT’ vs the ‘T’ if I only need one
interface?
Other than better performance and a wider range of support by default, no.
You’re not going to see two NICs anyway. The MT has a few more tuning options
than the T; I’ve done some extensive testing with all of them and their
several drivers, and the MT made better use of the bandwidth with the best
throughput (near real-time), even on a cluster with 120 VMs.

And here I thought I was being clever by using the single-NIC model.
Let’s see how it works…

Hmm… I don’t really see any difference, but I’ll take your word for it and
leave it set to the MT.

But I’m still left wondering why my Linux guest (now using the MT)
is slower than the Windows guest (still using the T).

It’s acceptable speed, just not full throttle, and it leaves me wanting
more. Are there any tweaks you’ve made to either the host or the guest?


Daniel

It also depends on what the host has as a NIC and how its settings are
configured; e.g. QoS and LLTP are useless protocols but take resources away.
Force the VMs to use 1 Gb full duplex. Another thing is to stick to VBox 3.2
and force the GA (Guest Additions) to use timesync only. E.g. use iperf to
push settings and boundaries.
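
One reading of the “1 Gb full duplex” advice (an assumption; it may equally
refer to the host NIC’s own driver settings) is to force the link inside each
Linux guest with ethtool; the iperf check is the same server/client pair as
the sketch earlier in the thread:

    # Inside a Linux guest, assuming the emulated NIC shows up as eth0:
    # force gigabit full duplex instead of autonegotiation
    sudo ethtool -s eth0 speed 1000 duplex full autoneg off
    # Verify what the driver actually reports afterwards
    ethtool eth0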

