Windows Compilation Madness

No point in keeping this discussion in the Zed thread…

On Jan 2, 2008, at 10:55 AM, M. Edward (Ed) Borasky wrote:

[…]

I could be wrong, but I sense a deeper question in Thurfir’s message.

What is it about Linux (or *BSD or Mac OS X) that avoids all
the compilation problems that arise in the Windows environment?

Can a Windows expert confirm my understanding that:

Unlike Linux (and *BSD, Mac OS X…), Windows does not come packaged
with a development environment and in fact there are multiple
mutually-incompatible development environments (Cygwin, MinGW, VC6,
VC8). For the most part you can not mix-and-match object code (dlls)
created by these different environments with the biggest problem
being that there is no common memory allocation library.

I assume this is why some Windows software comes distributed with its
own collection of ‘standard’ dlls compiled in the same development
environment as the underlying application.

The end result is that the least-common-denominator for all these
environments is source code, which still requires the ‘end-user’ to
install a development environment of some sort and to manage the
compile/link process for Ruby and for any and all 3rd party libraries/
gems/extensions that they need.

I also gather that each of those environments has very different
build utilities (the Windows equivalent of make, config, autoconf,
and so on). For an extension writer, the problem is that they can’t
really know what the build process is going to look like on an
arbitrary Windows system making it very difficult to even distribute
source packages.

So am I hot or cold in understanding the Windows situation?

On Jan 2, 2008, at 9:02 AM, Gary W. wrote:

Unlike Linux (and *BSD, Mac OS X…), Windows does not come packaged
with a development environment and in fact there are multiple
mutually-incompatible development environments (Cygwin, MinGW, VC6,
VC8). For the most part you can not mix-and-match object code (dlls)
created by these different environments with the biggest problem
being that there is no common memory allocation library.

Confirmed that Windows does not come with a dev environment. But
neither does *nix; they rely on your text editor of choice and gcc
(usually). IIRC, there are command-line tools that come with the .NET
framework if you know how to use them. csc.exe is the C# compiler,
although, IIRC, that compiles to the CLR.

I assume this is why some Windows software comes distributed with
its own collection of ‘standard’ dlls compiled in the same
development environment as the underlying application.

No. Most software comes with DLLs because they load lazily, providing
the perception of quicker application startup. Also, DLLs can be
upgraded without reinstalling the entire application, and they can be
shared among other applications on the system. For example, MSXML and
MSHTML are packaged as DLLs and can be readily used by anyone who wants
them.
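For example, from Ruby you can get at MSXML through the win32ole
standard library that ships with the Windows builds (a minimal sketch;
the ProgID is just the stock MSXML one):

    require 'win32ole'

    # Create an MSXML DOM document through COM; the MSXML DLLs are already
    # installed system-wide, so nothing extra has to ship with your app.
    doc = WIN32OLE.new('MSXML2.DOMDocument')
    doc.async = false
    doc.loadXML('<greeting>hello</greeting>')
    puts doc.documentElement.text   # => "hello"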

The end result is that the least-common-denominator for all these
environments is source code, which still requires the ‘end-user’ to
install a development environment of some sort and to manage the
compile/link process for Ruby and for any and all 3rd party
libraries/gems/extensions that they need.

Yes, although I might disagree somewhat with the LCD issue. Many who
package DLLs simply version them as new compiler versions come out so
the correct version can be imported.

I also gather that each of those environments has very different
build utilities (the Windows equivalent of make, config, autoconf,
and so on). For an extension writer, the problem is that they can’t
really know what the build process is going to look like on an
arbitrary Windows system making it very difficult to even distribute
source packages.

No. It’s so totally unlike *nix that you don’t get squat except nmake,
which is a capable version of make. It’s been around since the early
90s. Configuration is less of an issue than on *nix because you don’t
have to ask as many questions about the distro – there are givens
about a Windows installation that you don’t have on *nix. That
essentially takes autoconf and config out of the mix.

Look, Windows dev is a whole different world. The knowledge transfers,
but not absolutely 1:1.

So am I hot or cold in understanding the Windows situation?

Lukewarm. The CLR is one of the sticking points. There doesn't seem to
be a C or C++ compiler that ships standard with Windows and compiles to
native code. That's kind of a bummer.

On 2 Jan 2008, at 17:46, s.ross wrote:

Confirmed that Windows does not come with a dev environment. […] Most
software comes with DLLs because they load lazily, providing the
perception of quicker application startup. Also, DLLs can be upgraded
without reinstalling the entire application, and they can be shared
among other applications on the system. For example, MSXML and MSHTML
are packaged as DLLs and can be readily used by anyone who wants them.

Fair enough for DLLs in general, but loads of Windows apps ship with
their own copy of the C runtime DLL and things like that, which is
what I think Gary was talking about (and then you're not sharing DLLs
at all).

Fred

Gary W. wrote:

Unlike Linux (and *BSD, Mac OS X…), Windows does not come packaged
with a development environment and in fact there are multiple
mutually-incompatible development environments (Cygwin, MinGW, VC6,
VC8). For the most part you can not mix-and-match object code (dlls)
created by these different environments with the biggest problem being
that there is no common memory allocation library.

I assume this is why some Windows software comes distributed with its
own collection of ‘standard’ dlls compiled in the same development
environment as the underlying application.

A DLL is a standard Windows file format (pretty much an executable
file with an exported table of names) and can most often be used across
compilers. The thing that I think differs the most between DLLs is how
C++ classes are represented; here the different compilers have
different, incompatible solutions. Another thing that can differ is
stack calling conventions, but the Windows SDK has a couple of
predefined function prefixes that are compatible across compilers.

Normally, you use an implib tool from your Windows compiler to create a
small lib file that maps to the DLL and is linked statically into your
application. Since DLLs are dynamic, they can be replaced with later
versions as long as the required exported names are still there.
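To see that exported-names table in action from Ruby, you can bind to a
function purely by its DLL name and export name (a small sketch using
the old Win32API extension that ships with the one-click installer; no
import library is involved):

    require 'Win32API'

    # Resolve kernel32.dll's exported GetTickCount symbol at runtime,
    # straight from the DLL's export table.
    get_tick_count = Win32API.new('kernel32', 'GetTickCount', [], 'L')
    puts get_tick_count.call   # milliseconds since the system started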

Best regards,

Jari W.

On Jan 2, 2008, at 10:10 AM, Frederick C. wrote:

Fair enough for DLLs in general, but loads of Windows apps ship with
their own copy of the C runtime DLL and things like that, which is
what I think Gary was talking about (and then you're not sharing
DLLs at all).

True enough, but a Windows developer makes a conscious decision
whether to ship a statically-linked or dynamically linked executable.
Their distributable is far smaller if they ship with the runtimes
dynamically linked, but there is some concern about having the
runtimes stomped on by somebody else. Bear in mind that when linking
in Windows, external references to DLLs refer to specific versions of
the DLL’s import library, and thus to a specific version of the DLL.
This should offer some protection against getting stomped.

Philosophies vary on static versus dynamic linking, but knowing you
have the option means you can choose static and control your own
destiny.

Full disclosure: I don't run Windows except when necessary. I never
develop on Windows. I've forgotten most of what I know about Windows.
BUT… I worked on the MSC/C++ product teams in the olden days when
product was shrink-wrapped and was delivered by ox cart and people
couldn't really spell IDE. Please don't hold it against me :)

The products are solid and the compiler team was always as well-
intentioned as anyone you know in the open source community. There is
just a very corporate face on the whole thing.

I also gather that each of those environments has very different
build utilities (the Windows equivalent of make, config, autoconf,
and so on).

rbconfig.rb stores information about the tools that were used to
compile ruby. extconf.rb uses this information via mkmf.rb to compile
extensions. AFAIK trouble starts if you try to compile an extension
with a different compiler or if the extension relies on libraries that
were not ported to windows.
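For reference, a bare-bones extconf.rb looks something like this (the
extension name is made up); everything compiler-specific comes out of
rbconfig via mkmf:

    # extconf.rb for a hypothetical extension called 'foo'
    require 'mkmf'

    # mkmf takes CC, CFLAGS, LDSHARED etc. from rbconfig, i.e. from the
    # compiler that built this ruby -- which is why mixing compilers hurts.
    have_header('stdio.h') or abort 'missing stdio.h'
    create_makefile('foo')

Running "ruby extconf.rb" and then nmake (for the VC builds) or make
(for mingw/cygwin) on the generated Makefile is exactly where the
mismatch shows up.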

In order to clarify the cygwin side: cygwin includes most of the
libraries available under linux if you use the posix-emulation layer
(or you can easily compile them yourself). But with the emulation
enabled you cannot link with a non-cygwin app and cygwin apps use non-
windows path conventions, which causes some difficulties when calling
these programs from normal windows apps. In my experience, cygwin
works quite well though. If you want to call a cygwin app from a
windows app, you could always use a wrapper script to work around path-
convention issues. So, cygwin can compile most extensions for use with
cygwin ruby but it cannot be used right away to compile stuff for the
pure-windows one-click ruby installer (please somebody correct me if
I’m wrong).
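As an aside, the wrapper-script idea might look roughly like this (just
a sketch; cygpath is the standard cygwin path converter, and the tool
name is a placeholder):

    #!/usr/bin/env ruby
    # Crude wrapper: convert any arguments that look like existing Windows
    # paths to cygwin (POSIX) form before handing them to a cygwin tool.
    args = ARGV.map do |arg|
      File.exist?(arg) ? `cygpath -u '#{arg}'`.chomp : arg
    end
    exec('some-cygwin-tool', *args)   # placeholder program name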

Not quite (close, though). Cygwin is and always will be incompatible
with real Windows software because of its dependence on cygwin1.dll.

One can use cygwin gcc to compile non-cygwin apps (using -mno-cygwin)
that use a version of mingw which ships with cygwin. But one cannot
use cygwin libraries then.

On Jan 2, 2008 12:02 PM, Gary W. [email protected] wrote:

Unlike Linux (and *BSD, Mac OS X…), Windows does not come packaged
with a development environment and in fact there are multiple
mutually-incompatible development environments (Cygwin, MinGW, VC6,
VC8). For the most part you can not mix-and-match object code (dlls)
created by these different environments with the biggest problem
being that there is no common memory allocation library.

Not quite (close, though). Cygwin is and always will be incompatible
with real Windows software because of its dependence on cygwin1.dll.
MinGW uses the VC6 runtime libraries, but the compilers (gcc vs. VC6)
define things slightly differently. With care, you can mix and match
VC6 and MinGW builds.

It’s where things differ with the version of the runtime libraries
that things get hairy. MSVCRT and its upversions (MSVCRT7, MSVCRT71,
MSVCRT8) are all the equivalent of libc (the C runtime). Each has its
own implementation of malloc (with its own heap table), its own file
handle table, etc.
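You can see which runtime a particular ruby build is tied to straight
from rbconfig (a small sketch; if I remember the naming right, the VC6
one-click build reports something like msvcrt-ruby18, while a VC8 build
would name msvcr80 instead):

    require 'rbconfig'

    # The shared-library name encodes the C runtime the interpreter was
    # linked against, so compiled extensions have to match it.
    puts Config::CONFIG['RUBY_SO_NAME']   # e.g. "msvcrt-ruby18" on the VC6 build
    puts Config::CONFIG['LIBRUBY_SO']     # the actual DLL file name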

Linux and Unixes don’t run into this problem because they don’t
typically have more than one version of the C runtime installed at one
time. When (and if) they do, it’s Very Difficult to get software that
you build yourself to use a different C runtime than the default C
runtime. It’s important to note that Solaris and AIX, at least, will
never run into this problem. Their C runtimes are…special. I'm more
familiar with the Solaris approach, which provides a C runtime that
seems to have multiple implementations baked in, marked with version
tags. I'm not sure how they do it, or if my understanding is right,
but that's how it seemed to me. OS X probably does something similar.

I assume this is why some Windows software comes distributed with its
own collection of ‘standard’ dlls compiled in the same development
environment as the underlying application.

Again, not quite. That’s laziness as much as anything, but it’s also
because Microsoft only guarantees that VC6 DLLs are installed. After
that, if you wanted them installed in “standard” locations, you had to
install the software as an Administrator so it could put the runtime
in the right place. It’s often easier to just pack the C runtime with
your software than to worry about crap like that.

I also gather that each of those environments has very different
build utilities (the Windows equivalent of make, config, autoconf,
and so on). For an extension writer, the problem is that they can’t
really know what the build process is going to look like on an
arbitrary Windows system making it very difficult to even distribute
source packages.

That’s close. The average Windows system won’t have anything like
autoconf, though, and Ruby doesn’t offer anything to help with that.
(That, and autoconf doesn’t know how to deal with non-gcc on Windows,
it seems.)

-austin

On Jan 3, 06:41, Marc H. [email protected] wrote:

The MinGW guys should offer a complete package as .exe downloader with
all required tools for compiling (gcc, bison, a c-lib, flex, autoconf,
automake etc…).

Why "should" they?

A 'c-lib'? Mmm, you actually don't know what you are talking about,
right? The Windows C-lib is MSVCRT. (period)

Er… actually they do, but no one really knows how to use it:

  • Download MinGW-5.1.3.exe, install Current or candidate packages
  • Download and install msys-1.0.10.exe
  • Download and install msysDTK-1.0.10.exe

That will give you a *nix like environment for MinGW!

Happy? Hope so. So, now the problems:

Ruby is built with VC6, which brings:

mkmf.rb (used by extconf) will not work without tweaking, since
rbconfig stores information specific to the compiler used to build
ruby, and mingw wouldn't match.
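Concretely, these are the kind of rbconfig values a VC6 build carries,
and none of them mean anything to MinGW's gcc (a rough sketch; the
exact values vary by build):

    require 'rbconfig'

    # A VC6-built ruby records cl.exe-style settings; MinGW expects gcc ones.
    %w[CC CFLAGS LDSHARED DLDFLAGS LIBRUBYARG].each do |key|
      puts "#{key} = #{Config::CONFIG[key]}"
    end
    # On mswin32 these name cl.exe and its flags; a MinGW extconf.rb
    # would need gcc / gcc -shared style values instead.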

Is it just me, or is this thread déjà vu? Maybe I'm getting older, but
I'm getting tired of repeating myself about this.

(Heh, not enough mood, 8:30am here).

Later,

The MinGW guys should offer a complete package as .exe downloader with
all required tools for compiling (gcc, bison, a c-lib, flex, autoconf,
automake etc…).

Some of this has been said, some of it hasn’t. I’m also not 100% on
everything here, but please discuss. This is poorly understood and
needs documenting in order to really progress on the issue. I've heard
too many people saying "bah windows is too much of a pain" - but go
onto rubyforge and look at the OCI download numbers. Whatever your
feelings, that's far from irrelevant. The fact that only a handful of
rubyists are actually capable of building ruby + all of ext/* is quite
sickening, really (at least to me).

On 2 Jan 2008, at 13:02, Gary W. wrote:

No point in keeping this discussion in the Zed thread…

Absolutely. I have been thinking about raising this topic for a long
time, but I’m also working on fixing some parts of it, so I was
holding off until I have time to complete that work. I guess though,
as it’s here, now is as good a time as any. I’ve approached a few
people for assistance on the ideas in the past, but we’re all busy
trying to survive ourselves to really get things going (yet). Even
with a lack of time, if some people want to group together, I’d be
more than happy to donate what I can to getting a more stable
environment up on the Windows platform - and that includes code I’ve
written against the issues.

I could be wrong, but I sense a deeper question in Thurfir’s message.

What is it about Linux (or *BSD or Mac OS X) that avoids all
the compilation problems that arise in the Windows environment?

Can a Windows expert confirm my understanding that:

I am by no means an expert, but I have been studying this recently,
and have come to some understanding of the issues. I’m not much of a C/
C++ programmer, but I have done a lot of building, and sent /
committed patch files to build processes of quite a number of
projects. I also actively use Windows, *nix (in many flavours), and
more recently OS X.

Unlike Linux (and *BSD, Mac OS X…), Windows does not come packaged
with a development environment and in fact there are multiple
mutually-incompatible development environments (Cygwin, MinGW, VC6,
VC8). For the most part you can not mix-and-match object code (dlls)
created by these different environments with the biggest problem
being that there is no common memory allocation library.

Well, actually OS X doesn’t come “pre-packaged” either, but the build
chain there is based on Open Source stuff from the *nix world, and for
ruby, most of the pre-requisites are already available as libs
(readline, curses, openssl, zlib, etc). Once you install Xcode, you
pretty much get a production grade compiler setup for Ruby that will
Just Work.

Under Windows, you can compile 1.8.6 trivially on any of the compilers
mentioned. Ruby herself, in 1.8.6, is a very, very clean compilation and
Just Works. The ext/* stuff, however, is a different story. OpenSSL
compilation on Windows is actually a real pain - you need to get a
copy of perl if you want to use the MS compilers (if you’re using the
pre-packaged build chain that is), and under MinGW you have to rebuild
the directory structure by hand, as the symlinks in the tarball will
not expand properly. I don’t know why the OpenSSL devs ship their
build packages like this, I find it terribly annoying, but it is what
it is.

As for readline, well it’s broken on win32, and several of the builds
out there are archaic. GnuWin32 is useful - but again, stupid choices
make life difficult, such as the pre-compiled binaries being hardcoded
(!!!) to particular locations on disk. It’s just insane - and this
isn’t the fault of the operating system - this is third party
craziness. Other libs have similar issues; I won't go through the
whole ext/* stack now. zlib is easier. Getting just rubygems + ruby up
and running is quite easy, if you use one of the many pre-compiled
openssl builds out there and build + link against that.

As for the MSVCRT issues, I have read many many mixed reports. The
summary is roughly like this, AFAIK:

  • You can mix msvcrt versions safely if you don’t pass structs /
    pointers across the boundaries for which the api has changed. In
    reality, for some applications, this is unworkable - however there are
    several production ready gems out there that are working just fine on
    the OCI (MSVCRT) which are compiled with VS 2005 (MSVCR7/8; I can't
    remember the exact version -> VS pairings, sorry).
  • Due to the above, and several apps having issues with it - as well
    as the bloat it causes, no one likes mixing up MSVCR versions. That’s
    fair enough.
  • I have been told by several MS developers (whether or not this is
    true, someone with more knowledge of the linking strategies and
    A[PB]Is will need to confirm) that it's possible to make these things
    safe using particularly compiled .lib files. I’ve also heard stories
    of shims being put in place, but they may come with a performance
    overhead, and are complex low level things in this domain (I suspect
    also quite specific to particular problems).
  • MinGW and MSVCRT (VS 6) are compatible.
  • Performance on MinGW is up about 30% over VS6, for 1.8.6. The gap
    is closing in 1.9, and isn’t so bad for later MS compilers.
  • I haven't tried Intel's compiler, and someone probably should,
    although it’s buyware.
  • MinGW has cross-compiler capabilities, and that might be worth
    noting here. I know there are a couple of gems out there that are
    built like this. If possible, it might make sense to make something
    available for building pre-compiled gems like this. The problem is, C
    toolchains and dependencies are such a PITA very often.
  • In other build environments, I have built software that doesn’t
    link against the standard runtimes locally at all. I need to become
    more familiar with the build tools and ABI to say more than that though.
  • There are bugs in the 1.8.6 ext/* makefiles which means that
    subshells spawned seem to use different relative / absolute paths in
    different ways. In order to compile everything off of a single ‘make’
    under mingw, I found I had to add -I [absolute_path] AND -I
    [relative_path]. I think the same errors have thrown quite a few
    people who later just gave up. IIRC it’s openssl vs. readline that
    don’t compile properly - and those are a harder two anyway, so people
    get angry and give up very rapidly.
  • The version of bison in MinGW doesn't build parse.c properly
    anymore :(

There is absolutely nothing stopping the ruby community getting into
MinGW and helping everyone out by providing better build chains for
them based on ruby apps and capabilities. Seriously, ruby's core
interpreter (1.8.6) builds really easily on top of the standard lib
under mingw and other OSes, and isn’t packed up as share/buy ware like
the most commonly used perl / python builds on win32 - that's important
to some people (whatever you might think of that as a merit yourself).
More than this though, a dose of pragmatic powerful scripting
capability could go a long way in helping MinGW move forward, so if
anyone is interested, take a serious look at a project like this. Even
just bootstrapping the default required mingwPORT.sh files would go a
long way into making the MinGW build process simpler.

What actually needs “fixing” is we need someone to release a build
chain for the build chains (for ruby herself). I’m working on
something, but I have not had time to finish it, as I’m in a busy
startup at present, so it’s only good for internal use right now. I
want to try and get into working with the rubygems folks once I’m more
familiar with the chains and issues. Also interesting to join forces
with might be the multiruby and related projects (there’s a build-farm
type project somewhere too but I can’t remember the details, Ryan?)

I assume this is why some Windows software comes distributed with
its own collection of ‘standard’ dlls compiled in the same
development environment as the underlying application.

Well, XP comes with two really, a native api, and the MSVCRT. As you
install software, at some point you’ll get an app with newer MSVCR
versions being installed from the re-distributable. There are
disgusting rumors about GPL violations and other complex licensing
issues with linking against their newer libs - but this is just so
totally anti-pragmatic. I found the Wireshark toolchain documentation
very good - and something we should aspire to as a base minimum.

The end result is that the least-common-denominator for all these
environments is source code, which still requires the ‘end-user’ to
install a development environment of some sort and to manage the
compile/link process for Ruby and for any and all 3rd party
libraries/gems/extensions that they need.

This is actually true of all the operating systems. It’s very common
to have it there already on *nix, and many developers install Xcode
very early on in their configuration of a new OSX box, as this is
required for fink / macports / general open source software
compilation. This would be very similar on win32 if only mingw’s build
chain wasn’t so nasty at this point in time.

I also gather that each of those environments has very different
build utilities (the Windows equivalent of make, config, autoconf,
and so on). For an extension writer, the problem is that they can’t
really know what the build process is going to look like on an
arbitrary Windows system making it very difficult to

Well, sure, but most of that actually works for a C app, at least,
there are good example builds around that people can “borrow”. Going
into C++ is a different matter (there are more compiler and api
differences there - I'm also less familiar, though), but let's start
with the most common first.

On Jan 3, 3:14 pm, James T. [email protected] wrote:

Some of this has been said, some of it hasn’t. I’m also not 100% on
everything here, but please discuss.

Most of this topic has been covered by previous ruby-talk and ruby-
core posts.

This is poorly understood and
needs documenting in order to really progress on the issue.

The thing is that this has already been explained too many times,
showing library developers and average users where the problems
reside, what can be done, what can't be done, and proposing some
solutions for it.

I've heard too many people saying "bah windows is too much of a pain" -
but go onto rubyforge and look at the OCI download numbers. Whatever
your feelings, that's far from irrelevant. The fact that only a handful
of rubyists are actually capable of building ruby + all of ext/* is
quite sickening, really (at least to me).

For me too, but every time I send a message to get feedback, one of
three things happens:

A) I don't get any reply (quite common)
B) "move to a real OS" and similar not-so-funny comments, wasting part
of the time I invest (for free) on ruby.
C) another long thread like this with all the problems windows users
face, the C-lib (MSVCRT) and all that, all over again.

No matter if I raise this discussion here (ruby-talk) or on ruby-core,
almost every time I get the same "feedback".

On Jan 3, 3:14 pm, James T. [email protected] wrote:

[…] we're all busy trying to survive ourselves to really get things going (yet). Even
with a lack of time, if some people want to group together, I’d be
more than happy to donate what I can to getting a more stable
environment up on the Windows platform - and that includes code I’ve
written against the issues.

It's funny you mention this, James; it seems you've made some progress
since you contacted me back in November.

[…] I don't know why the OpenSSL devs ship their build packages like this, I find it terribly annoying, but it is what it is.

I agree 100% on this.

just bootstrapping the default required mingwPORT.sh files would go a
long way into making the MinGW build process simpler.

I had more success downloading sources and compiling by hand than using
mingwPORT. After all, there is no way to automate mingwPORT execution.

What actually needs “fixing” is we need someone to release a build
chain for the build chains (for ruby herself). I’m working on
something, but I have not had time to finish it, as I’m in a busy
startup at present, so it’s only good for internal use right now. I
want to try and get into working with the rubygems folks once I’m more
familiar with the chains and issues. Also interesting to join forces
with might be the multiruby and related projects (there’s a build-farm
type project somewhere too but I can’t remember the details, Ryan?)

Too bad we are overlapping each other on this. I thought we could
collaborate since we both aim at the same goal, but it seems not.

Roger P. suggested I bundle MinGW in a gem…:

  1. an 8MB gem (!!!)
  2. because mingw is inside the gem, it will not be easy to hook it into PATH
  3. that doesn't solve the rbconfig issues.

This is actually true of all the operating systems. It’s very common
to have it there already on *nix, and many developers install Xcode
very early on in their configuration of a new OSX box, as this is
required for fink / macports / general open source software
compilation. This would be very similar on win32 if only mingw’s build
chain wasn’t so nasty at this point in time.

The MinGW build chain lacks some docs, but it is not as complex as
people describe it.
The main issue isn't MinGW or Ruby itself, but the extensions bundled
into the ruby source code, which are part of the whole ruby build
process, and also the dependencies used to build these extensions.

Fixing the dependencies will solve the extension issues.

Regards,

On Jan 3, 5:22 pm, Joel VanderWerf [email protected] wrote:

[…] The hard part is setting up the compiler on the user's computer.

Err… the items in that numbered list are the problems with that approach:

  • 8MB for a gem will be problematic, mostly because of how open-uri
    works (and how rubygems uses it).

  • There is no easy way to hook MinGW inside a gem; just adding the
    path will exceed the PATH environment variable size.

  • Still, Ruby built with VC6 needs VC6 to compile, or a hacked rbconfig
    file, which needs tweaks "per installation", since no one installs
    ruby in the same path, drive, etc.

Luis L. wrote:

Roger P. suggested I bundle MinGW in a gem…:

  1. an 8MB gem (!!!)
  2. because mingw is inside the gem, it will not be easy to hook it into PATH
  3. that doesn't solve the rbconfig issues.

That would be great, if it can be made to work. I have ruby programs
that require C compilation to work (code is generated based on user
input in a ruby-based DSL). The hard part is setting up the compiler on
the user’s computer.

The biggest issue is that although this has been discussed many times
over, it’s yet to be solved - in my mind, this means the discussion is
still very much open. The OCI has been massively successful, and I am
grateful, as I started there myself. It is not a real build though,
it’s a package set, and many of us are becoming increasingly dependent
on a real build process for the whole stack. It’s also important to
note that this is far wider than a Ruby issue, this is about Open
Source on Windows in general. (Anyone please note that this doesn’t
mean however, that it’s not a ruby issue - someone needs to solve
this, and who better than the ruby community for just getting it done,
right?)

On 3 Jan 2008, at 14:55, Luis L. wrote:

For me too, but every time I send a message to get feedback, one of
three things happens:

A) I don't get any reply (quite common)
B) "move to a real OS" and similar not-so-funny comments, wasting part
of the time I invest (for free) on ruby.
C) another long thread like this with all the problems windows users
face, the C-lib (MSVCRT) and all that, all over again.

Same here. Moreover, I think there’s more than just you, me and Roger
who’ve been solving this issue on our own in order to deal with the
problems. There’s someone around in #ruby-lang, I don’t remember who,
that’s building everything on VS 2005 and 2008. That must be a
nightmare for some apps. Maybe not so bad for the ‘normal’ rails stacks.

No matter if I raise this discussion here (ruby-talk) or on ruby-core,
almost every time I get the same "feedback".

Well, from my research actually outside of the ruby world, it seems
realistically that a lot of people aren’t certain about what issues
will arise with these MSVCR differences. The MSDN docs are pretty
clear on what can cause problems, and indeed some software does
follow what I personally consider evil implementations that cause
issues - but more importantly, most people can't tell you what, inside
the stdlib(s), will be an issue. Partly because some of this stdlib is
closed source, and monitoring external calls alone doesn't cut it, so
it's not even easy to trace with a tool like IDA. Some of the
PostgreSQL team actually know a lot more, but it’s been a long time
since I’ve been anywhere near them.

By empirical evidence from digging around, I find it’s also common
that people stitch together partial solutions to toolchain problems
all too often in this environment. Indeed most people using MinGW do
just that - I can say this with some confidence as no one has ever
addressed build procedures and sub-shell paths before - whereas I
have seen more than one instance of people running the makefiles
individually, after the main ruby make.

Anyway, I’m not here to insult people, I’m here with a goal of moving
toward doing things properly. In my opinion, that’s building full
stacks from a single build chain, in a single standard toolset, with
minimal commands. Ideally without breaking platform standards or adding
external patches, unless they’ve been forwarded and accepted upstream.
I think most of us agree on this in principle.

Moving on…

I had more success downloading sources and compiling by hand than using
mingwPORT. After all, there is no way to automate mingwPORT execution.

For some builds yes, I’ve had a few fail badly on me, and some are
more complex (e.g. the cyclic dependencies, i.e. gettext + iconv IIRC)

I have had some of the ports building as part of a build script, but
it’s complex and environment setup and subshell problems are a real
pain. My dollar for a truly working ‘export’.

It's funny you mention this, James; it seems you've made some progress
since you contacted me back in November.

Yes, with varying levels of automation. I have 3 sets of scripts
currently, in several flavours supporting building in COMSPEC, MSYS,
and Rake independently, with varying levels of success. I’ve also been
developing off of E:, and this has issues in some places that I still
need to produce and submit patches for. Lack of time has prevented me
from producing solid documentation to date, although I also have some
draft blog posts coming along too, that really would be better in a
wiki. I wonder if any of the implementors have any opinions on this,
wrt strategic documentation positioning. Ofc the JRuby team don’t
quite have the same issue, but they may have some advice.

Too bad we are overlapping each other on this. I thought we could
collaborate since we both aim at the same goal, but it seems not.

Well, we should. There’s no reason why a single build chain can’t
solve all our issues, if we help with patches and sensible designs.
I’m getting relatively close, we’ll compare notes ASAP.

Roger P. suggested me to bundle MinGW in a gem…:

  1. an 8MB gem (!!!)

Pre-built gems must account for a high percentage of bandwidth on
rubyforge; I mean, the OCI is pretty big as is. 8MB isn't too bad by
comparison, but I'm not sure this is the right solution for many
people.

  2. because mingw is inside the gem, it will not be easy to hook it into PATH

Still trying to decide about PATH. Environment preparation is easy
under the current methodology of launching ruby etc under Windows, as
before, I’m looking into rubygems in more detail wrt this problem.
Possibly even some of the rubigen stuff would help, but I’ve gotta
spend some time researching my ideas. There’s also the idea of
emulating execve(7?), which is yet another branch of possibility that
I’ve got stabs for. (albeit cheating by adding false functionality to
a pure-ruby ‘env’ that runs all extension-less files).
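The environment-preparation part, at least, is simple enough to sketch
(hypothetical layout, assuming the toolchain sits inside an installed
gem):

    # Hypothetical launcher: put a bundled MinGW ahead of everything else
    # on PATH for this process and its children only, then hand off.
    mingw_bin = File.join(File.dirname(__FILE__), 'mingw', 'bin')
    ENV['PATH'] = mingw_bin + File::PATH_SEPARATOR + ENV['PATH']
    system('gcc', '--version') or abort 'bundled gcc not found'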

  3. that doesn't solve the rbconfig issues.

Falsifying a build is an odd thing, again rubigen type generation
might make some sense, who knows. Maybe we want to look more closely
at the build stack, or maybe we can deal with the problems purely with
upstream patches. I suspect all of the above might eventually be
required in some ways.

Personally, I think the important first stage is to release an
automated build-chain build-chain. Once we can automate the whole
stack, we can release it at several independent levels of compilation.

On 3 Jan 2008, at 15:22, Joel VanderWerf wrote:

That would be great, if it can be made to work. I have ruby programs
that require C compilation to work (code is generated based on user
input in a ruby-based DSL). The hard part is setting up the compiler
on the user’s computer.

And this is why… :)

We’re also more likely to get help from individuals if the entire
build chain setup procedure can be replicated in more environments
than our development machines. One of the big issues with C/C++ build
chains is the ‘speciality’ of a developers machine. This happens oh so
frequently, even in a professional project, and it makes me angry
because it’s whoever not documenting some dependancy, or not being
aware of what their using. Sure, it’s not easy, but it’s our job!


On 3 Jan 2008, at 15:30, Luis L. wrote:

Still, Ruby built with VC6 needs VC6 to compile, or a hacked rbconfig
file, which needs tweaks "per installation", since no one installs ruby
in the same path, drive, etc.

Right. I don’t think this is the way to go. The OCI and garbage
collect builds should remain as the authors intend them. This will be
the standard for a long time yet, and to impose would make life worse
I think.

The build scripts can be fixed externally, and I don’t mean patching
something installed locally. We should provide full-stack builds for
people to use, and provide options for distribution packaging - it’s
not as large or insurmountable as it may feel to some.

A 7z self-extracting exe can package up ruby and the stdlib into a 4MB
exe. This is another little experimental branch I've been playing with -
originally built to provide a few random tools like a standalone port
forwarder internally in our environments when it was required for a
telco, but it turns out to be an interesting stab against a couple of
installer / distribution ideas. It actually turned out to be as fast,
and as tidy, as rubyscript2exe, for what that pattern is worth.

Anyway I’m in danger of digressing, and I have other things to do
today… :)

When I am back in my home country and have access to my build chain,
we should have a more solid discussion than we did some months ago,
probably off-list. :)

Anyone who wishes to join or even just state an opinion or desire,
PLEASE come and join us. I’ll set something up as a central place, or
hopefully someone else might, and announce it here.

BTW, as far as broken symlinks go there is a pure ruby implementation
of tar (somewhere on rubyforge) which could be used to extract the
symlinks any way deemed appropriate. Or the standard tar patched to
extract them. Directories symlinked so that stuff appears in multiple
places could be a problem, though.

And there is probably symlink support in Vista if somebody wants to
jump on that bandwagon.

Thanks

Michal

On 3 Jan 2008, at 16:37, Michal S. wrote:

BTW, as far as broken symlinks go there is a pure ruby implementation
of tar (somewhere on rubyforge) which could be used to extract the
symlinks any way deemed appropriate. Or the standard tar patched to
extract them. Directories symlinked so that stuff appears in multiple
places could be a problem, though.

And there is probably symlink support in Vista if somebody wants to
jump on that bandwagon.

http://support.microsoft.com/kb/205524

Sadly, the links in question are file links (so junctions are out),
and more than 32 of them (so vista symlinks are out), so any linking
implementations on win32 are still no good.

Copy will work more persistently :)

Thanks for the suggestions though.

On 3 Jan 2008, at 17:04, Gary W. wrote:

I'm not a Windows programmer and I didn't really want to cause a
rehash of all the discussion I've seen before. I was just trying
to get the 'big picture' to understand why Windows is so problematic
in this area.

No problem, Luis started a thread to discuss the other areas, and I’ll
be joining that more actively when I have access to my windows build
chain tools.

Windows Binary Distribution
– same situation as Unix but the enclaves are Windows variations
of NT, XP, Vista and all the different compiler/object code
incarnations. Less common ground to work with. The CLR
is yet another windows binary context to be considered.

Well, there’s plenty of common ground to work with, actually, and they
are strongly versioned too. The problem comes later…

*nix Source Distribution
– standards and common build-chains makes source code
and build processes compatible across wide variety of
systems. Complex packages composed of components from
multiple 3rd parties can be compiled and linked via the
same ‘common’ build-tool chain (because there is only one
per platform). I realize that autoconf is hideous
but it is one of the reasons for the broad source code
compatibility across *nix platforms.

Indeed. And more than that, it’s normally the distro package
maintainers that do any hard work for you.

Windows Source Distribution
– lack of prevalent compiler and common build-chains makes it
difficult to author and distribute a source code package
that builds and installs correctly in all the different
Windows environments

No. The build scripts themselves are OK. The backing toolchains are
an issue.

There are plenty of prevalent compilers, but there are more options
than on *nix, and the options have more differences.

The core of ruby (i.e. excluding the ext/* stuff) builds just fine
with a normal configure and parse.c pre-built, and compiles just fine
under mingw, and all of the MS compilers. All of them.

– the lack of a common build-chain means that it can be
very difficult to mix-and-match 3rd party source distributions,
which is a much rarer problem in *nix environments.

Source distributions are fine. Again, it's purely a tool chain and
version linkage issue. If a build chain was easy to set up, people
wouldn't find it too hard to understand that they need one of three
different binary versions if using pre-built packages, that is, one of
the three different C runtimes that are prevalent. That part of the
issue, I honestly believe, people could deal with. Also, it's
borderline irrelevant for software that doesn't pass data structures
whose ABI has changed from one runtime to the next.

It seems like the nut to crack is the build-chain environment
in Windows. If the build-chain isn’t predictable then it is
going to be pretty hard to avoid an n*m amount of work to get
arbitrary collection of n-packages to work together in m different
Windows environments.

It is the build chain that we need to fix.

It's not that many different environments. All of the MS compilers
(for almost all of the ruby-related software) are actually pretty
identical in terms of build-time usage. That is, the same command
line works for all of the MS compilers.

MinGW is different, but it’s also important as it’s the only legal,
currently available compiler that’s compatible with VS 6 and the
MSVCRT (“current” runtime).

seems like a Ruby/Rake combo that could be built with
an absolute minimum of external dependencies would be a great
tool for bootstrapping a full-featured build-chain.

Right.
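As a very rough sketch of the kind of thing I have in mind (nothing
here is real yet; the URLs and package names are placeholders):

    # Rakefile sketch for bootstrapping a toolchain (all names hypothetical)
    require 'open-uri'

    TOOLS = {
      'binutils' => 'http://example.org/mingw/binutils.tar.gz',
      'gcc-core' => 'http://example.org/mingw/gcc-core.tar.gz',
    }

    directory 'toolchain'

    TOOLS.each do |name, url|
      desc "Fetch and unpack #{name}"
      task name => 'toolchain' do
        File.open("#{name}.tar.gz", 'wb') { |f| f.write(open(url).read) }
        sh "tar xzf #{name}.tar.gz -C toolchain"   # assumes an MSYS tar on PATH
      end
    end

    desc 'Fetch everything needed for a minimal toolchain'
    task :bootstrap => TOOLS.keys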

Here is a really crazy idea. Instead of trying to construct
a build environment on each and every Windows box what if the
build environment was available via the Internet? So a small
tool would query the local system and then instruct a remote
system to build the appropriate dlls/applications for the local
machine. Lots of security implications since you would have
to trust the code that was being delivered by the remote
build tool but of course if you are downloading and installing
a build-tool environment you are already extending trust to
the provider of the build-tool.

I’ve been thinking about it, but more importantly, something like a
gem build server for building native gems is the first step to that
side of it. I’ll be working more on this idea in February, as my
company needs something better than my nasty scripts. This will ofc be
released when ready.

What we may want to do (and probably will) is to provide the necessary
pieces to serve as a pre-compiled build environment, built under the
same environment it targets. That is, three or four
different standard binary build sets, for each of the prevalent
compilers. When separated from the rest of the stdlib, and compressed,
this isn’t too huge either - I might even be able to front the
bandwidth if we can’t do it sanely on rubyforge. I still need to get
into those discussions with the rubygems team, and possibly much later
on Tom C… For the moment, I have much more work to do prior to
that.

Same idea could be done in the Unix context.

Absolutely, I have no intention of locking this to a single context at
all, or hardcoding like GnuWin32 does.

On Jan 3, 2008, at 1:55 PM, Luis L. wrote:

For me too, but every time I send a message to get feedback, one of
three things happens:

A) I don't get any reply (quite common)
B) "move to a real OS" and similar not-so-funny comments, wasting part
of the time I invest (for free) on ruby.
C) another long thread like this with all the problems windows users
face, the C-lib (MSVCRT) and all that, all over again.

I'm not a Windows programmer and I didn't really want to cause a
rehash of all the discussion I've seen before. I was just trying to
get the ‘big picture’ to understand why Windows is so problematic in
this area.

Another attempt at a big picture summary:

*nix Binary Distribution
– limited to enclaves (Solaris, MacOSX, Linux) and has all
the same library version problems as with Windows but on any
particular platform there is more continuity between different
libc versions and only a single prevalent compiler avoiding
various object code linkage problems.

Windows Binary Distribution
– same situation as Unix but the enclaves are Windows variations
of NT, XP, Vista and all the different compiler/object code
incarnations. Less common ground to work with. The CLR
is yet another windows binary context to be considered.

*nix Source Distribution
– standards and common build-chains makes source code
and build processes compatible across wide variety of
systems. Complex packages composed of components from
multiple 3rd parties can be compiled and linked via the
same ‘common’ build-tool chain (because there is only one
per platform). I realize that autoconf is hideous
but it is one of the reasons for the broad source code
compatibility across *nix platforms.

Windows Source Distribution
– lack of prevalent compiler and common build-chains makes it
difficult to author and distribute a source code package
that builds and installs correctly in all the different
Windows environments

– the lack of a common build-chain means that it can be
very difficult to mix-and-match 3rd party source distributions,
which is a much rarer problem in *nix environments.

It seems like the nut to crack is the build-chain environment
in Windows. If the build-chain isn’t predictable then it is
going to be pretty hard to avoid an n*m amount of work to get
arbitrary collection of n-packages to work together in m different
Windows environments.

James T. wrote:

Personally, I think the important first stage is to release an
automated build-chain build-chain. Once we can automate the whole
stack, we can release it at several independent levels of compilation.

Sounds like the right place to focus developer energy. Also
seems like a Ruby/Rake combo that could be built with
an absolute minimum of external dependencies would be a great
tool for bootstrapping a full-featured build-chain.

Here is a really crazy idea. Instead of trying to construct
a build environment on each and every Windows box what if the
build environment was available via the Internet? So a small
tool would query the local system and then instruct a remote
system to build the appropriate dlls/applications for the local
machine. Lots of security implications since you would have
to trust the code that was being delivered by the remote
build tool but of course if you are downloading and installing
a build-tool environment you are already extending trust to
the provider of the build-tool.

Same idea could be done in the Unix context.

Gary W.