Thursday, March 15, 2007

The ever-leaky abstractions of Microsoft

I was using Windows XP inside Parallels today and got a message from Windows that I had never seen before:

[Screenshot of the Windows XP dialog warning that the system is low on virtual memory and that Windows is increasing the size of the paging file]

How on earth could you design an operating system that could detect there wasn't enough virtual memory, expand the virtual memory pool automatically, but would deny applications' requests for virtual memory at the same time?

5 comments:

Anonymous said...

Well, I can see how an OS could detect that it didn't have enough VM (swap file) to grant an alloc(), return an error to the app, yet inform the (administratively privileged) user that it was going to grow swap dynamically.
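
For what it's worth, a user-mode program can watch roughly the same condition the OS is reacting to by asking Win32 how much page-file headroom is left. A minimal sketch using GlobalMemoryStatusEx() (the 10% threshold is just an arbitrary number for illustration):

    #include <stdio.h>
    #include <windows.h>

    /* Sketch: check how much commit/page-file headroom is left, which is
     * roughly what the low-virtual-memory dialog is warning about. */
    int main(void)
    {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        if (!GlobalMemoryStatusEx(&ms)) {
            fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
            return 1;
        }
        double free_pct = 100.0 * (double)ms.ullAvailPageFile / (double)ms.ullTotalPageFile;
        printf("page file: %.1f%% free\n", free_pct);
        if (free_pct < 10.0)    /* arbitrary threshold, purely illustrative */
            printf("low on virtual memory; Windows may be about to grow the paging file\n");
        return 0;
    }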

However, the truly funny thing in your post is that the dialog says something about minimum space. Uh....didn't you just exceed the maximum.......?

;-)
R.

Brian Gilstrap said...

LOL! Excellent point.

Still, I contend that no one in their right mind would design an OS that has enough information to (1) determine there isn't enough VM, (2) inform the user of this via a dialog (tons of coding there), and (3) remedy the situation, yet still (4) barfs at programs asking for the very thing being remedied.

I clearly missed the best humor in this one. But Microsquish yet again missed the mark.

Anonymous said...

[Let me preface this for anyone who doesn't know me that I'm not a knee-jerk Windows apologist! Yes, I do development on Windows, but my first 8 years of paying work were on VMS and UNIX (well, "Ultrix" :-); I've owned 2 Macs and subscribe to Mac|Life (née MacAddict); and my server runs FreeBSD. I'm just trying to deal fairly with technological issues here!]

Well, what do you really think should happen in case (4)?

The remedy in case (3) is to allocate a reasonably large swath of contiguous disk space and stitch it into the set of swap files. That happens when, for example, a malloc() call in (4) comes up short. The recovery act (3) probably also generates a Windows event that any Windows app (including the desktop) can see in its event loop and react to, which is your (2).

But our poor code in (4) might not be able to deal with that Windows event. That call to malloc() might a) be buried in some run-time library that has no notion of the Windows event stream or b) be in a console-mode application that similarly doesn't look at the Windows event stream.

And (at least in Visual C++ 2005, though I expect other compilers are similar) malloc() only sets errno to ENOMEM ("out of memory"). Should MS gin up some other status code like ENOMEMYET that means "I'm working on getting you that memory. Just a second."? Should malloc() really block until the OS has done a bunch of disk I/O?
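
One way application code can cope today, assuming the page file really is being grown in the background, is simply to retry a failed allocation after a short pause. A rough sketch (the helper name and the 250 ms delay are made up for illustration):

    #include <errno.h>
    #include <stdlib.h>
    #include <windows.h>    /* Sleep(); a POSIX port would use nanosleep() instead */

    /* Hypothetical helper: retry a failed malloc() a few times, on the
     * assumption that the OS may be growing the page file behind the scenes. */
    void *malloc_with_retry(size_t size, int max_tries)
    {
        int i;
        for (i = 0; i < max_tries; i++) {
            void *p = malloc(size);
            if (p != NULL)
                return p;
            if (errno != ENOMEM)    /* some other failure: give up right away */
                break;
            Sleep(250);             /* give the pager a moment to grow the swap file */
        }
        return NULL;
    }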

My guess is that the memory subsystem is optimized in such a way that malloc() serves out of a pool and when that pool is saturated the callers get errors until the pool is grown. But that pool growth is managed by a separate process, so as to keep malloc() clean and fast.
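
Purely to illustrate that guess (this is not a claim about Windows' actual internals), a toy pool that fails fast when saturated while something else grows its capacity might look like:

    #include <stddef.h>
    #include <stdlib.h>

    /* Toy model only: allocations fail immediately once the pool is
     * saturated; growing 'capacity' is assumed to happen elsewhere (a
     * background thread or service), so the allocator itself never
     * blocks on disk I/O. Real allocators are far more involved. */
    typedef struct {
        size_t capacity;    /* bytes the pool is currently backed for */
        size_t used;        /* bytes handed out so far */
    } toy_pool;

    void *toy_alloc(toy_pool *pool, size_t size)
    {
        if (pool->used + size > pool->capacity)
            return NULL;            /* caller sees an immediate failure */
        pool->used += size;
        return malloc(size);        /* stand-in for carving up real pages */
    }

    /* ...while, on another thread or in another process, something does: */
    void toy_grow(toy_pool *pool, size_t extra)
    {
        pool->capacity += extra;    /* after the slow disk work has finished */
    }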

I don't think Windows is doing this any worse than UNIX or Mach or ... Or do you know of an OS that actually deals with this better?

R.

Brian Gilstrap said...

On Unix, if the process runs out of heap space, it asks for more (a la sbrk in the old days). If the call to sbrk means the OS itself needs more memory, that is handled automatically and behind the scenes by the kernel. sbrk grows the address space of the process, and malloc takes advantage of the new memory, satisfying the request without failing.
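
A bare-bones illustration of that path (sbrk() is old-fashioned these days, and modern mallocs mostly use mmap(), but it shows the idea):

    #include <stdio.h>
    #include <unistd.h>     /* sbrk() */

    int main(void)
    {
        /* Ask the kernel to grow this process's data segment by 1 MiB.
         * Whatever paging work is needed to back those pages happens
         * behind the scenes in the kernel, not in application code. */
        void *old_break = sbrk(1024 * 1024);
        if (old_break == (void *)-1) {
            perror("sbrk");
            return 1;
        }
        printf("heap grown; the new space starts at %p\n", old_break);
        return 0;
    }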

This is what the default should be. If you have special memory needs and want to know whether a memory request will take 'a long time', then a more specialized API is fine. But the default should preserve the abstraction of the memory model.

Anonymous said...

Thanks for posting! I really enjoyed the report. I've already bookmarked this article.