This is the mail archive of the cygwin mailing list for the Cygwin project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: uptime not reporting CPU usage on Windows 7 (Possibly only when running in VMWare)

On 1 January 2011 16:51, Andrew DeFaria wrote:
> On 01/01/2011 09:08 AM, Andy Koppe wrote:
>>> Why not simply make it better, or at least make it something
>>> that nobody could mistake for "the machine is simply not busy"?
>> Because it's not that simple. Removing /proc/loadavg would be nice,
>> but would break anything that expects it to be there.
> I didn't suggest this.

I think removing /proc/loadavg would be a cleaner solution than
-1/-1/-1 though, so I tried that. Unfortunately it makes 'uptime' and
'top' fail, so that's why the dummy implementation is there.

>> Changing to
>> out-of-range values would also make some sense, but would break stuff
>> that depends on the values being in range.
> Since 0's "in range" it's not really a leap to assume that apps would view
> it as "this machine's not busy at all". I mean as a human I "get it" when I
> see 0 consistently when I *know*, by looking at other measures such as
> TaskMgr/Process Explorer, that the machine is indeed busy. I can, as a
> human, make that judgment call that 0's in this case must mean that Cygwin's
> uptime isn't reporting load average correctly possibly due to some
> challenges on Windows.

Thanks for acknowledging that.

> But to a script or other app this would probably not
> be detected and instead the script or app will simply assume that 0 means
> not busy.

Fair point.

> IMHO it would be far better to report -1 and have the script or
> app fail to call attention to the fact that 0 does not mean not busy in the
> Cygwin/Windows case.

I agree with the aim, but depending on how the number is parsed, the
failure might be non-obvious, for example by causing a wraparound to a
big positive number. And in any case it would result in "Updating
Cygwin broke my script" and "Negative load? Doesn't make sense!"
complaints. Hence I'm not convinced that's a change worth making.

>>>> Also, load information is in
>>>> fact already available from /proc/stat. Here's how to interpret it:
>>>> That's what 'top' uses
>>>> as well.
>>> Great, well then couldn't uptime (and top, etc.) use that?
>> As I'm sure you've read in the description, the numbers in the file
>> are aggregates since system startup. (On Cygwin, they're in
>> microseconds.)
>> A percentage calculated directly from that obviously isn't what you
>> asked for, and it would become near-constant pretty quickly, so it
>> would be no use for seeing how your system is currently doing.
>> 'top' obtains the current load by calculating the differences between
>> successive /proc/stat readings. That's why it pauses for a second
>> before starting to display data. Pausing like that is not an option
>> for the /proc/loadavg driver, because programs expect reads from that
>> file to return immediately.
>> Here's an idea though that might provide a sufficiently useful
>> /proc/loadavg without having a background process polling the
>> performance counters. Read the counters whenever /proc/loadavg or
>> /proc/stat is read, and keep the readings for the last fifteen minutes
>> in a buffer, while discarding readings that are needlessly close
>> together. Use those readings to calculate the 1min/5min/15min
>> averages.
>> Obviously the averages will be rubbish if /proc/loadavg hasn't been
>> read for a while, but the more often it's read, the better it gets. In
>> particular, if you just left 'top' running, you'd get increasingly
>> precise values.

Scratch that; I lost sight of the fact that /proc/loadavg is meant to
average the number of active processes rather than CPU utilization.

The current number of active processes can in fact be worked out from
the results of the undocumented
function. That's used in the implementation of /proc/<pid>/status, and
it would involve iterating over all processes and threads, counting
the number of processes that have running or ready threads.

Of course that would still yield the current state only, so a
background process or the approximate approach outlined above would
still be needed to obtain averages.


