<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Comments on snippet: 'pthread sleeper fairness'</title>
    <description>Snipplr comments feed</description>
    <link>https://snipplr.com/</link>
    <lastBuildDate>Tue, 07 Apr 2026 09:40:34 +0000</lastBuildDate>
    <item>
      <title>deepsoul said on 14/Sep/2009</title>
      <link>https://snipplr.com/view/19655/pthread-sleeper-fairness</link>
      <description>&lt;p&gt;Surprise, surprise: I just tested my Linux box and obtained&#13;
&#13;
    SIGALRM. thread0: 4891, thread1: 412, switch:3&#13;
&#13;
as the most unfair of several runs with the unchanged program.  Setting EXIT_AFTER to 10 gives&#13;
&#13;
    SIGALRM. thread0: 9090, thread1: 43200, switch:15&#13;
&#13;
as the worst result.  With EXIT_AFTER=100 the worst case is:&#13;
&#13;
    SIGALRM. thread0: 184344, thread1: 338777, switch:63&#13;
&#13;
As one can see, the results become fairer as the test time and the number of task switches increase.&#13;
&#13;
Probable reason: my kernel was compiled with the option `CONFIG_HZ_250=y`.  This means the timer interrupt frequency on which the scheduler depends is set to 250 Hz.  The kernel config help calls this the compromise between 100 Hz (for servers: fewer interrupts, which would otherwise reduce overall performance) and 1000 Hz (for desktops: fastest response to user events).  A low timer frequency gives infrequent task switches and therefore little "fairness".  Stock distribution kernels tend to be built with a 1000 Hz timer frequency, hence the fair results you saw.  Look at `/boot/config-` to check.&#13;
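
For instance, assuming the config file is suffixed with the running kernel's version (as the follow-up comments clarify) and installed under `/boot`, a quick sketch:&#13;

```shell
# Read the timer frequency the running kernel was built with.
# Assumes the distribution installs its build config under /boot;
# the path suffix from `uname -r` is an assumption about your setup.
grep '^CONFIG_HZ' "/boot/config-$(uname -r)"
```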
&#13;
Thanks for the program!  I will test some other Linux boxes I have access to which run distribution kernels and post the results.&lt;/p&gt;</description>
      <pubDate>Mon, 14 Sep 2009 17:11:57 UTC</pubDate>
      <guid>https://snipplr.com/view/19655/pthread-sleeper-fairness</guid>
    </item>
    <item>
      <title>deepsoul said on 14/Sep/2009</title>
      <link>https://snipplr.com/view/19655/pthread-sleeper-fairness</link>
      <description>&lt;p&gt;That should have been: Look at `/boot/config-` to check.&lt;/p&gt;</description>
      <pubDate>Mon, 14 Sep 2009 17:16:24 UTC</pubDate>
      <guid>https://snipplr.com/view/19655/pthread-sleeper-fairness</guid>
    </item>
    <item>
      <title>deepsoul said on 14/Sep/2009</title>
      <link>https://snipplr.com/view/19655/pthread-sleeper-fairness</link>
      <description>&lt;p&gt;Another try: Look at `/boot/config-`&lt;your kernel version&gt; to check.&#13;
Sorry this took so long.&lt;/p&gt;</description>
      <pubDate>Mon, 14 Sep 2009 17:19:11 UTC</pubDate>
      <guid>https://snipplr.com/view/19655/pthread-sleeper-fairness</guid>
    </item>
    <item>
      <title>keigoi said on 15/Sep/2009</title>
      <link>https://snipplr.com/view/19655/pthread-sleeper-fairness</link>
      <description>&lt;p&gt;Thanks for comments, deepsoul!&#13;
&#13;
Actually, this code simulates context switching in the OCaml programming language (native code, compiled with ocamlopt). I suppose other scripting languages like Ruby and Python might use a similar scheduling mechanism. Sorry for the incomplete explanation; I will revise it.&#13;
&#13;
After googling some words like `fairness pthread scheduling`, I found this Wikipedia page:&#13;
&#13;
&lt;a href="http://en.wikipedia.org/wiki/Completely_Fair_Scheduler"&gt;Completely Fair Scheduler&lt;/a&gt;&#13;
&#13;
According to this, kernels of version 2.6.23 or later, which are equipped with CFS, provide CPU time to each thread fairly, even if a thread has waited a long time on a locked mutex.&#13;
&#13;
I suppose the reason for the unfairness on other OSes is that the 'fairness' of their scheduling algorithms does not take sleeping threads into account.&#13;
&#13;
I found CONFIG_HZ in my Linux (kernel 2.6.26, which is "fair"):&#13;
&lt;pre&gt;&#13;
# CONFIG_HZ_100 is not set&#13;
CONFIG_HZ_250=y&#13;
# CONFIG_HZ_300 is not set&#13;
# CONFIG_HZ_1000 is not set&#13;
CONFIG_HZ=250&#13;
&lt;/pre&gt;&#13;
Hence I suppose the CONFIG_HZ parameter is irrelevant for the fairness in this case.&lt;/p&gt;</description>
      <pubDate>Tue, 15 Sep 2009 00:51:36 UTC</pubDate>
      <guid>https://snipplr.com/view/19655/pthread-sleeper-fairness</guid>
    </item>
    <item>
      <title>deepsoul said on 17/Sep/2009</title>
      <link>https://snipplr.com/view/19655/pthread-sleeper-fairness</link>
      <description>&lt;p&gt;Oh well, there goes that theory.  It would have been strange if a factor of 4 in the timer frequency had caused a factor of 100 in the timescale at which scheduling becomes fair, so in a way this puts the world right.&#13;
&#13;
**But** I have in the meantime tested three other Linux systems.  All of them (including the one above) run kernels of version 2.6.23 or later, so they use the CFS scheduler.  Except for one, they gave the same results as the first machine above - fairness only on minute timescales.  The exception is distinguished by having a single-core CPU; the others are dual-core, or in one case an Intel Atom that is treated as dual-core due to hyperthreading.  The number of cores is the cause of the discrepancy: when I restrict the program to run on one core only (with the command `taskset 0x1`), the scheduling is perfectly fair on all machines at the default test time of one second!  Your program seemed to run almost exclusively on one CPU anyway, but I saw this only visually in xosview, so that may simply give inaccurate readings on multi-core machines.&#13;
&#13;
Finally, I tested a windoze box (compiled and run via Cygwin).  It behaved not quite as fairly as the single-core Linux runs, but decently.  A typical result:&#13;
&#13;
    SIGALRM. thread0: 1970, thread1: 2543, switch:6&lt;/p&gt;</description>
      <pubDate>Thu, 17 Sep 2009 14:26:17 UTC</pubDate>
      <guid>https://snipplr.com/view/19655/pthread-sleeper-fairness</guid>
    </item>
  </channel>
</rss>
