.\" Automatically generated by Pod::Man 2.1801 (Pod::Simple 3.05)
.\"
.\" Standard preamble:
.\" ========================================================================
.de Sp \" Vertical space (when we can't use .PP)
.if t .sp .5v
.if n .sp
..
.de Vb \" Begin verbatim text
.ft CW
.nf
.ne \\$1
..
.de Ve \" End verbatim text
.ft R
.fi
..
.\" Set up some character translations and predefined strings. \*(-- will
.\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left
.\" double quote, and \*(R" will give a right double quote. \*(C+ will
.\" give a nicer C++. Capital omega is used to do unbreakable dashes and
.\" therefore won't be available. \*(C` and \*(C' expand to `' in nroff,
.\" nothing in troff, for use with C<>.
.tr \(*W-
.ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p'
.ie n \{\
. ds -- \(*W-
. ds PI pi
. if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch
. if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\" diablo 12 pitch
. ds L" ""
. ds R" ""
. ds C` ""
. ds C' ""
'br\}
.el\{\
. ds -- \|\(em\|
. ds PI \(*p
. ds L" ``
. ds R" ''
'br\}
.\"
.\" Escape single quotes in literal strings from groff's Unicode transform.
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.\"
.\" If the F register is turned on, we'll generate index entries on stderr for
.\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index
.\" entries marked with X<> in POD. Of course, you'll have to process the
.\" output yourself in some meaningful fashion.
.ie \nF \{\
. de IX
. tm Index:\\$1\t\\n%\t"\\$2"
..
. nr % 0
. rr F
.\}
.el \{\
. de IX
..
.\}
.\"
.\" Accent mark definitions (@(#)ms.acc 1.5 88/02/08 SMI; from UCB 4.2).
.\" Fear. Run. Save yourself. No user-serviceable parts.
. \" fudge factors for nroff and troff
.if n \{\
. ds #H 0
. ds #V .8m
. ds #F .3m
. ds #[ \f1
. ds #] \fP
.\}
.if t \{\
. ds #H ((1u-(\\\\n(.fu%2u))*.13m)
. ds #V .6m
. ds #F 0
. ds #[ \&
. ds #] \&
.\}
. \" simple accents for nroff and troff
.if n \{\
. ds ' \&
. ds ` \&
. ds ^ \&
. ds , \&
. ds ~ ~
. ds /
.\}
.if t \{\
. ds ' \\k:\h'-(\\n(.wu*8/10-\*(#H)'\'\h"|\\n:u"
. ds ` \\k:\h'-(\\n(.wu*8/10-\*(#H)'\`\h'|\\n:u'
. ds ^ \\k:\h'-(\\n(.wu*10/11-\*(#H)'^\h'|\\n:u'
. ds , \\k:\h'-(\\n(.wu*8/10)',\h'|\\n:u'
. ds ~ \\k:\h'-(\\n(.wu-\*(#H-.1m)'~\h'|\\n:u'
. ds / \\k:\h'-(\\n(.wu*8/10-\*(#H)'\z\(sl\h'|\\n:u'
.\}
. \" troff and (daisy-wheel) nroff accents
.ds : \\k:\h'-(\\n(.wu*8/10-\*(#H+.1m+\*(#F)'\v'-\*(#V'\z.\h'.2m+\*(#F'.\h'|\\n:u'\v'\*(#V'
.ds 8 \h'\*(#H'\(*b\h'-\*(#H'
.ds o \\k:\h'-(\\n(.wu+\w'\(de'u-\*(#H)/2u'\v'-.3n'\*(#[\z\(de\v'.3n'\h'|\\n:u'\*(#]
.ds d- \h'\*(#H'\(pd\h'-\w'~'u'\v'-.25m'\f2\(hy\fP\v'.25m'\h'-\*(#H'
.ds D- D\\k:\h'-\w'D'u'\v'-.11m'\z\(hy\v'.11m'\h'|\\n:u'
.ds th \*(#[\v'.3m'\s+1I\s-1\v'-.3m'\h'-(\w'I'u*2/3)'\s-1o\s+1\*(#]
.ds Th \*(#[\s+2I\s-2\h'-\w'I'u*3/5'\v'-.3m'o\v'.3m'\*(#]
.ds ae a\h'-(\w'a'u*4/10)'e
.ds Ae A\h'-(\w'A'u*4/10)'E
. \" corrections for vroff
.if v .ds ~ \\k:\h'-(\\n(.wu*9/10-\*(#H)'\s-2\u~\d\s+2\h'|\\n:u'
.if v .ds ^ \\k:\h'-(\\n(.wu*10/11-\*(#H)'\v'-.4m'^\v'.4m'\h'|\\n:u'
. \" for low resolution devices (crt and lpr)
.if \n(.H>23 .if \n(.V>19 \
\{\
. ds : e
. ds 8 ss
. ds o a
. ds d- d\h'-1'\(ga
. ds D- D\h'-1'\(hy
. ds th \o'bp'
. ds Th \o'LP'
. ds ae ae
. ds Ae AE
.\}
.rm #[ #] #H #V #F C
.\" ========================================================================
.\"
.IX Title "pth 3"
.TH PTH 3 "pthsem 2.0.8" "2.0.8" "pthsem Portable Threads"
.\" For nroff, turn off justification. Always turn off hyphenation; it makes
.\" way too many mistakes in technical documents.
.if n .ad l
.nh
.SH "NAME"
\&\fBpthsem\fR \- \s-1GNU\s0 Portable Threads
.SH "VERSION"
.IX Header "VERSION"
pthsem \s-12.0.8\s0
based on \s-1GNU\s0 Pth
.SH "SYNOPSIS"
.IX Header "SYNOPSIS"
.IP "\fBGlobal Library Management\fR" 4
.IX Item "Global Library Management"
pth_init,
pth_kill,
pth_ctrl,
pth_version.
.IP "\fBThread Attribute Handling\fR" 4
.IX Item "Thread Attribute Handling"
pth_attr_of,
pth_attr_new,
pth_attr_init,
pth_attr_set,
pth_attr_get,
pth_attr_destroy.
.IP "\fBThread Control\fR" 4
.IX Item "Thread Control"
pth_spawn,
pth_once,
pth_self,
pth_suspend,
pth_resume,
pth_yield,
pth_nap,
pth_wait,
pth_cancel,
pth_abort,
pth_raise,
pth_join,
pth_exit.
.IP "\fBUtilities\fR" 4
.IX Item "Utilities"
pth_fdmode,
pth_time,
pth_timeout,
pth_int_time,
pth_sfiodisc.
.IP "\fBCancellation Management\fR" 4
.IX Item "Cancellation Management"
pth_cancel_point,
pth_cancel_state.
.IP "\fBEvent Handling\fR" 4
.IX Item "Event Handling"
pth_event,
pth_event_typeof,
pth_event_extract,
pth_event_concat,
pth_event_isolate,
pth_event_walk,
pth_event_status,
pth_event_free.
.IP "\fBKey-Based Storage\fR" 4
.IX Item "Key-Based Storage"
pth_key_create,
pth_key_delete,
pth_key_setdata,
pth_key_getdata.
.IP "\fBMessage Port Communication\fR" 4
.IX Item "Message Port Communication"
pth_msgport_create,
pth_msgport_destroy,
pth_msgport_find,
pth_msgport_pending,
pth_msgport_put,
pth_msgport_get,
pth_msgport_reply.
.IP "\fBThread Cleanups\fR" 4
.IX Item "Thread Cleanups"
pth_cleanup_push,
pth_cleanup_pop.
.IP "\fBProcess Forking\fR" 4
.IX Item "Process Forking"
pth_atfork_push,
pth_atfork_pop,
pth_fork.
.IP "\fBSynchronization\fR" 4
.IX Item "Synchronization"
pth_mutex_init,
pth_mutex_acquire,
pth_mutex_release,
pth_rwlock_init,
pth_rwlock_acquire,
pth_rwlock_release,
pth_cond_init,
pth_cond_await,
pth_cond_notify,
pth_barrier_init,
pth_barrier_reach.
.IP "\fBSemaphore support\fR" 4
.IX Item "Semaphore support"
pth_sem_init,
pth_sem_dec,
pth_sem_dec_value,
pth_sem_inc,
pth_sem_inc_value,
pth_sem_set_value,
pth_sem_get_value.
.IP "\fBUser-Space Context\fR" 4
.IX Item "User-Space Context"
pth_uctx_create,
pth_uctx_make,
pth_uctx_switch,
pth_uctx_destroy.
.IP "\fBGeneralized \s-1POSIX\s0 Replacement \s-1API\s0\fR" 4
.IX Item "Generalized POSIX Replacement API"
pth_sigwait_ev,
pth_accept_ev,
pth_connect_ev,
pth_select_ev,
pth_poll_ev,
pth_read_ev,
pth_readv_ev,
pth_write_ev,
pth_writev_ev,
pth_recv_ev,
pth_recvfrom_ev,
pth_send_ev,
pth_sendto_ev.
.IP "\fBStandard \s-1POSIX\s0 Replacement \s-1API\s0\fR" 4
.IX Item "Standard POSIX Replacement API"
pth_nanosleep,
pth_usleep,
pth_sleep,
pth_waitpid,
pth_system,
pth_sigmask,
pth_sigwait,
pth_accept,
pth_connect,
pth_select,
pth_pselect,
pth_poll,
pth_read,
pth_readv,
pth_write,
pth_writev,
pth_pread,
pth_pwrite,
pth_recv,
pth_recvfrom,
pth_send,
pth_sendto.
.SH "DESCRIPTION"
.IX Header "DESCRIPTION"
.Vb 5
\& _\|_\|_\|_ _ _
\& | _ \e| |_| |_\|_
\& | |_) | _\|_| \*(Aq_ \e \`\`Only those who attempt
\& | _\|_/| |_| | | | the absurd can achieve
\& |_| \e_\|_|_| |_| the impossible.\*(Aq\*(Aq
.Ve
.PP
\&\fBPth\fR is a very portable \s-1POSIX/ANSI\-C\s0 based library for Unix platforms which
provides non-preemptive priority-based scheduling for multiple threads of
execution (aka `multithreading') inside event-driven applications. All threads
run in the same address space of the application process, but each thread has
its own individual program counter, run-time stack, signal mask and \f(CW\*(C`errno\*(C'\fR
variable.
.PP
The thread scheduling itself is done in a cooperative way, i.e., the threads
are managed and dispatched by a priority\- and event-driven non-preemptive
scheduler. The intention is that this way both better portability and run-time
performance are achieved than with preemptive scheduling. The event facility
allows threads to wait until various types of internal and external events
occur, including pending I/O on file descriptors, asynchronous signals,
elapsed timers, pending I/O on message ports, thread and process termination,
and even results of customized callback functions.
.PP
\&\fBPth\fR also provides an optional emulation \s-1API\s0 for \s-1POSIX\s0.1c threads
(`Pthreads') which can be used for backward compatibility to existing
multithreaded applications. See \fBPth\fR's \fIpthread\fR\|(3) manual page for
details.
.SS "Threading Background"
.IX Subsection "Threading Background"
When programming event-driven applications, usually servers, lots of
regular jobs and one-shot requests have to be processed in parallel.
To efficiently simulate this parallel processing on uniprocessor
machines, we use `multitasking' \*(-- that is, we have the application
ask the operating system to spawn multiple instances of itself. On
Unix, typically the kernel implements multitasking in a preemptive and
priority-based way through heavy-weight processes spawned with \fIfork\fR\|(2).
These processes usually do \fInot\fR share a common address space. Instead
they are clearly separated from each other, and are created by directly
cloning a process address space (although modern kernels use memory
segment mapping and copy-on-write semantics to avoid unnecessary copying
of physical memory).
.PP
The drawbacks are obvious: Sharing data between the processes is
complicated, and can usually only be done efficiently through shared
memory (but which itself is not very portable). Synchronization is
complicated because of the preemptive nature of the Unix scheduler
(one has to use \fIatomic\fR locks, etc). The machine's resources can be
exhausted very quickly when the server application has to serve too many
long-running requests (heavy-weight processes cost memory). And when
each request spawns a sub-process to handle it, the server performance
and responsiveness is horrible (heavy-weight processes cost time to
spawn). Finally, the server application doesn't scale very well with the
load because of these resource problems. In practice, lots of tricks
are usually used to overcome these problems \- ranging from pre-forked
sub-process pools to semi-serialized processing, etc.
.PP
One of the most elegant ways to solve these resource\- and data-sharing
problems is to have multiple \fIlight-weight\fR threads of execution
inside a single (heavy-weight) process, i.e., to use \fImultithreading\fR.
Those \fIthreads\fR usually improve responsiveness and performance of the
application, often improve and simplify the internal program structure,
and most important, require less system resources than heavy-weight
processes. Threads are neither the optimal run-time facility for all
types of applications, nor can all applications benefit from them. But
at least event-driven server applications usually benefit greatly from
using threads.
.SS "The World of Threading"
.IX Subsection "The World of Threading"
Even though lots of documents exist which describe and define the world
of threading, to understand \fBPth\fR, you need only basic knowledge about
threading. The following definitions of thread-related terms should at
least help you understand thread programming enough to allow you to use
\&\fBPth\fR.
.IP "\fBo\fR \fBprocess\fR vs. \fBthread\fR" 2
.IX Item "o process vs. thread"
A process on Unix systems consists of at least the following fundamental
ingredients: \fIvirtual memory table\fR, \fIprogram code\fR, \fIprogram
counter\fR, \fIheap memory\fR, \fIstack memory\fR, \fIstack pointer\fR, \fIfile
descriptor set\fR, \fIsignal table\fR. On every process switch, the kernel
saves and restores these ingredients for the individual processes. On
the other hand, a thread consists of only a private program counter,
stack memory, stack pointer and signal table. All other ingredients, in
particular the virtual memory, are shared with the other threads of the
same process.
.IP "\fBo\fR \fBkernel-space\fR vs. \fBuser-space\fR threading" 2
.IX Item "o kernel-space vs. user-space threading"
Threads on a Unix platform traditionally can be implemented either
inside kernel-space or user-space. When threads are implemented by the
kernel, the thread context switches are performed by the kernel without
the application's knowledge. Similarly, when threads are implemented in
user-space, the thread context switches are performed by an application
library, without the kernel's knowledge. There also are hybrid threading
approaches where, typically, a user-space library binds one or more
user-space threads to one or more kernel-space threads (there usually
called light-weight processes, or LWPs for short).
.Sp
User-space threads are usually more portable and can perform faster
and cheaper context switches (for instance via \fIswapcontext\fR\|(2) or
\&\fIsetjmp\fR\|(3)/\fIlongjmp\fR\|(3)) than kernel based threads. On the other hand,
kernel-space threads can take advantage of multiprocessor machines and
don't have any inherent I/O blocking problems. Kernel-space threads are
usually scheduled in a preemptive way side-by-side with the underlying
processes. User-space threads on the other hand use either preemptive or
non-preemptive scheduling.
.IP "\fBo\fR \fBpreemptive\fR vs. \fBnon-preemptive\fR thread scheduling" 2
.IX Item "o preemptive vs. non-preemptive thread scheduling"
In preemptive scheduling, the scheduler lets a thread execute until a
blocking situation occurs (usually a function call which would block)
or the assigned timeslice elapses. Then it withdraws control from the
thread without giving the thread a chance to object. This is usually
realized by interrupting the thread through a hardware interrupt
signal (for kernel-space threads) or a software interrupt signal (for
user-space threads), like \f(CW\*(C`SIGALRM\*(C'\fR or \f(CW\*(C`SIGVTALRM\*(C'\fR. In non-preemptive
scheduling, once a thread has received control from the scheduler it keeps
it until either a blocking situation occurs (again a function call which
would block and instead switches back to the scheduler) or the thread
explicitly yields control back to the scheduler in a cooperative way.
.IP "\fBo\fR \fBconcurrency\fR vs. \fBparallelism\fR" 2
.IX Item "o concurrency vs. parallelism"
Concurrency exists when at least two threads are \fIin progress\fR at the
same time. Parallelism arises when at least two threads are \fIexecuting\fR
simultaneously. Real parallelism can be only achieved on multiprocessor
machines, of course. But one also usually speaks of parallelism or
\&\fIhigh concurrency\fR in the context of preemptive thread scheduling
and of \fIlow concurrency\fR in the context of non-preemptive thread
scheduling.
.IP "\fBo\fR \fBresponsiveness\fR" 2
.IX Item "o responsiveness"
The responsiveness of a system can be described by the user-visible
delay until the system responds to an external request. When this delay
is small enough and the user doesn't recognize a noticeable delay,
the responsiveness of the system is considered good. When the user
recognizes or is even annoyed by the delay, the responsiveness of the
system is considered bad.
.IP "\fBo\fR \fBreentrant\fR, \fBthread-safe\fR and \fBasynchronous-safe\fR functions" 2
.IX Item "o reentrant, thread-safe and asynchronous-safe functions"
A reentrant function is one that behaves correctly when it is called
simultaneously by several threads and its invocations also execute
simultaneously. Functions that access global state, such as memory or
files, of course need to be carefully designed in order to be reentrant.
Two traditional approaches to solving this problem are caller-supplied
state and thread-specific data.
.Sp
Thread-safety is the avoidance of \fIdata races\fR, i.e., situations
in which data is set to either a correct or an incorrect value depending
upon the (unpredictable) order in which multiple threads access and
modify it. So a function is thread-safe when it still behaves
semantically correctly when called simultaneously by several threads (it
is not required that the invocations also execute simultaneously). The
traditional approach to achieving thread-safety is to wrap a function body
in an internal mutual exclusion lock (aka `mutex'). As you should
recognize, reentrant is a stronger attribute than thread-safe, because
it is harder to achieve and, in particular, results in no run-time
contention between threads. So, a reentrant function is always
thread-safe, but not vice versa.
.Sp
Additionally, there is a related attribute for functions named
asynchronous-safe, which comes into play in conjunction with signal
handlers and is closely related to the problem of reentrant functions. An
asynchronous-safe function is one that can be called safely and without
side-effects from within a signal handler context. Usually very few
functions are of this type, because an application is very restricted in
what it can perform from within a signal handler (especially in which
system functions it is allowed to call). The main reason is that only a
few system functions are officially declared by \s-1POSIX\s0 as guaranteed to
be asynchronous-safe. Asynchronous-safe functions usually have to be
reentrant as well.
.SS "User-Space Threads"
.IX Subsection "User-Space Threads"
User-space threads can be implemented in various ways. The two
traditional approaches are:
.IP "\fB1.\fR" 3
.IX Item "1."
\&\fBMatrix-based explicit dispatching between small units of execution:\fR
.Sp
Here the global procedures of the application are split into small
execution units (each is required to not run for more than a few
milliseconds) and those units are implemented by separate functions.
Then a global matrix is defined which describes the execution (and
perhaps even dependency) order of these functions. The main server
procedure then just dispatches between these units by calling one
function after another, as directed by this matrix. The threads are
created by making more than one jump-trail through this matrix and by
switching between these jump-trails as the corresponding events occur.
.Sp
This approach gives the best possible performance, because one can
fine-tune the threads of execution by adjusting the matrix, and the
scheduling is done explicitly by the application itself. It is also very
portable, because the matrix is just an ordinary data structure, and
functions are a standard feature of \s-1ANSI\s0 C.
.Sp
The disadvantage of this approach is that it is complicated to write
large applications this way, because one quickly gets hundreds(!) of
execution units and the control flow inside such an application is very
hard to understand (because it is interrupted by function borders and
one always has to keep the global dispatching matrix in mind to follow
it). Additionally, all threads
operate on the same execution stack. Although this saves memory, it is
often nasty, because one cannot switch between threads in the middle of
a function. Thus the scheduling borders are the function borders.
.IP "\fB2.\fR" 3
.IX Item "2."
\&\fBContext-based implicit scheduling between threads of execution:\fR
.Sp
Here the idea is that one programs the application as with forked
processes, i.e., one spawns a thread of execution and this runs from
beginning to end without an interrupted control flow. But the control
flow can still be interrupted \- even in the middle of a function.
Actually in a preemptive way, similar to what the kernel does for the
heavy-weight processes, i.e., every few milliseconds the user-space
scheduler switches between the threads of execution. But the thread
itself doesn't recognize this and usually (except for synchronization
issues) doesn't have to care about this.
.Sp
The advantage of this approach is that it's very easy to program,
because the control flow and context of a thread directly follows
a procedure without forced interrupts through function borders.
Additionally, the programming is very similar to a traditional and well
understood \fIfork\fR\|(2) based approach.
.Sp
The disadvantage is that although the general performance is increased,
compared to using approaches based on heavy-weight processes, it is decreased
compared to the matrix-approach above, because the implicit preemptive
scheduling usually performs a lot more context switches (every user-space
context switch costs some overhead even when it is a lot cheaper than a
kernel-level context switch) than the explicit cooperative/non\-preemptive
scheduling.
Finally, there is no really portable \s-1POSIX/ANSI\-C\s0 based way to implement
user-space preemptive threading. Either the platform already has threads,
or one has to hope that some semi-portable package exists for it. And
even those semi-portable packages usually have to deal with assembler
code and other nasty internals and are not easy to port to forthcoming
platforms.
.PP
So, in short: the matrix-dispatching approach is portable and fast, but
nasty to program. The thread scheduling approach is easy to program,
but suffers from synchronization and portability problems caused by its
preemptive nature.
.SS "The Compromise of Pth"
.IX Subsection "The Compromise of Pth"
But why not combine the good aspects of both approaches while avoiding
their bad aspects? That's the goal of \fBPth\fR. \fBPth\fR implements
easy-to-program threads of execution, but avoids the problems of
preemptive scheduling by using non-preemptive scheduling instead.
.PP
This sounds like, and is, a useful approach. Nevertheless, one has to
keep the implications of non-preemptive thread scheduling in mind when
working with \fBPth\fR. The following list summarizes a few essential
points:
.IP "\fBo\fR" 2
.IX Item "o"
\&\fBPth provides maximum portability, but \s-1NOT\s0 the fanciest features\fR.
.Sp
This is because it uses a nifty and portable \s-1POSIX/ANSI\-C\s0 approach for
thread creation (and this way doesn't require any platform-dependent
assembler hacks) and schedules the threads in a non-preemptive way (which
doesn't require unportable facilities like \f(CW\*(C`SIGVTALRM\*(C'\fR). On the other
hand, this way not all fancy threading features can be implemented.
Nevertheless, the available facilities are enough to provide a robust and
full-featured threading system.
.IP "\fBo\fR" 2
.IX Item "o"
\&\fBPth increases the responsiveness and concurrency of an event-driven
application, but \s-1NOT\s0 the concurrency of number-crunching applications\fR.
.Sp
The reason is the non-preemptive scheduling. Number-crunching
applications usually require preemptive scheduling to achieve
concurrency because of their long \s-1CPU\s0 bursts. For them, non-preemptive
scheduling (even together with explicit yielding) provides only the old
concept of `coroutines'. On the other hand, event driven applications
benefit greatly from non-preemptive scheduling. They have only short
\&\s-1CPU\s0 bursts and lots of events to wait on, and this way run faster under
non-preemptive scheduling because no unnecessary context switching
occurs, as is the case under preemptive scheduling. That's why \fBPth\fR
is mainly intended for server type applications, although there is no
technical restriction.
.IP "\fBo\fR" 2
.IX Item "o"
\&\fBPth requires thread-safe functions, but \s-1NOT\s0 reentrant functions\fR.
.Sp
This nice fact exists again because of the nature of non-preemptive
scheduling, where a function isn't interrupted and thus cannot be
reentered before it has returned. This is a great portability benefit,
because thread-safety can be achieved more easily than reentrancy.
In particular, this means that under \fBPth\fR more existing
third-party libraries can be used without side-effects than is the case
for other threading systems.
.IP "\fBo\fR" 2
.IX Item "o"
\&\fBPth doesn't require any kernel support, but can \s-1NOT\s0
benefit from multiprocessor machines\fR.
.Sp
This means that \fBPth\fR runs on almost all Unix kernels, because the
kernel does not need to be aware of the \fBPth\fR threads (because they
are implemented entirely in user-space). On the other hand, it cannot
benefit from the existence of multiprocessors, because for this, kernel
support would be needed. In practice, this is no problem, because
multiprocessor systems are rare, and portability is usually more
important than maximum concurrency.
.SS "The life cycle of a thread"
.IX Subsection "The life cycle of a thread"
To understand the \fBPth\fR Application Programming Interface (\s-1API\s0), it
helps to first understand the life cycle of a thread in the \fBPth\fR
threading system. It can be illustrated with the following directed
graph:
.PP
.Vb 10
\& NEW
\& |
\& V
\& +\-\-\-> READY \-\-\-+
\& | ^ |
\& | | V
\& WAITING <\-\-+\-\- RUNNING
\& |
\& : V
\& SUSPENDED DEAD
.Ve
.PP
When a new thread is created, it is moved into the \fB\s-1NEW\s0\fR queue of the
scheduler. On the next dispatching for this thread, the scheduler picks
it up from there and moves it to the \fB\s-1READY\s0\fR queue. This is a queue
containing all threads which want to perform a \s-1CPU\s0 burst. There they are
queued in priority order. On each dispatching step, the scheduler always
removes only the thread with the highest priority. It then increases the
priority of all remaining threads by 1, to prevent them from `starving'.
.PP
The thread which was removed from the \fB\s-1READY\s0\fR queue is the new
\&\fB\s-1RUNNING\s0\fR thread (there is always just one \fB\s-1RUNNING\s0\fR thread, of
course). The \fB\s-1RUNNING\s0\fR thread is assigned execution control. After
this thread yields execution (either explicitly by yielding execution
or implicitly by calling a function which would block) there are three
possibilities: Either it has terminated, then it is moved to the \fB\s-1DEAD\s0\fR
queue, or it has events on which it wants to wait, then it is moved into
the \fB\s-1WAITING\s0\fR queue. Else it is assumed it wants to perform more \s-1CPU\s0
bursts and immediately enters the \fB\s-1READY\s0\fR queue again.
.PP
Before the next thread is taken out of the \fB\s-1READY\s0\fR queue, the
\&\fB\s-1WAITING\s0\fR queue is checked for pending events. If one or more events
occurred, the threads that are waiting on them are immediately moved to
the \fB\s-1READY\s0\fR queue.
.PP
The purpose of the \fB\s-1NEW\s0\fR queue has to do with the fact that in \fBPth\fR
a thread never directly switches to another thread. A thread always
yields execution to the scheduler and the scheduler dispatches to the
next thread. So a freshly spawned thread has to be kept somewhere until
the scheduler gets a chance to pick it up for scheduling. That is
what the \fB\s-1NEW\s0\fR queue is for.
.PP
The purpose of the \fB\s-1DEAD\s0\fR queue is to support thread joining. When a
thread is marked as unjoinable, it is directly kicked out of the
system after it has terminated. But when it is joinable, it enters the
\&\fB\s-1DEAD\s0\fR queue. There it remains until another thread joins it.
.PP
Finally, there is a special separated queue named \fB\s-1SUSPENDED\s0\fR, to where
threads can be manually moved from the \fB\s-1NEW\s0\fR, \fB\s-1READY\s0\fR or \fB\s-1WAITING\s0\fR
queues by the application. The purpose of this special queue is to
temporarily absorb suspended threads until they are again resumed by
the application. Suspended threads do not cost scheduling or event
handling resources, because they are temporarily completely out of the
scheduler's scope. When a thread is resumed, it is moved back to the
queue from which it originally came and this way re-enters the
scheduler's scope.
.SH "APPLICATION PROGRAMMING INTERFACE (API)"
.IX Header "APPLICATION PROGRAMMING INTERFACE (API)"
In the following, the \fBPth\fR \fIApplication Programming Interface\fR (\s-1API\s0)
is discussed in detail. With the knowledge given above, it should now
be easy to understand how to program threads with this \s-1API\s0. In good
Unix tradition, \fBPth\fR functions use special return values (\f(CW\*(C`NULL\*(C'\fR
in pointer context, \f(CW\*(C`FALSE\*(C'\fR in boolean context and \f(CW\*(C`\-1\*(C'\fR in integer
context) to indicate an error condition and set (or pass through) the
\&\f(CW\*(C`errno\*(C'\fR system variable to pass more details about the error to the
caller.
.SS "Global Library Management"
.IX Subsection "Global Library Management"
The following functions act on the library as a whole. They are used to
initialize and shutdown the scheduler and fetch information from it.
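The usual shape of a \fBPth\fR program, as a hedged sketch using only calls documented in this manual (it requires the \fBPth\fR headers and linking against the library, e.g. \f(CW\*(C`\-lpth\*(C'\fR, so it is not compilable standalone; the worker function and its message are invented for illustration):

```c
#include <stdio.h>
#include <pth.h>   /* requires libpth; link with -lpth */

static void *worker(void *arg) {
    printf("hello from %s\n", (const char *)arg);
    return NULL;
}

int main(void) {
    if (!pth_init())                       /* mandatory first Pth call   */
        return 1;
    pth_t tid = pth_spawn(PTH_ATTR_DEFAULT, worker, "worker");
    pth_join(tid, NULL);                   /* wait for the thread        */
    pth_kill();                            /* shut the threading system
                                              down from the main thread */
    return 0;
}
```

Alternatively, ending the main thread with `pth_exit(0);` instead of `pth_kill()` waits for all remaining threads before terminating the process, as described under \fIpth_kill\fR\|(3) below.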
.IP "int \fBpth_init\fR(void);" 4
.IX Item "int pth_init(void);"
This initializes the \fBPth\fR library. It has to be the first \fBPth\fR \s-1API\s0
function call in an application, and is mandatory. It is usually done at
the beginning of the \fImain()\fR function of the application. This implicitly
spawns the internal scheduler thread and transforms the single execution
unit of the current process into a thread (the `main' thread). It
returns \f(CW\*(C`TRUE\*(C'\fR on success and \f(CW\*(C`FALSE\*(C'\fR on error.
.IP "int \fBpth_kill\fR(void);" 4
.IX Item "int pth_kill(void);"
This kills the \fBPth\fR library. It should be the last \fBPth\fR \s-1API\s0 function call
in an application, but is not really required. It's usually done at the end of
the main function of the application. At least, it has to be called from within
the main thread. It implicitly kills all threads and transforms back the
calling thread into the single execution unit of the underlying process. The
usual way to terminate a \fBPth\fR application is either a simple
`\f(CW\*(C`pth_exit(0);\*(C'\fR' in the main thread (which waits for all other threads to
terminate, kills the threading system and then terminates the process) or a
`\f(CW\*(C`pth_kill(); exit(0)\*(C'\fR' (which immediately kills the threading system and
terminates the process). \fIpth_kill\fR\|(3) returns immediately with a return
code of \f(CW\*(C`FALSE\*(C'\fR if it is not called from within the main thread. Otherwise it
kills the threading system and returns \f(CW\*(C`TRUE\*(C'\fR.
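.Sp
As a minimal sketch (assuming libpth is installed and the application is
linked with \-lpth), the usual lifecycle looks like this:
.Sp
```c
/* Minimal Pth lifecycle sketch; requires libpth (link with -lpth). */
#include <stdio.h>
#include <pth.h>

int main(void)
{
    if (!pth_init()) {          /* must be the first Pth API call */
        perror("pth_init");
        return 1;
    }
    /* ... spawn threads and do the real work here ... */
    pth_kill();                 /* tear down the threading system */
    return 0;
}
```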
.IP "long \fBpth_ctrl\fR(unsigned long \fIquery\fR, ...);" 4
.IX Item "long pth_ctrl(unsigned long query, ...);"
This is a generalized query/control function for the \fBPth\fR library. The
argument \fIquery\fR is a bitmask formed out of one or more \f(CW\*(C`PTH_CTRL_\*(C'\fR\fI\s-1XXXX\s0\fR
queries. Currently the following queries are supported:
.RS 4
.ie n .IP """PTH_CTRL_GETTHREADS""" 4
.el .IP "\f(CWPTH_CTRL_GETTHREADS\fR" 4
.IX Item "PTH_CTRL_GETTHREADS"
This returns the total number of threads currently in existence. This query
actually is formed out of the combination of queries for threads in a
particular state, i.e., the \f(CW\*(C`PTH_CTRL_GETTHREADS\*(C'\fR query is equal to the
OR-combination of all the following specialized queries:
.Sp
\&\f(CW\*(C`PTH_CTRL_GETTHREADS_NEW\*(C'\fR for the number of threads in the
new queue (threads created via \fIpth_spawn\fR\|(3) but still not
scheduled once), \f(CW\*(C`PTH_CTRL_GETTHREADS_READY\*(C'\fR for the number of
threads in the ready queue (threads who want to do \s-1CPU\s0 bursts),
\&\f(CW\*(C`PTH_CTRL_GETTHREADS_RUNNING\*(C'\fR for the number of running threads
(always just one thread!), \f(CW\*(C`PTH_CTRL_GETTHREADS_WAITING\*(C'\fR for
the number of threads in the waiting queue (threads waiting for
events), \f(CW\*(C`PTH_CTRL_GETTHREADS_SUSPENDED\*(C'\fR for the number of
threads in the suspended queue (threads waiting to be resumed) and
\&\f(CW\*(C`PTH_CTRL_GETTHREADS_DEAD\*(C'\fR for the number of threads in the dead
queue (terminated threads waiting for a join).
.ie n .IP """PTH_CTRL_GETAVLOAD""" 4
.el .IP "\f(CWPTH_CTRL_GETAVLOAD\fR" 4
.IX Item "PTH_CTRL_GETAVLOAD"
This requires a second argument of type `\f(CW\*(C`float *\*(C'\fR' (pointer to a floating
point variable). It stores a floating point value describing the exponential
averaged load of the scheduler in this variable. The load is a function of
the number of threads in the ready queue of the scheduler's dispatching unit.
So a load around 1.0 means there is only one ready thread (the standard
situation when the application has no high load). A higher load value means
there are more ready threads that want to do \s-1CPU\s0 bursts. The average
load value is updated once per second only. The return value for this query
is always 0.
.ie n .IP """PTH_CTRL_GETPRIO""" 4
.el .IP "\f(CWPTH_CTRL_GETPRIO\fR" 4
.IX Item "PTH_CTRL_GETPRIO"
This requires a second argument of type `\f(CW\*(C`pth_t\*(C'\fR' which identifies a
thread. It returns the priority (ranging from \f(CW\*(C`PTH_PRIO_MIN\*(C'\fR to
\&\f(CW\*(C`PTH_PRIO_MAX\*(C'\fR) of the given thread.
.ie n .IP """PTH_CTRL_GETNAME""" 4
.el .IP "\f(CWPTH_CTRL_GETNAME\fR" 4
.IX Item "PTH_CTRL_GETNAME"
This requires a second argument of type `\f(CW\*(C`pth_t\*(C'\fR' which identifies a
thread. It returns the name of the given thread, i.e., the return value of
\&\fIpth_ctrl\fR\|(3) should be cast to a `\f(CW\*(C`char *\*(C'\fR'.
.ie n .IP """PTH_CTRL_DUMPSTATE""" 4
.el .IP "\f(CWPTH_CTRL_DUMPSTATE\fR" 4
.IX Item "PTH_CTRL_DUMPSTATE"
This requires a second argument of type `\f(CW\*(C`FILE *\*(C'\fR' to which a summary
of the internal \fBPth\fR library state is written. The main information
currently written out is the state of the thread pool.
.ie n .IP """PTH_CTRL_FAVOURNEW""" 4
.el .IP "\f(CWPTH_CTRL_FAVOURNEW\fR" 4
.IX Item "PTH_CTRL_FAVOURNEW"
This requires a second argument of type `\f(CW\*(C`int\*(C'\fR' which specifies whether
the \fB\s-1GNU\s0 Pth\fR scheduler favours new threads on startup, i.e., whether
they are moved from the new queue to the top (argument is \f(CW\*(C`TRUE\*(C'\fR) or
middle (argument is \f(CW\*(C`FALSE\*(C'\fR) of the ready queue. The default is to
favour new threads to make sure they do not starve already at startup,
although this slightly violates the strict priority based scheduling.
.RE
.RS 4
.Sp
The function returns \f(CW\*(C`\-1\*(C'\fR on error.
.RE
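.Sp
The queries can be sketched as follows (illustrative code, assuming
\&\fIpth_init\fR\|(3) has already been called):
.Sp
```c
/* Illustrative pth_ctrl() queries; requires libpth. */
#include <stdio.h>
#include <pth.h>

static void report_scheduler_state(void)
{
    float load = 0.0;
    long  nthreads = pth_ctrl(PTH_CTRL_GETTHREADS);       /* total count */
    long  nready   = pth_ctrl(PTH_CTRL_GETTHREADS_READY); /* ready only  */

    pth_ctrl(PTH_CTRL_GETAVLOAD, &load);   /* exponentially averaged load */
    fprintf(stderr, "threads=%ld ready=%ld load=%.2f\n",
            nthreads, nready, load);
    pth_ctrl(PTH_CTRL_DUMPSTATE, stderr);  /* dump thread pool summary */
}
```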
.IP "long \fBpth_version\fR(void);" 4
.IX Item "long pth_version(void);"
This function returns a hex-value `0x\fIV\fR\fI\s-1RR\s0\fR\fIT\fR\fI\s-1LL\s0\fR' which describes the
current \fBPth\fR library version. \fIV\fR is the version, \fI\s-1RR\s0\fR the revisions,
\&\fI\s-1LL\s0\fR the level and \fIT\fR the type of the level (alphalevel=0, betalevel=1,
patchlevel=2, etc). For instance \fBPth\fR version 1.0b1 is encoded as 0x100101.
The reason for this unusual mapping is that this way the version number is
steadily \fIincreasing\fR. The same value is also available at compile time as
\&\f(CW\*(C`PTH_VERSION\*(C'\fR.
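.PP
The mapping can be illustrated with a small helper; \fIencode_version()\fR is a
hypothetical function used for illustration only, not part of the \fBPth\fR \s-1API:\s0
.Sp
```c
/* Illustrative encoding of the "0xVRRTLL" version scheme.
 * encode_version() is a hypothetical helper, not part of the Pth API. */
static unsigned long encode_version(unsigned v, unsigned rr,
                                    unsigned t, unsigned ll)
{
    /* V: version, RR: revision, T: level type (0 = alpha, 1 = beta,
     * 2 = patchlevel), LL: level -- packed as hexadecimal digits */
    return ((unsigned long)v  << 20) |
           ((unsigned long)rr << 12) |
           ((unsigned long)t  <<  8) |
            (unsigned long)ll;
}
```
.Sp
For instance, \fBPth\fR 1.0b1 (version 1, revision 00, beta level type 1,
level 01) encodes to 0x100101, matching the value given above.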
.SS "Thread Attribute Handling"
.IX Subsection "Thread Attribute Handling"
Attribute objects are used in \fBPth\fR for two things: First, stand\-alone/unbound
attribute objects are used to store attributes for threads still to be spawned.
Second, bound attribute objects are used to modify attributes of already
existing threads. The following attribute fields exist in attribute objects:
.ie n .IP """PTH_ATTR_PRIO"" (read-write) [""int""]" 4
.el .IP "\f(CWPTH_ATTR_PRIO\fR (read-write) [\f(CWint\fR]" 4
.IX Item "PTH_ATTR_PRIO (read-write) [int]"
Thread Priority between \f(CW\*(C`PTH_PRIO_MIN\*(C'\fR and \f(CW\*(C`PTH_PRIO_MAX\*(C'\fR.
The default is \f(CW\*(C`PTH_PRIO_STD\*(C'\fR.
.ie n .IP """PTH_ATTR_NAME"" (read-write) [""char *""]" 4
.el .IP "\f(CWPTH_ATTR_NAME\fR (read-write) [\f(CWchar *\fR]" 4
.IX Item "PTH_ATTR_NAME (read-write) [char *]"
Name of thread (up to 40 characters are stored only), mainly for debugging
purposes.
.ie n .IP """PTH_ATTR_DISPATCHES"" (read-write) [""int""]" 4
.el .IP "\f(CWPTH_ATTR_DISPATCHES\fR (read-write) [\f(CWint\fR]" 4
.IX Item "PTH_ATTR_DISPATCHES (read-write) [int]"
In bound attribute objects, this field is incremented every time the
context is switched to the associated thread.
.ie n .IP """PTH_ATTR_JOINABLE"" (read-write) [""int""]" 4
.el .IP "\f(CWPTH_ATTR_JOINABLE\fR (read-write) [\f(CWint\fR]" 4
.IX Item "PTH_ATTR_JOINABLE (read-write) [int]"
The thread detachment type, \f(CW\*(C`TRUE\*(C'\fR indicates a joinable thread,
\&\f(CW\*(C`FALSE\*(C'\fR indicates a detached thread. When a thread is detached,
after termination it is immediately kicked out of the system instead of
inserted into the dead queue.
.ie n .IP """PTH_ATTR_CANCEL_STATE"" (read-write) [""unsigned int""]" 4
.el .IP "\f(CWPTH_ATTR_CANCEL_STATE\fR (read-write) [\f(CWunsigned int\fR]" 4
.IX Item "PTH_ATTR_CANCEL_STATE (read-write) [unsigned int]"
The thread cancellation state, i.e., a combination of \f(CW\*(C`PTH_CANCEL_ENABLE\*(C'\fR or
\&\f(CW\*(C`PTH_CANCEL_DISABLE\*(C'\fR and \f(CW\*(C`PTH_CANCEL_DEFERRED\*(C'\fR or
\&\f(CW\*(C`PTH_CANCEL_ASYNCHRONOUS\*(C'\fR.
.ie n .IP """PTH_ATTR_STACK_SIZE"" (read-write) [""unsigned int""]" 4
.el .IP "\f(CWPTH_ATTR_STACK_SIZE\fR (read-write) [\f(CWunsigned int\fR]" 4
.IX Item "PTH_ATTR_STACK_SIZE (read-write) [unsigned int]"
The thread stack size in bytes. Use values lower than 64 \s-1KB\s0 with great care!
.ie n .IP """PTH_ATTR_STACK_ADDR"" (read-write) [""char *""]" 4
.el .IP "\f(CWPTH_ATTR_STACK_ADDR\fR (read-write) [\f(CWchar *\fR]" 4
.IX Item "PTH_ATTR_STACK_ADDR (read-write) [char *]"
A pointer to the lower address of a chunk of \fImalloc\fR\|(3)'ed memory for the
stack.
.ie n .IP """PTH_ATTR_TIME_SPAWN"" (read-only) [""pth_time_t""]" 4
.el .IP "\f(CWPTH_ATTR_TIME_SPAWN\fR (read-only) [\f(CWpth_time_t\fR]" 4
.IX Item "PTH_ATTR_TIME_SPAWN (read-only) [pth_time_t]"
The time when the thread was spawned.
This can be queried only when the attribute object is bound to a thread.
.ie n .IP """PTH_ATTR_TIME_LAST"" (read-only) [""pth_time_t""]" 4
.el .IP "\f(CWPTH_ATTR_TIME_LAST\fR (read-only) [\f(CWpth_time_t\fR]" 4
.IX Item "PTH_ATTR_TIME_LAST (read-only) [pth_time_t]"
The time when the thread was last dispatched.
This can be queried only when the attribute object is bound to a thread.
.ie n .IP """PTH_ATTR_TIME_RAN"" (read-only) [""pth_time_t""]" 4
.el .IP "\f(CWPTH_ATTR_TIME_RAN\fR (read-only) [\f(CWpth_time_t\fR]" 4
.IX Item "PTH_ATTR_TIME_RAN (read-only) [pth_time_t]"
The total time the thread was running.
This can be queried only when the attribute object is bound to a thread.
.ie n .IP """PTH_ATTR_START_FUNC"" (read-only) [""void *(*)(void *)""]" 4
.el .IP "\f(CWPTH_ATTR_START_FUNC\fR (read-only) [\f(CWvoid *(*)(void *)\fR]" 4
.IX Item "PTH_ATTR_START_FUNC (read-only) [void *(*)(void *)]"
The thread start function.
This can be queried only when the attribute object is bound to a thread.
.ie n .IP """PTH_ATTR_START_ARG"" (read-only) [""void *""]" 4
.el .IP "\f(CWPTH_ATTR_START_ARG\fR (read-only) [\f(CWvoid *\fR]" 4
.IX Item "PTH_ATTR_START_ARG (read-only) [void *]"
The thread start argument.
This can be queried only when the attribute object is bound to a thread.
.ie n .IP """PTH_ATTR_STATE"" (read-only) [""pth_state_t""]" 4
.el .IP "\f(CWPTH_ATTR_STATE\fR (read-only) [\f(CWpth_state_t\fR]" 4
.IX Item "PTH_ATTR_STATE (read-only) [pth_state_t]"
The scheduling state of the thread, i.e., either \f(CW\*(C`PTH_STATE_NEW\*(C'\fR,
\&\f(CW\*(C`PTH_STATE_READY\*(C'\fR, \f(CW\*(C`PTH_STATE_WAITING\*(C'\fR, or \f(CW\*(C`PTH_STATE_DEAD\*(C'\fR.
This can be queried only when the attribute object is bound to a thread.
.ie n .IP """PTH_ATTR_EVENTS"" (read-only) [""pth_event_t""]" 4
.el .IP "\f(CWPTH_ATTR_EVENTS\fR (read-only) [\f(CWpth_event_t\fR]" 4
.IX Item "PTH_ATTR_EVENTS (read-only) [pth_event_t]"
The event ring the thread is waiting for.
This can be queried only when the attribute object is bound to a thread.
.ie n .IP """PTH_ATTR_BOUND"" (read-only) [""int""]" 4
.el .IP "\f(CWPTH_ATTR_BOUND\fR (read-only) [\f(CWint\fR]" 4
.IX Item "PTH_ATTR_BOUND (read-only) [int]"
Whether the attribute object is bound (\f(CW\*(C`TRUE\*(C'\fR) to a thread or not (\f(CW\*(C`FALSE\*(C'\fR).
.PP
The following \s-1API\s0 functions can be used to handle the attribute objects:
.IP "pth_attr_t \fBpth_attr_of\fR(pth_t \fItid\fR);" 4
.IX Item "pth_attr_t pth_attr_of(pth_t tid);"
This returns a new attribute object \fIbound\fR to thread \fItid\fR. Any queries on
this object directly fetch attributes from \fItid\fR, and attribute modifications
directly change \fItid\fR. Use such attribute objects to modify existing threads.
.IP "pth_attr_t \fBpth_attr_new\fR(void);" 4
.IX Item "pth_attr_t pth_attr_new(void);"
This returns a new \fIunbound\fR attribute object. An implicit \fIpth_attr_init()\fR is
done on it. Any queries on this object just fetch stored attributes from it,
and attribute modifications just change the stored attributes. Use such
attribute objects to pre-configure attributes for threads still to be spawned.
.IP "int \fBpth_attr_init\fR(pth_attr_t \fIattr\fR);" 4
.IX Item "int pth_attr_init(pth_attr_t attr);"
This initializes an attribute object \fIattr\fR to the default values:
\&\f(CW\*(C`PTH_ATTR_PRIO\*(C'\fR := \f(CW\*(C`PTH_PRIO_STD\*(C'\fR, \f(CW\*(C`PTH_ATTR_NAME\*(C'\fR := `\f(CW\*(C`unknown\*(C'\fR',
\&\f(CW\*(C`PTH_ATTR_DISPATCHES\*(C'\fR := \f(CW0\fR, \f(CW\*(C`PTH_ATTR_JOINABLE\*(C'\fR := \f(CW\*(C`TRUE\*(C'\fR,
\&\f(CW\*(C`PTH_ATTR_CANCEL_STATE\*(C'\fR := \f(CW\*(C`PTH_CANCEL_DEFAULT\*(C'\fR,
\&\f(CW\*(C`PTH_ATTR_STACK_SIZE\*(C'\fR := 64*1024 and
\&\f(CW\*(C`PTH_ATTR_STACK_ADDR\*(C'\fR := \f(CW\*(C`NULL\*(C'\fR. All other \f(CW\*(C`PTH_ATTR_*\*(C'\fR attributes are
read-only attributes and don't receive default values in \fIattr\fR, because they
exist only for bound attribute objects.
.IP "int \fBpth_attr_set\fR(pth_attr_t \fIattr\fR, int \fIfield\fR, ...);" 4
.IX Item "int pth_attr_set(pth_attr_t attr, int field, ...);"
This sets the attribute field \fIfield\fR in \fIattr\fR to a value
specified as an additional argument on the variable argument
list. The following attribute \fIfields\fR and argument pairs can
be used:
.Sp
.Vb 7
\& PTH_ATTR_PRIO int
\& PTH_ATTR_NAME char *
\& PTH_ATTR_DISPATCHES int
\& PTH_ATTR_JOINABLE int
\& PTH_ATTR_CANCEL_STATE unsigned int
\& PTH_ATTR_STACK_SIZE unsigned int
\& PTH_ATTR_STACK_ADDR char *
.Ve
.IP "int \fBpth_attr_get\fR(pth_attr_t \fIattr\fR, int \fIfield\fR, ...);" 4
.IX Item "int pth_attr_get(pth_attr_t attr, int field, ...);"
This retrieves the attribute field \fIfield\fR in \fIattr\fR and stores its
value in the variable specified through a pointer in an additional
argument on the variable argument list. The following \fIfields\fR and
argument pairs can be used:
.Sp
.Vb 10
\& PTH_ATTR_PRIO int *
\& PTH_ATTR_NAME char **
\& PTH_ATTR_DISPATCHES int *
\& PTH_ATTR_JOINABLE int *
\& PTH_ATTR_CANCEL_STATE unsigned int *
\& PTH_ATTR_STACK_SIZE unsigned int *
\& PTH_ATTR_STACK_ADDR char **
\& PTH_ATTR_TIME_SPAWN pth_time_t *
\& PTH_ATTR_TIME_LAST pth_time_t *
\& PTH_ATTR_TIME_RAN pth_time_t *
\& PTH_ATTR_START_FUNC void *(**)(void *)
\& PTH_ATTR_START_ARG void **
\& PTH_ATTR_STATE pth_state_t *
\& PTH_ATTR_EVENTS pth_event_t *
\& PTH_ATTR_BOUND int *
.Ve
.IP "int \fBpth_attr_destroy\fR(pth_attr_t \fIattr\fR);" 4
.IX Item "int pth_attr_destroy(pth_attr_t attr);"
This destroys an attribute object \fIattr\fR. After this, \fIattr\fR is no
longer a valid attribute object.
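.PP
Putting these functions together, the following sketch pre-configures an
unbound attribute object and spawns a detached thread with it
(\fIworker()\fR is a placeholder function, not part of the \s-1API\s0):
.Sp
```c
/* Illustrative attribute handling; requires libpth. */
#include <pth.h>

static void *worker(void *arg) { return arg; }    /* placeholder */

static pth_t spawn_detached_worker(void)
{
    pth_attr_t attr = pth_attr_new();             /* unbound object */
    pth_t      tid;

    pth_attr_set(attr, PTH_ATTR_NAME, "worker");
    pth_attr_set(attr, PTH_ATTR_PRIO, PTH_PRIO_STD);
    pth_attr_set(attr, PTH_ATTR_JOINABLE, FALSE); /* detached thread */
    tid = pth_spawn(attr, worker, NULL);
    pth_attr_destroy(attr);   /* the spawned thread is unaffected */
    return tid;               /* NULL on error */
}
```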
.SS "Thread Control"
.IX Subsection "Thread Control"
The following functions control the threading itself and make up the main \s-1API\s0
of the \fBPth\fR library.
.IP "pth_t \fBpth_spawn\fR(pth_attr_t \fIattr\fR, void *(*\fIentry\fR)(void *), void *\fIarg\fR);" 4
.IX Item "pth_t pth_spawn(pth_attr_t attr, void *(*entry)(void *), void *arg);"
This spawns a new thread with the attributes given in \fIattr\fR (or
\&\f(CW\*(C`PTH_ATTR_DEFAULT\*(C'\fR for default attributes \- which means that thread priority,
joinability and cancel state are inherited from the current thread) with the
starting point at routine \fIentry\fR; the dispatch count is not inherited from
the current thread if \fIattr\fR is not specified \- rather, it is initialized
to zero. This entry routine is called as `pth_exit(\fIentry\fR(\fIarg\fR))' inside
the new thread unit, i.e., \fIentry\fR's return value is fed to an implicit
\&\fIpth_exit\fR\|(3). So the thread can also exit by just returning. Nevertheless
the thread can also exit explicitly at any time by calling \fIpth_exit\fR\|(3). But
keep in mind that calling the \s-1POSIX\s0 function \fIexit\fR\|(3) still terminates the
complete process and not just the current thread.
.Sp
There is no \fBPth\fR\-internal limit on the number of threads one can spawn,
except the limit implied by the available virtual memory. \fBPth\fR internally
keeps track of threads in dynamic data structures. The function returns
\&\f(CW\*(C`NULL\*(C'\fR on error.
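.Sp
A minimal sketch of spawning with default attributes (\fIticker()\fR is a
placeholder function):
.Sp
```c
/* Illustrative pth_spawn() call with default attributes; requires libpth. */
#include <stdio.h>
#include <pth.h>

static void *ticker(void *arg)
{
    fprintf(stderr, "hello from %s\n", (char *)arg);
    return NULL;              /* equivalent to pth_exit(NULL) */
}

static int start_ticker(void)
{
    pth_t tid = pth_spawn(PTH_ATTR_DEFAULT, ticker, "ticker");
    if (tid == NULL) {        /* NULL indicates an error */
        perror("pth_spawn");
        return -1;
    }
    return 0;
}
```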
.IP "int \fBpth_once\fR(pth_once_t *\fIctrlvar\fR, void (*\fIfunc\fR)(void *), void *\fIarg\fR);" 4
.IX Item "int pth_once(pth_once_t *ctrlvar, void (*func)(void *), void *arg);"
This is a convenience function which uses a control variable of type
\&\f(CW\*(C`pth_once_t\*(C'\fR to make sure a constructor function \fIfunc\fR is called only once
as `\fIfunc\fR(\fIarg\fR)' in the system. In other words: Only the first call to
\&\fIpth_once\fR\|(3) by any thread in the system succeeds. The variable referenced via
\&\fIctrlvar\fR should be declared as `\f(CW\*(C`pth_once_t\*(C'\fR \fIvariable-name\fR =
\&\f(CW\*(C`PTH_ONCE_INIT\*(C'\fR;' before calling this function.
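.Sp
A typical one-time initialization sketch (\fIinit_subsystem()\fR is a
placeholder function):
.Sp
```c
/* Illustrative one-time initialization via pth_once(); requires libpth. */
#include <pth.h>

static pth_once_t init_once = PTH_ONCE_INIT;

static void init_subsystem(void *arg)
{
    /* runs exactly once, no matter how many threads reach ensure_init() */
    (void)arg;
}

static void ensure_init(void)
{
    pth_once(&init_once, init_subsystem, NULL);
}
```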
.IP "pth_t \fBpth_self\fR(void);" 4
.IX Item "pth_t pth_self(void);"
This just returns the unique thread handle of the currently running thread.
This handle itself has to be treated as an opaque entity by the application.
It's usually used as an argument to other functions which require an argument
of type \f(CW\*(C`pth_t\*(C'\fR.
.IP "int \fBpth_suspend\fR(pth_t \fItid\fR);" 4
.IX Item "int pth_suspend(pth_t tid);"
This suspends a thread \fItid\fR until it is manually resumed again via
\&\fIpth_resume\fR\|(3). For this, the thread is moved to the \fB\s-1SUSPENDED\s0\fR queue
and this way is completely out of the scheduler's event handling and
thread dispatching scope. Suspending the current thread is not allowed.
The function returns \f(CW\*(C`TRUE\*(C'\fR on success and \f(CW\*(C`FALSE\*(C'\fR on errors.
.IP "int \fBpth_resume\fR(pth_t \fItid\fR);" 4
.IX Item "int pth_resume(pth_t tid);"
This function resumes a previously suspended thread \fItid\fR, i.e., \fItid\fR
has to reside on the \fB\s-1SUSPENDED\s0\fR queue. The thread is moved to the
\&\fB\s-1NEW\s0\fR, \fB\s-1READY\s0\fR or \fB\s-1WAITING\s0\fR queue (depending on what its state
was when the \fIpth_suspend\fR\|(3) call was made) and this way again enters the
event handling and thread dispatching scope of the scheduler. The
function returns \f(CW\*(C`TRUE\*(C'\fR on success and \f(CW\*(C`FALSE\*(C'\fR on errors.
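.Sp
A sketch of temporarily taking another thread out of the scheduler's scope
(the calling thread must not suspend itself):
.Sp
```c
/* Illustrative suspend/resume of another thread; requires libpth. */
#include <pth.h>

static int pause_and_continue(pth_t tid)
{
    if (!pth_suspend(tid))    /* move tid to the SUSPENDED queue */
        return -1;
    /* ... tid consumes no scheduling resources here ... */
    if (!pth_resume(tid))     /* back to its previous queue */
        return -1;
    return 0;
}
```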
.IP "int \fBpth_raise\fR(pth_t \fItid\fR, int \fIsig\fR)" 4
.IX Item "int pth_raise(pth_t tid, int sig)"
This function raises a signal for delivery to thread \fItid\fR only. When one
just raises a signal via \fIraise\fR\|(3) or \fIkill\fR\|(2), it is delivered to an
arbitrary thread which does not have this signal blocked. With \fIpth_raise\fR\|(3)
one can send a signal to a particular thread, and it is guaranteed that only
this thread gets the signal delivered. But keep in mind that the signal's
\&\fIaction\fR is nevertheless still configured \fIprocess\fR\-wide. When \fIsig\fR is 0,
plain thread checking is performed, i.e., `\f(CW\*(C`pth_raise(tid, 0)\*(C'\fR' returns
\&\f(CW\*(C`TRUE\*(C'\fR when thread \fItid\fR still exists in the \fB\s-1PTH\s0\fR system but
doesn't send any signal to it.
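.Sp
A sketch of both uses, signal delivery to one particular thread and plain
existence checking:
.Sp
```c
/* Illustrative pth_raise() usage; requires libpth. */
#include <signal.h>
#include <pth.h>

static int thread_alive(pth_t tid)
{
    return pth_raise(tid, 0);       /* sig 0: existence check only */
}

static int interrupt_thread(pth_t tid)
{
    return pth_raise(tid, SIGUSR1); /* delivered to tid only */
}
```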
.IP "int \fBpth_yield\fR(pth_t \fItid\fR);" 4
.IX Item "int pth_yield(pth_t tid);"
This explicitly yields back the execution control to the scheduler thread.
Usually the execution is implicitly transferred back to the scheduler when a
thread waits for an event. But when a thread has to do larger \s-1CPU\s0 bursts, it
can be reasonable to interrupt it explicitly by doing a few \fIpth_yield\fR\|(3) calls
to give other threads a chance to execute, too. This obviously is the
cooperative part of \fBPth\fR. A thread does \fInot have\fR to yield execution, of
course. But when you want to program a server application with good response
times, the threads should be cooperative, i.e., they should split their \s-1CPU\s0
bursts into smaller units with this call.
.Sp
Usually one specifies \fItid\fR as \f(CW\*(C`NULL\*(C'\fR to indicate to the scheduler that it
can freely decide which thread to dispatch next. But if one wants to indicate
to the scheduler that a particular thread should be favored on the next
dispatching step, one can specify this thread explicitly. This allows the
usage of the old concept of \fIcoroutines\fR where a thread/routine switches to a
particular cooperating thread. If \fItid\fR is not \f(CW\*(C`NULL\*(C'\fR and points to a \fInew\fR
or \fIready\fR thread, it is guaranteed that this thread receives execution
control on the next dispatching step. If \fItid\fR is in a different state (that
is, not in \f(CW\*(C`PTH_STATE_NEW\*(C'\fR or \f(CW\*(C`PTH_STATE_READY\*(C'\fR) an error is reported.
.Sp
The function usually returns \f(CW\*(C`TRUE\*(C'\fR for success and only \f(CW\*(C`FALSE\*(C'\fR (with
\&\f(CW\*(C`errno\*(C'\fR set to \f(CW\*(C`EINVAL\*(C'\fR) if \fItid\fR specified an invalid thread or a
thread which is not in the new or ready state.
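.Sp
A sketch of a cooperative \s-1CPU\s0 burst which yields periodically
(\fIprocess_item()\fR is a placeholder function):
.Sp
```c
/* Illustrative cooperative yielding during a long CPU burst;
 * requires libpth. */
#include <pth.h>

static void process_all(int nitems)
{
    int i;
    for (i = 0; i < nitems; i++) {
        /* process_item(i);   -- placeholder for real work */
        if (i % 100 == 0)
            pth_yield(NULL);  /* let the scheduler pick any ready thread */
    }
}
```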
.IP "int \fBpth_nap\fR(pth_time_t \fInaptime\fR);" 4
.IX Item "int pth_nap(pth_time_t naptime);"
This function suspends the execution of the current thread until \fInaptime\fR
has elapsed. \fInaptime\fR is of type \f(CW\*(C`pth_time_t\*(C'\fR and this way theoretically
has a resolution of one microsecond. In practice you should neither rely on this
nor that the thread is awakened exactly after \fInaptime\fR has elapsed. It's