9.2. Multi-threaded Network Server Programming
This section relates to the material from chapter 21 of The Text Book.
First, let’s define some operating systems concepts that are needed for server programming.
9.2.1. Processes and Threads
- A process is a program in execution
- A thread is one line of execution within a process. A process may contain many threads.
- Thread creation has much less overhead than process creation, especially on Windows.
- Each thread has its own stack (local variables), but shares global variables with the other threads.
- Global variables allow threads to share information and communicate with one another.
- Shared data introduces the need for synchronization, which is a can of worms.
- See Multi-threaded Servers for a discussion of the use of threads in developing a chat server.
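To make the stack-versus-globals distinction concrete, here is a small sketch (the names `total`, `worker`, and the thread count are mine, not from the notes): each thread keeps `local_sum` on its own stack, while all of them share the single global `total`, whose update is protected by a lock as a preview of the synchronization discussed below.

```python
import threading

total = 0                        # global: shared by every thread
lock = threading.Lock()          # shared data needs synchronization

def worker(n):
    global total
    local_sum = 0                # local: lives on this thread's own stack
    for _ in range(n):
        local_sum += 1
    with lock:                   # protect the update of the shared global
        total += local_sum

threads = [threading.Thread(target=worker, args=[100000]) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)                     # 400000
```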
9.2.2. Creating threads in Python
- Create Thread instance, passing in a function
- Create Thread instance, passing in a class
- Subclass Thread and create subclass instance
The first method is sufficient for most of our needs. The chat server (discussed later) can be implemented with each thread being a function, but the graphical chat client program, which I developed, uses option three, a subclass of Thread. The Thread constructor takes, among others, these arguments:
- target (a callable, usually a function) – identifies the code for the new thread to execute
- args (list) – the arguments to pass to the target function
- setDaemon(n) – manages the persistence of the child thread relative to the parent. An n of True or 1 means the child thread dies if the parent dies first; an n of False or 0 means the child thread can keep running after the parent is finished. (Current Python spells this as the daemon attribute.)
- start() – begins execution of the thread now.
- join() – the current (parent) thread suspends until the child thread terminates.
import threading
...
t = threading.Thread(target=threadcode, args=[arg1, arg2])
t.daemon = True     # older code spells this t.setDaemon(1)
t.start()
...
t.join()
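For completeness, a minimal sketch of option three (the class name `Greeter` and its attributes are invented for illustration): subclass Thread and override run(), which start() will invoke in the new thread.

```python
import threading

class Greeter(threading.Thread):
    def __init__(self, who):
        threading.Thread.__init__(self)   # must call the base initializer
        self.who = who
        self.result = None

    def run(self):                        # start() invokes run() in the new thread
        self.result = "Hello from " + self.who

g = Greeter("worker-1")
g.start()
g.join()                                  # wait for the child to finish
print(g.result)                           # Hello from worker-1
```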
9.2.3. Three parts of a multi-threaded server
- Listen and accept socket connections
- Create and start child threads
- Infinite loop
  - Receive data from client
  - Send data to client
  - Call synchronized code as needed
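The parts above can be sketched as a minimal echo server (the function names, buffer size, and port choice are mine, not from the notes; a real chat server would call synchronized code where the comment indicates):

```python
import socket
import threading

def handle_client(conn):
    """Child thread: loop receiving from and sending to one client."""
    with conn:
        while True:
            data = conn.recv(1024)        # receive data from client
            if not data:                  # empty bytes: client closed
                break
            conn.sendall(data)            # send data to client (echo)
            # ...call synchronized code here as needed...

def serve(server_sock):
    """Main thread's job: accept connections, start child threads."""
    while True:
        conn, addr = server_sock.accept()
        t = threading.Thread(target=handle_client, args=[conn])
        t.daemon = True
        t.start()

server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server_sock.listen(5)
threading.Thread(target=serve, args=[server_sock], daemon=True).start()

# Quick self-test from the same process:
host, port = server_sock.getsockname()
client = socket.create_connection((host, port))
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
print(reply)                              # b'ping'
```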
9.2.4. Synchronized access to shared data
- Provides protected access to shared global data, which is often held in a global class containing both the synchronization algorithms and the data itself.
- Uses synchronization tools: locks, semaphores, and condition variables (also called monitors). See Synchronization tools (some of them), below.
Only one thread can update global data at a time
Multiple threads reading global data is allowed, as long as it is not possible for the data to change while being read.
Critical code section – that section of the code which accesses the shared global data.
Single-thread access to the critical section is easy: just acquire and release one lock.
Coordinating multiple threads' access to a critical section is tricky.
There are known solutions to many challenging synchronization problems.
The hardest part is framing your problem in terms of a solved classic synchronization problem. Classic problems include:
- producers and consumers
- readers and writers
- sleepy barber
- three smokers
- one lane bridge
- dining philosophers
- etc., ...
Take an Operating Systems class or study parallel programming; many reference books explain these classic problems.
Some Python modules, such as Queue (spelled queue in Python 3), provide implementations of classic synchronization problems.
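As a taste of that: the Python 3 queue module's Queue class is a ready-made, thread-safe bounded buffer, so a producer-consumer pair needs no explicit locks. A sketch (the names `producer`, `consumer`, and the None-sentinel convention are my choices):

```python
import queue
import threading

q = queue.Queue(maxsize=3)       # a bounded buffer: put() blocks when full

def producer():
    for i in range(5):
        q.put(i)                 # blocks until there is room
    q.put(None)                  # sentinel: tell the consumer to stop

results = []

def consumer():
    while True:
        item = q.get()           # blocks until an item is available
        if item is None:
            break
        results.append(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(results)                   # [0, 1, 2, 3, 4]
```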
9.2.5. Synchronization tools (some of them)
- Lock – a simple lock used to limit access to one thread at a time
- Semaphore – a lock with a counter that admits up to n threads at once
- Condition – a higher level abstraction of the Semaphore. It allows a thread to wait and be signaled by another thread based on some condition possibly becoming True.
Here is how to restrict a critical section to one thread at a time:
import threading

L = threading.Lock()
L.acquire()
# The critical section
...
L.release()
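The same pattern is often written with a with statement, which guarantees the release even if the critical section raises an exception. A runnable sketch (the shared list and helper name are mine):

```python
import threading

L = threading.Lock()
shared = []                      # stands in for shared global data

def append_safely(value):
    with L:                      # acquire() on entry, release() on exit
        shared.append(value)     # the critical section

threads = [threading.Thread(target=append_safely, args=[i]) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(shared))               # 10
```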
The following code allows up to five threads in the critical section at one time:
import threading

S = threading.Semaphore(5)
S.acquire()
# The critical section
...
S.release()
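To see the counting behavior, the sketch below (all names are mine) tracks how many threads are inside the section at once; with a count of two, the semaphore never lets more than two in simultaneously.

```python
import threading
import time

S = threading.Semaphore(2)       # admit at most two threads at once
active = 0                       # threads currently inside the section
peak = 0                         # highest value active ever reached
counter_lock = threading.Lock()  # protects the two counters above

def worker():
    global active, peak
    with S:                      # Semaphore also supports "with"
        with counter_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)         # simulate work in the critical section
        with counter_lock:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)                      # never more than 2
```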
Condition provides a level of abstraction which can greatly simplify the solution to many problems. Notice that the wait() statement is inside a while loop. This ensures that whatever logical condition we are waiting on actually still holds once the thread is running: it could be that another thread saw a condition which prompted it to issue a notify() statement, but by the time our thread returned from wait(), the condition was no longer true. The evaluation of the condition must be done while holding a mutual exclusion lock. It should be pointed out that the Python Condition object contains a mutual exclusion lock, which may be manually acquired and released. The wait() method releases the lock, and then blocks until it is awakened by a notify() or notifyAll() call for the same condition variable in another thread. Once awakened, it re-acquires the lock and returns.

Condition is especially useful for problems such as the producer-consumer (bounded buffer) problem, where each thread may only proceed if certain resources are available. The example below uses a global boolean variable to coordinate access to the critical section, but a boolean function or class method could also be used.
import threading

available = True                 # shared boolean; True when the section is free
C = threading.Condition()

C.acquire()
while not available:
    C.wait()
available = False
C.release()

# The critical section. Note that no locks are held.

C.acquire()
available = True
C.notify()
C.release()
# alternately, we could notify all waiting threads
# C.notifyAll()   (spelled notify_all() in current Python)
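Putting the pieces together, here is a runnable sketch of the producer-consumer (bounded buffer) problem solved directly with a Condition (the buffer capacity, the names, and the Python 3 spellings wait/notify_all are my choices, not from the notes). Note how each side re-checks its condition in a while loop after every wakeup, exactly as discussed above.

```python
import threading

CAPACITY = 2
buf = []                         # the shared bounded buffer
cond = threading.Condition()     # one condition guards both directions

def producer():
    for i in range(5):
        with cond:                          # acquire the condition's lock
            while len(buf) >= CAPACITY:     # re-check after every wakeup
                cond.wait()
            buf.append(i)
            cond.notify_all()               # wake any waiting consumer

consumed = []

def consumer():
    for _ in range(5):
        with cond:
            while not buf:                  # wait until an item exists
                cond.wait()
            consumed.append(buf.pop(0))
            cond.notify_all()               # wake any waiting producer

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)                  # [0, 1, 2, 3, 4]
```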