Adi Levin's Blog for programmers

May 31, 2009

What is a process?

Filed under: Multithreading — Adi Levin @ 9:17 pm

To understand multithreading, it is important to understand the nature of processes and threads. A process is a running instance of a program. A single program can be run by several processes at once. For example, a developer may use several instances of Visual Studio in parallel. Each instance runs the same program, but in the context of a different process.

A process contains the resources needed to run a program. The most important of these are:

  1. Virtual address space
  2. Executable code (images of the EXE and DLLs)
  3. Handles to Windows objects (windows, controls, threads, files, events, etc.)
  4. At least one thread.

A thread is an entity that can be scheduled for execution. Whenever a process starts, it creates one thread, called the primary thread. When that thread ends its execution, the process ends. A process may have many threads, all of which share the resources of the process.
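To make this concrete, here is a minimal Win32 sketch (WorkerProc and g_shared are illustrative names, not part of any API): the primary thread creates a second thread with CreateThread, and since both threads belong to the same process, they share its address space and can read and write the same variables.

    #include <windows.h>
    #include <stdio.h>

    int g_shared = 0;   // lives in the process' address space, visible to all of its threads

    DWORD WINAPI WorkerProc(LPVOID param)
    {
        g_shared = 42;  // the worker writes to memory owned by the process
        return 0;
    }

    int main()
    {
        // main() runs on the primary thread; when it returns, the process ends.
        HANDLE hWorker = CreateThread(NULL, 0, WorkerProc, NULL, 0, NULL);
        WaitForSingleObject(hWorker, INFINITE);  // wait for the worker to finish
        CloseHandle(hWorker);                    // thread handles are process resources too
        printf("g_shared = %d\n", g_shared);     // prints 42
        return 0;
    }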

Virtual Address Space

I should say a few words about the meaning of virtual address space. For a long time I did not understand the difference between physical memory and virtual memory, and the term address space didn't mean anything to me. After asking different people for explanations, I realized that many others don't understand it much better.

First of all, it is important to distinguish between memory and address space. Address space is a range of addresses that can be used as pointers to data. When you allocate an array of numbers, you allocate a consecutive range of addresses in which you can write and read data. But the data itself is stored either in memory or on the disk (page file), depending on the way that the operating system manages the memory. The operating system maps the address of every valid pointer to a place in memory or in the page file. Moreover, at the hardware level, data from the process' address space can be held in the processor cache for quick retrieval. A programmer is almost always unaware of where the data really resides. A programmer works with addresses, not directly with physical memory.
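The Win32 API actually lets you see this distinction directly. The following sketch (error handling omitted) uses VirtualAlloc to reserve a large range of addresses without asking the operating system for any storage at all; only the pages that are later committed can actually hold data.

    #include <windows.h>

    int main()
    {
        // Reserve 256MB of address space. No physical memory or page-file
        // space is consumed yet - we have only claimed a range of addresses.
        void* base = VirtualAlloc(NULL, 256*1024*1024, MEM_RESERVE, PAGE_NOACCESS);

        // Commit the first 64KB. Only now does the OS promise storage for it,
        // in RAM or in the page file - the program never knows which.
        char* usable = (char*)VirtualAlloc(base, 64*1024, MEM_COMMIT, PAGE_READWRITE);
        usable[0] = 'x';    // fine: this page is committed

        // Touching a reserved-but-uncommitted address would crash:
        // ((char*)base)[128*1024] = 'x';   // access violation

        VirtualFree(base, 0, MEM_RELEASE);
        return 0;
    }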

In Win32, pointers are 32 bits long (4 bytes), which means they can take only 2^32 different values, i.e. they can address 4GB. The upper 2GB of that range is reserved for the operating system. This means that we have only the lower 2GB of address space to use in our program (for our data, as well as for the images of the EXE and the DLLs, which are loaded into it when they are needed).

The fact that the address space is only 2GB long has nothing to do with the amount of physical memory on the machine. If the physical memory (RAM) is only 256MB, our program can still allocate arrays whose total size approaches 2GB, but accessing the data in these arrays will result in paging (copying data between physical memory and the page file on the disk). If the physical memory is 4GB, our program still won't be able to exploit more than 2GB of it.

When an allocation (a malloc call) fails, this doesn't mean that we ran out of physical memory. It also doesn't mean that we requested more than 2GB of address space. It only means that we failed to find a consecutive range of addresses of the required size, inside our 2GB address space, that has not been allocated yet. If your program performs a lot of allocations and deallocations, the address space can become fragmented, i.e. even if you're only using a total of 1GB of virtual address space, it can be spread over the 2GB range in a way that doesn't leave any consecutive free range of more than 1MB. In this situation, any attempt to allocate an array larger than 1MB will fail.
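You can observe this with a small experiment (a sketch, not production code): repeatedly halve the request size until malloc succeeds, which gives a rough estimate of the largest contiguous block of address space still available. In a fragmented address space this number can be far smaller than the total amount of free memory.

    #include <stdlib.h>
    #include <stdio.h>

    int main()
    {
        size_t size = (size_t)1 << 31;   // start by asking for 2GB
        void* p = NULL;
        while (size > 0 && (p = malloc(size)) == NULL)
            size /= 2;                   // no contiguous range that big - try half
        printf("largest contiguous block: about %u bytes\n", (unsigned)size);
        free(p);
        return 0;
    }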

When your program suffers from such allocation problems, you don't necessarily have to reduce your memory consumption, but you may be forced to break your long arrays into smaller blocks. Another solution is to move to 64-bit. 64-bit programs (e.g. on the Vista 64 operating system) use 64 bits (8 bytes) to represent pointers, so their address space is of size 2^64, which is a huge number. This means that they are very unlikely to fail to allocate large consecutive arrays.
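For example, a "chunked" array keeps its elements in many fixed-size blocks instead of one huge contiguous one, so it only ever needs small consecutive ranges of addresses. Here is a minimal sketch of the idea (the class name and chunk size are arbitrary); std::deque uses a similar strategy internally.

    #include <vector>
    #include <cstddef>

    class ChunkedArray {
        static const std::size_t kChunkSize = 65536;   // 64K doubles per chunk
        std::vector< std::vector<double> > m_chunks;   // many small allocations
        std::size_t m_size;
    public:
        ChunkedArray(std::size_t n) : m_size(n) {
            for (std::size_t i = 0; i < n; i += kChunkSize) {
                std::size_t len = (n - i < kChunkSize) ? (n - i) : kChunkSize;
                m_chunks.push_back(std::vector<double>(len));
            }
        }
        double& operator[](std::size_t i) {
            return m_chunks[i / kChunkSize][i % kChunkSize];
        }
        std::size_t size() const { return m_size; }
    };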

A good link about Windows memory management

If you want to learn more about memory-related terms, such as virtual memory, page file, the working set of a process, etc., take a look at this link: http://shsc.info/WindowsMemoryManagement. Also, refer to the MSDN article on Memory Management in Windows NT.


May 30, 2009

Why do we have to use multithreading?

Multithreading has existed for a long while now, since long before I ever thought about it. But only in recent years has it become an essential part of the knowledge that, I believe, every programmer should have. In particular, for people like me, who make a living writing complicated computational algorithms, multithreading is a basic tool.

Performance

Back in the good old days, we counted on CPUs to double their performance every year. We knew that the code we write today would execute much faster in the future. This is no longer the case. CPUs can be made to work faster, but then they consume a lot of power. The issue of power and cooling is important for many configurations, from notebooks to servers. Intel, which makes processor chips, now talks about power-performance instead of just performance, i.e. it looks for ways to increase the ratio between performance and power instead of just ways to increase performance.

It just so happens that a multi-processor or multi-core machine has a much better power-performance ratio than a single-processor machine with the same performance. This is why every computer we buy these days is at least dual-core. A dual-core machine can perform two computing tasks simultaneously, and quad-core machines are also becoming common nowadays.

This makes our lives, as programmers, more interesting. Traditionally, the programs we write are sequential: the program is a sequence of instructions that are executed one after the other on the CPU. If we keep doing what we've always done, and ignore the fact that our computer now has more than one computing core, we will effectively use only one half (or one quarter) of the computing power in our hands. For many applications, it is crucial to harness all of the available computing power.
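As a small illustration, here is one common way to do that (this sketch uses OpenMP, which Visual C++ supports via the /openmp switch; the loop itself is arbitrary): the iterations are split between all available cores, and the partial results are combined at the end.

    #include <omp.h>
    #include <stdio.h>

    int main()
    {
        const int n = 10000000;
        double sum = 0.0;

        // Each core receives a share of the iterations; the reduction
        // clause merges the per-thread partial sums into one result.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; ++i)
            sum += 1.0 / (1.0 + i);

        printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }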

Scalability is also an issue. In the context of multi-core machines, scalability means that the same program should run faster on a machine with more processing cores. As a software provider, you want to be able to tell your customers that if they are unhappy with the performance, they can buy a stronger computer and get better performance. Since CPUs are no longer increasing in frequency, only in the number of processing cores, scalability is not trivial at all.
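A scalable program should therefore not hard-code its number of worker threads. On Windows you can ask the operating system how many cores are available and size your thread pool accordingly, as in this small sketch:

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        printf("processing cores available: %lu\n", si.dwNumberOfProcessors);
        // A scalable program would create roughly this many worker threads,
        // so the same binary automatically runs faster on a stronger machine.
        return 0;
    }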

Responsiveness

Any program with a user interface works as follows: wait for input (e.g. from the mouse, the keyboard, a Windows message, etc.), perform a computation, show some output, and go back to waiting for input. In a single-threaded program, if the computation takes 10 seconds, the user cannot interact with the application for those 10 seconds. The poor user cannot abort the computation in the middle, or even minimize the window, so even the desktop is not accessible. During that time, the window will not refresh properly either.

A responsive application always allows users to abort computations in the middle and minimize the window, and it keeps drawing its window properly. Ideally, the user should be able to fully interact with the application during a lengthy computation. Take Visual Studio as an example: while building (compiling) a project, you can open files, edit them, and, of course, stop the build process in the middle.

A good way to achieve such responsiveness is to allocate different threads for the user interface and for long computations. The threads communicate with each other, but run in parallel to one another.
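Here is a minimal sketch of the pattern (Win32; ComputeProc, StartComputation and AbortComputation are illustrative names): the UI thread starts the computation on a worker thread and stays free to process input, and a shared flag lets the user abort the computation in the middle.

    #include <windows.h>

    volatile LONG g_abort = 0;   // set by the UI thread, polled by the worker

    DWORD WINAPI ComputeProc(LPVOID param)
    {
        for (int step = 0; step < 1000000; ++step) {
            if (InterlockedCompareExchange(&g_abort, 0, 0))  // atomic read
                return 1;        // the user asked to abort - stop cleanly
            // ... perform one small piece of the lengthy computation ...
        }
        return 0;                // finished normally
    }

    // Called from the UI thread, e.g. from a menu or button handler:
    HANDLE StartComputation()
    {
        InterlockedExchange(&g_abort, 0);
        return CreateThread(NULL, 0, ComputeProc, NULL, 0, NULL);
    }

    // Called from the UI thread when the user clicks "Cancel":
    void AbortComputation()
    {
        InterlockedExchange(&g_abort, 1);
    }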
