ITea Talks with Boyko Zhelev: Multi-tasking and Multi-threading in Modern Operating Systems


Modern operating systems are engineered for efficiency, responsiveness, and reliability. Two key pillars that underpin these objectives are multi-tasking and multi-threading. Although they both enable concurrent operations, they approach parallel execution from different angles and serve distinct purposes within the computing landscape.

What are the main differences between Multi-tasking and Multi-threading? Why is it important to understand them? Why do we use threads? What are race conditions and how do we prevent them? What is important to know about non-volatile and volatile variables? What should we know about atomic variables? When should we use them? Why is the topic about Concurrent and Multi-threading programming with Java so important? Today, we answer these questions with Boyko Zhelev - Software Engineer at adesso Bulgaria.



Hello Boyko! Let's start today's interview with an overview of the main terms Multi-tasking and Multi-threading. What should we know about them?

Hello! Let's take a closer look and compare them.

Multi-tasking

Multi-tasking is the operating system’s ability to run multiple processes concurrently. Each process operates as an independent entity in its own protected memory space, ensuring that one application’s instability or crash does not compromise another. The OS divides CPU time among these processes using a technique known as time-slicing. Here’s how it works:

  • Time-Slicing: Each process is allocated a short burst of CPU time — its “time slice” — in turn. By switching between processes rapidly, the system gives the illusion of simultaneous execution even on a single-core processor.
  • Process Isolation: Each process runs in its own separate memory space with its own allocated resources. This isolation enhances stability and security because processes cannot easily interfere with one another.
  • Benefits:

Enhanced System Efficiency: By juggling several processes, the system minimises idle CPU time.

Improved User Productivity: Users can run a web browser, text editor, and music player concurrently, each operating smoothly.

Optimal Resource Utilisation: The operating system dynamically manages resources, ensuring that every process gets a fair share of computing power.

Visual Metaphor: Imagine a skilled juggler who keeps several balls (processes) in the air. Each ball gets its moment of focus (time slice), and the act of juggling ensures that every ball is managed carefully, creating a seamless performance.

Multi-threading

Multi-threading operates within the realm of a single process. Here, a process is decomposed into multiple threads that share the same memory space and resources. This approach is ideal for tasks where different components need to execute simultaneously while frequently communicating and cooperating with each other.

  • Thread Sharing: Because threads belong to the same process, they can easily exchange information without the overhead of inter-process communication. This shared context makes synchronisation more straightforward.
  • Lightweight Execution: Threads are less resource-intensive than processes. Their creation, synchronisation and context switching occur with lower overhead, making them particularly suitable for tasks that require high concurrency within a single application.
  • Benefits:

Improved Application Performance: Tasks such as rendering a user interface, performing background computations and handling network requests can run concurrently.

Rapid Communication: Sharing the same memory space enables fast and efficient data exchange between threads.

Visual Metaphor: Picture a dedicated team working in a single office. Each team member (thread) has a specialised role, and since they all share the same workspace (memory), collaboration is seamless and efficient, driving the project forward with agility.
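The shared-memory model described above can be sketched in a few lines of Java. The class and task names here are illustrative, not from any particular codebase: two threads of the same process append to one shared buffer with no inter-process communication involved.

```java
// Sketch: two threads of one process sharing the same object in memory.
public class SharedWorkspace {
    // Shared state lives on the heap, visible to every thread of the process.
    // StringBuffer is internally synchronised, so concurrent appends are safe.
    static final StringBuffer log = new StringBuffer();

    public static void main(String[] args) throws InterruptedException {
        Thread renderer = new Thread(() -> log.append("render;"));
        Thread loader = new Thread(() -> log.append("load;"));
        renderer.start();
        loader.start();
        renderer.join(); // wait for both threads to finish
        loader.join();
        System.out.println(log); // order may vary: "render;load;" or "load;render;"
    }
}
```

Note that the two threads exchange data simply by writing to the same object; no serialisation or message passing is needed, which is exactly the "rapid communication" benefit above.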



What do you think are the biggest and most important differences between them? Why is it important to understand them?

Understanding the distinctions between multi-tasking and multi-threading is essential to appreciating their roles in computing:


  • Multi-tasking is ideal when you need to work on different applications simultaneously. For instance, you might be editing a document, streaming music, and browsing the web — all at the same time.
  • Multi-threading excels in scenarios where a single application must handle multiple operations in parallel. A prime example is a web browser that loads images, processes scripts, and manages user interactions all within one cohesive process. 

 

Article content

Why do we use threads?

Threads are a fundamental part of Java's concurrency model. Here's why they are so valuable:

1. Improving Application Performance

Threads enable your application to perform multiple operations at once, thereby improving performance and responsiveness. For example:

  • Parallel Tasks: Tasks that can run in parallel, such as downloading multiple files, can be handled more efficiently.
  • Resource Utilisation: Threads can keep CPU cores busy, improving overall system utilisation.
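The parallel-tasks idea above can be sketched with a fixed thread pool. The class and method names here are illustrative: independent computations are submitted to an ExecutorService, which spreads them across worker threads and lets us collect the results in order.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: running independent tasks in parallel on a small thread pool.
public class ParallelTasks {
    // Squares each input on a pool thread and returns the results in input order.
    public static List<Integer> squares(List<Integer> inputs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Callable<Integer>> tasks = new ArrayList<>();
            for (int n : inputs) tasks.add(() -> n * n); // each task is independent
            List<Integer> results = new ArrayList<>();
            // invokeAll blocks until every task has completed.
            for (Future<Integer> f : pool.invokeAll(tasks)) results.add(f.get());
            return results;
        } finally {
            pool.shutdown(); // always release the pool's threads
        }
    }
}
```

The same pattern applies to downloading multiple files: each download becomes a Callable, and the pool keeps the CPU (and network) busy instead of processing them one by one.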


2. Enhanced User Experience

Threads can keep your user interfaces responsive by offloading time-consuming tasks to background threads. For instance:

  • Smooth UI: A thread can handle UI updates while another thread processes data, ensuring the application remains responsive to user input.
  • Asynchronous Operations: Background threads can manage operations like fetching data from the network without freezing the main UI thread.
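A minimal sketch of this offloading pattern uses CompletableFuture. The fetchData method below is a hypothetical stand-in for a slow network call; the point is that the calling thread is not blocked while it runs.

```java
import java.util.concurrent.CompletableFuture;

// Sketch: offloading a slow operation so the calling thread stays responsive.
public class AsyncFetch {
    // Hypothetical stand-in for a slow network request.
    static String fetchData() {
        try {
            Thread.sleep(100); // simulate network latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "payload";
    }

    public static void main(String[] args) {
        // The fetch runs on a background thread from the common pool.
        CompletableFuture<String> future =
                CompletableFuture.supplyAsync(AsyncFetch::fetchData);
        System.out.println("main thread keeps responding..."); // not blocked
        System.out.println(future.join()); // block only when the result is needed
    }
}
```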


3. Efficient Resource Management

Threads can share resources within a single process, making it easier to manage memory and other resources efficiently:

  • Memory Sharing: Threads of the same process share memory, leading to lower memory consumption compared to using multiple processes.
  • Coordination: Threads can easily communicate and coordinate with each other within the same process.

Threads are a powerful tool in Java, enabling concurrent execution and enhancing application performance, responsiveness, and resource management. By understanding how to create and manage threads, we can develop more efficient and responsive applications.


Can you tell us more about race conditions? How can we prevent them?

A race condition occurs in a concurrent system when two or more threads or processes access shared resources at the same time and the outcome of the execution depends on the order in which the access occurs. This can lead to unpredictable and erroneous behaviour, as the final result may vary based on the timing of the thread execution.

Example: Imagine two threads attempting to increment the same counter variable. If both threads read the current value simultaneously, increment it, and then write it back, the counter ends up incremented only once instead of twice: one update is silently lost.
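This lost-update scenario is easy to reproduce. In the sketch below (the names are illustrative), counter++ is really three steps, a read, an add, and a write, so two threads incrementing 100,000 times each typically finish below the expected 200,000.

```java
// Sketch: a lost-update race on an unsynchronised counter.
public class RacyCounter {
    static int counter = 0;

    static void increment() {
        counter++; // read, add, write: another thread can interleave between them
    }

    public static void main(String[] args) throws InterruptedException {
        counter = 0;
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) increment();
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        // Expected 200000, but lost updates typically leave it lower.
        System.out.println(counter);
    }
}
```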

To prevent race conditions, you need to ensure that access to shared resources is properly synchronised. Here are some techniques to achieve this in Java:

1. Using the synchronized Keyword

The synchronized keyword can be applied to methods or code blocks, ensuring that only one thread at a time can execute the guarded code on a given monitor.
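Applied to the racy counter from the example above, a synchronized method serialises the increments. This is a sketch with illustrative names; the monitor here is the class object, since the method is static.

```java
// Sketch: the counter race fixed with the synchronized keyword.
public class SafeCounter {
    static int counter = 0;

    // Only one thread at a time may execute this method (it locks the class's monitor).
    static synchronized void increment() {
        counter++;
    }

    public static void main(String[] args) throws InterruptedException {
        counter = 0;
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) increment();
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println(counter); // reliably 200000
    }
}
```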

2. Using Locks

  • Understanding Locks in Concurrency

In concurrent programming, locks are mechanisms used to control access to shared resources by multiple threads. They ensure that only one thread can access the resource at a time, preventing race conditions and ensuring data consistency.

  • What Are Locks?

A lock provides a way to enforce mutual exclusion, allowing only one thread to enter a critical section of code or access a shared resource at a time. When a thread acquires a lock, it gains exclusive access to the resource, and no other thread can access it until the lock is released.

  • Types of Locks

There are several types of locks, each with its own characteristics and use cases:

- Mutex (Mutual Exclusion) Lock:

○ A basic type of lock that ensures mutual exclusion.

○ Only one thread can hold the lock at a time.

○ Used to protect critical sections and ensure thread safety.

- Read-Write Lock:

○ Allows multiple threads to read a shared resource concurrently but ensures that only one thread can write to it at a time.

○ Enhances performance in scenarios with more frequent read operations compared to write operations.

- Spin Lock:

○ A lock where a thread repeatedly checks (spins) until the lock becomes available.

○ Minimises context switching overhead but can lead to high CPU usage if many threads are waiting.

- Recursive Lock:

○ Allows the same thread to acquire the lock multiple times without causing a deadlock.

○ Useful in scenarios where a thread needs to re-enter a critical section it already owns.
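In Java, these lock types map onto the java.util.concurrent.locks package: ReentrantLock is a recursive mutex, and ReentrantReadWriteLock provides the read-write variant. A minimal sketch with an illustrative class name:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: a read-write lock guarding a single shared value.
public class LockedStore {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value = 0;

    // Many readers may hold the read lock at the same time.
    public int read() {
        rw.readLock().lock();
        try {
            return value;
        } finally {
            rw.readLock().unlock(); // always release in a finally block
        }
    }

    // The write lock is exclusive: readers and other writers must wait.
    public void write(int v) {
        rw.writeLock().lock();
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```

The lock/try/finally shape is the idiomatic pattern with explicit locks: unlike synchronized, nothing releases the lock automatically if the guarded code throws.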

3. Using Atomic Variables

The java.util.concurrent.atomic package provides atomic variables that support lock-free, thread-safe programming. Operations on these variables are performed atomically, avoiding the need for explicit synchronisation.
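The counter example can be rewritten with AtomicInteger, which makes each increment a single indivisible operation with no lock at all (a sketch; names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a lock-free counter using AtomicInteger.
public class AtomicCounter {
    static final AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            // incrementAndGet performs read-modify-write as one atomic step.
            for (int i = 0; i < 100_000; i++) counter.incrementAndGet();
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println(counter.get()); // reliably 200000, without any lock
    }
}
```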


Race conditions can lead to unpredictable behaviour and hard-to-debug issues in concurrent applications. By using synchronisation techniques such as the synchronized keyword, ReentrantLock, or atomic variables, you can ensure that shared resources are accessed in a thread-safe manner, preventing race conditions.

  


What is important to know about non-volatile and volatile variables?

Understanding the distinction between non-volatile and volatile variables is crucial when working with concurrent programming in Java. Let's explore the key differences and their implications:

1. Non-Volatile Variables

Non-volatile variables are the default in Java. They do not have any special concurrency guarantees and their use in a multithreaded context can lead to visibility issues.

Key Points:

  • Visibility Issues: Changes made by one thread may not be immediately visible to other threads. This is because threads may cache values in their local memory, causing other threads to read stale data.
  • Synchronisation Required: To ensure visibility and consistency, you must use synchronisation mechanisms (e.g., synchronized blocks, locks) when accessing non-volatile variables in a concurrent environment.
  • Usage Scenario: Non-volatile variables are suitable for use in single-threaded applications or when proper synchronisation mechanisms are in place for concurrent access.


2. Volatile Variables

Volatile variables provide a lightweight synchronisation mechanism. The volatile keyword ensures that a variable is always read from and written to main memory, preventing threads from caching its value.

Key Points:

  • Visibility Guarantee: Changes made to a volatile variable by one thread are immediately visible to all other threads. The volatile keyword ensures that updates to the variable are propagated predictably.
  • No Atomicity: While volatile ensures visibility, it does not guarantee atomicity for compound operations (e.g., incrementing a value). For atomicity, additional synchronisation is required.
  • Usage Scenario: Volatile variables are suitable for flags, state indicators, or other simple variables where visibility is the primary concern and atomicity is not required.
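The classic usage scenario is a stop flag, sketched below with illustrative names. Without volatile, the worker thread could cache the value of running (or the JIT could hoist the read out of the loop) and the worker might never observe the update.

```java
// Sketch: a volatile flag used to stop a worker thread.
public class StoppableWorker {
    // volatile guarantees the worker always sees the latest value of the flag.
    static volatile boolean running = true;
    static long iterations = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) iterations++; // loops until the flag flips
        });
        worker.start();
        Thread.sleep(50);    // let the worker spin briefly
        running = false;     // this write is immediately visible to the worker
        worker.join(1000);   // the worker exits promptly
        System.out.println("worker stopped after " + iterations + " iterations");
    }
}
```

Note that iterations++ itself is not atomic; it is safe here only because a single thread writes it, and the main thread reads it after join(). This is exactly the "visibility without atomicity" trade-off described above.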


Conclusion

Understanding the behaviour of non-volatile and volatile variables is essential for writing correct and efficient concurrent programs. Non-volatile variables require explicit synchronisation to ensure visibility and consistency, while volatile variables provide a lightweight mechanism for ensuring visibility. However, volatile does not guarantee atomicity for compound operations, so additional synchronisation may still be necessary.


What should we know about atomic variables?

Atomic variables, available in the java.util.concurrent.atomic package, offer a lock-free and thread-safe mechanism to handle operations on single variables. These classes — like AtomicInteger, AtomicLong, AtomicBoolean, and AtomicReference — ensure that a read-modify-write operation (such as an increment) is performed atomically. This means that the operation happens in a single, indivisible step, avoiding the traditional pitfalls of race conditions.

Key Characteristics

  • Lock-Free Mechanism: Atomic variables internally use low-level hardware support to ensure that changes are applied atomically. This bypasses the need for explicit synchronisation or locks in many cases, reducing overhead.
  • Visibility Guarantees: Changes made to an atomic variable by one thread are immediately visible to all other threads, thanks to the memory consistency effects defined in the Java Memory Model. This is similar to the visibility guarantee of a volatile variable, but comes with the added benefit of atomicity for compound actions.
  • Built-in Atomic Methods: Methods like getAndIncrement(), incrementAndGet(), getAndAdd(), and compareAndSet() allow us to perform common operations safely under concurrent access without resorting to locks.
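compareAndSet is the building block behind many of these methods: it writes a new value only if the variable still holds the value we last read, and reports whether it succeeded. A common idiom is the retry loop sketched below (class and method names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: an optimistic, lock-free update built on compareAndSet.
public class CasExample {
    static final AtomicInteger value = new AtomicInteger(10);

    // Doubles the value atomically: re-read and retry if another thread
    // changed it between our read and our write.
    static int doubleValue() {
        int current, next;
        do {
            current = value.get();
            next = current * 2;
        } while (!value.compareAndSet(current, next)); // succeeds only if unchanged
        return next;
    }
}
```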



When should we use atomic variables?

Atomic variables are particularly useful in scenarios where you need simple, thread-safe operations on single variables. Here’s when they shine:

  • Simple Counters or Flags: When you need to track counts, flags, or other singular state items where the operations (such as increment or set) must be atomic. For instance, maintaining a counter of processed events or toggling a boolean state asynchronously.
  • Performance-Sensitive Concurrency: In high-concurrency applications where you wish to minimise the overhead of thread contention and context switching that comes with traditional synchronisation mechanisms (e.g., synchronised blocks). Because atomic classes use low-level CPU instructions, they can be significantly faster, especially under heavy contention.
  • Non-Compound Operations: If your operation involves a single variable and does not require coordination with updates to multiple variables, atomic variables are ideal. For complex updates involving multiple state items, explicit locks or other higher-level synchronisation constructs might be more appropriate.

Atomic variables offer an elegant, efficient solution for many common concurrency problems in Java. They make it simpler to implement thread-safe operations on single variables without incurring the overhead of traditional locks. By leveraging these classes, you can write cleaner and leaner concurrent code — especially when dealing with simple counters, flags, or state indicators.

However, always analyse the complexity of your operations. For simple, singular updates, atomic variables are ideal; but for multi-variable, compound actions, you might need more robust synchronisation.


Why is the topic about Concurrent and Multi-threading programming with Java so important?

The importance of concurrent and multi-threading programming in Java cannot be overstated. It is critical for leveraging modern hardware, building responsive and scalable applications, and mastering the intricacies of software design in complex environments. By understanding and applying these concepts, developers are not only able to create high-performance applications, but can also anticipate and solve issues that arise in real-world, multi-user, high-concurrency systems.

Learning these skills ensures that you’re prepared to tackle modern computing challenges head-on — whether you're developing for desktop, server or mobile platforms.
