2. • A thread library provides the programmer with an API for creating and managing threads.
• There are two primary ways of implementing a thread library.
• First approach
– is to provide a library entirely in user space with no kernel support.
– All code and data structures for the library exist in user space, which means that invoking a function in the library results in a local function call in user space and not a system call.
• Second approach
– is to implement a kernel-level library supported directly by the operating system.
– In this case, code and data structures for the library exist in kernel space.
3. • Three main thread libraries are in use today:
 POSIX Pthreads
 Windows Threads
 Java Threads
• Two general strategies for creating multiple threads:
 Asynchronous threading
 Synchronous threading
• In asynchronous threading, once the parent creates a child thread, the parent resumes its execution, so that the parent and child execute concurrently.
• Each thread runs independently of every other thread, and the parent thread need not know when its child terminates.
• Because the threads are independent, there is typically little data sharing between threads.
4. • Synchronous threading occurs when the parent thread creates one or more children and then must wait for all of its children to terminate before it resumes, the so-called fork-join strategy.
• Once each thread has finished its work, it terminates and joins with its parent. Only after all of the children have joined does the parent resume execution.
• Synchronous threading involves significant data sharing among threads.
• For example, the parent thread may combine the results calculated by its various children.
5. • The C program demonstrates the basic Pthreads API for constructing a multithreaded program that calculates the summation of a non-negative integer in a separate thread.
• If N is 5, this function would represent the summation of the integers from 0 to 5, which is 15.
7. #include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum; /* This data is shared by the thread(s) */
void *runner(void *param); /* Threads call this function */

int main(int argc, char *argv[])
{
    pthread_t tid;       /* The thread identifier */
    pthread_attr_t attr; /* Set of thread attributes */

    if (argc != 2)
    {
        fprintf(stderr, "usage: %s <integer value>\n", argv[0]);
        return -1;
    }
    if (atoi(argv[1]) < 0)
    {
        fprintf(stderr, "%d must be >= 0\n", atoi(argv[1]));
        return -1;
    }

    /* Get the default attributes */
    pthread_attr_init(&attr);
    /* Create the thread */
    pthread_create(&tid, &attr, runner, argv[1]);
    /* Wait for the thread to exit */
    pthread_join(tid, NULL);

    printf("sum = %d\n", sum);
    return 0;
}

/* The thread will begin control in this function */
void *runner(void *param)
{
    int i, upper = atoi(param);

    sum = 0;
    for (i = 1; i <= upper; i++)
        sum += i;

    pthread_exit(0);
}
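To try the example, compile it with the Pthreads library enabled (for instance, gcc sum.c -o sum -pthread, where the file name is illustrative) and pass the upper bound on the command line; running ./sum 5 should print sum = 15, matching the description above.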
8. • All Pthreads programs must include the pthread.h header file.
• When creating a new thread, we typically specify a function that the thread will run. This function is sometimes referred to as the "runner function" or "thread function."
• When this program begins, a single thread of control begins in main().
• After some initialization, main() creates a second thread that begins control in the runner() function.
• Both threads share the global data sum.
• pthread_t tid: declares a variable tid of type pthread_t, which is used to identify a thread.
• pthread_attr_t attr: declares a variable attr of type pthread_attr_t, which is used to store thread attributes.
9. if (argc != 2)
{
    fprintf(stderr, "usage: %s <integer value>\n", argv[0]);
    return -1;
}
if (atoi(argv[1]) < 0)
{
    fprintf(stderr, "%d must be >= 0\n", atoi(argv[1]));
    return -1;
}
This code block ensures that the program receives the correct number of command-line arguments and that the argument is a non-negative integer. If these conditions are not met, the program prints an error message and exits with a return code of -1, signaling an error state.
10. • A separate thread is created with the pthread_create() function call.
• In addition to passing the thread identifier and the attributes for the thread, we also pass the name of the function where the new thread will begin execution, in this case the runner() function.
• Last, we pass the integer parameter that was provided on the command line, argv[1].
• At this point, the program has two threads:
– the initial (or parent) thread in main()
– the summation (or child) thread performing the summation operation in the runner() function.
11. • pthread_join(): the parent thread waits for the created thread to finish before proceeding.
• printf("sum = %d\n", sum): prints the calculated sum after the thread has finished.
• The summation thread will terminate when it calls the function pthread_exit().
• Once the summation thread has returned, the parent thread will output the value of the shared data sum.
12. • Parent thread (main thread):
– Initialize attributes: pthread_attr_init(&attr);
– Create thread: pthread_create(&tid, &attr, runner, argv[1]);
– Wait for thread to finish: pthread_join(tid, NULL);
– Print result: printf("sum = %d\n", sum);
• Child thread (runner function):
– Calculate sum: for (i = 1; i <= upper; i++) sum += i;
– Exit thread: pthread_exit(0);
13. A simple method for waiting on several threads using the pthread_join() function is to enclose the operation within a simple for loop, as sketched below.
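For instance, here is a minimal sketch (not taken from the deck; the thread count, the workers array, and the trivial worker() function are illustrative) of creating several Pthreads and then joining each one in a for loop:

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 10

/* Trivial worker used only to illustrate the join loop. */
void *worker(void *param)
{
    pthread_exit(0);
}

int main(void)
{
    pthread_t workers[NUM_THREADS];
    pthread_attr_t attr;
    int i;

    pthread_attr_init(&attr);

    /* Create all worker threads. */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&workers[i], &attr, worker, NULL);

    /* Wait for each thread to terminate, one pthread_join() call per thread. */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(workers[i], NULL);

    printf("all %d threads have joined\n", NUM_THREADS);
    return 0;
}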
14. Windows Threads
• The technique for creating threads using the Windows thread library is similar to the Pthreads technique in several ways.
16. • The program defines a global variable Sum to store the sum of integers calculated by the thread(s).
• The Summation() function is the entry point for the thread.
• It takes a parameter Param (a pointer to an integer) and calculates the sum of the integers from 1 to the value pointed to by Param.
17. DWORD WINAPI Summation(LPVOID Param)
/* LPVOID is a generic pointer, indicating that the pointer can point to data of any type */
{
    /* Retrieve the value from the memory location pointed to by Param
       and store it in the variable Upper. */
    DWORD Upper = *(DWORD *)Param;

    for (DWORD i = 0; i <= Upper; i++)
        Sum += i;

    return 0;
}
18. int main(int argc, char *argv[])
{
    DWORD ThreadId;
    HANDLE ThreadHandle; /* declares a variable named ThreadHandle of type HANDLE */
    int Param;

    if (argc != 2)
    {
        fprintf(stderr, "An integer parameter is required\n");
        return -1;
    }
    Param = atoi(argv[1]);
    if (Param < 0)
    {
        fprintf(stderr, "An integer >= 0 is required\n");
        return -1;
    }
This code block ensures that the program receives the correct number of command-line arguments and that the argument is a non-negative integer. If these conditions are not met, the program prints an error message and exits with a return code of -1, signaling an error state.
20. • ThreadHandle = CreateThread(...) creates a new thread using the CreateThread() function with the parameters passed.
• if (ThreadHandle != NULL) { ... }: this condition checks whether thread creation succeeded by verifying that ThreadHandle is not NULL. If the thread was successfully created, the code inside the if block is executed.
21. • WaitForSingleObject(ThreadHandle, INFINITE): waits for the created thread to finish its execution. The WaitForSingleObject() function suspends the execution of the calling thread until the specified thread (in this case, ThreadHandle) terminates.
• INFINITE: wait indefinitely until the thread finishes.
• CloseHandle(ThreadHandle);: after the thread has finished, this line closes the handle to the thread.
22. • This code creates a thread, waits for it to finish, closes the thread handle to free up resources, and then prints the calculated sum to the console.
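Because the slide with the complete Windows listing is not reproduced above, the following is a minimal, self-contained sketch assembled from the fragments and calls the slides describe (CreateThread, WaitForSingleObject, CloseHandle); it is an approximation, not the deck's exact code, and the default CreateThread arguments are assumptions:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

DWORD Sum; /* shared by the thread(s) */

/* Thread entry point: sums the integers from 0 up to *Param. */
DWORD WINAPI Summation(LPVOID Param)
{
    DWORD Upper = *(DWORD *)Param;
    for (DWORD i = 0; i <= Upper; i++)
        Sum += i;
    return 0;
}

int main(int argc, char *argv[])
{
    DWORD ThreadId;
    HANDLE ThreadHandle;
    DWORD Param;

    if (argc != 2)
    {
        fprintf(stderr, "An integer parameter is required\n");
        return -1;
    }
    Param = atoi(argv[1]);

    /* Default security attributes, default stack size, Summation as the
       start routine, &Param as its argument, no creation flags. */
    ThreadHandle = CreateThread(NULL, 0, Summation, &Param, 0, &ThreadId);

    if (ThreadHandle != NULL)
    {
        /* Block until the summation thread terminates. */
        WaitForSingleObject(ThreadHandle, INFINITE);
        /* Release the thread handle now that the thread has finished. */
        CloseHandle(ThreadHandle);
        printf("sum = %lu\n", (unsigned long)Sum);
    }
    return 0;
}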
23. Java Threads
• All Java programs comprise at least a single thread of control; even a simple Java program consisting of only a main() method runs as a single thread in the JVM.
• There are two techniques for creating threads in a Java program.
• One approach is to create a new class that is derived from the Thread class and to override its run() method.
• Another, more commonly used, technique is to define a class that implements the Runnable interface.
24. • The Runnable interface is defined as follows:
public interface Runnable
{
    public abstract void run();
}
• When a class implements Runnable, it must define a run() method.
• The code implementing the run() method is what runs as a separate thread.
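As a quick illustration of both creation techniques (a minimal sketch, not the deck's example; the class names Worker, Task, and CreationDemo are placeholders):

// Technique 1: extend Thread and override run().
class Worker extends Thread {
    @Override
    public void run() {
        System.out.println("running in " + Thread.currentThread().getName());
    }
}

// Technique 2: implement Runnable and hand the object to a Thread.
class Task implements Runnable {
    @Override
    public void run() {
        System.out.println("running in " + Thread.currentThread().getName());
    }
}

public class CreationDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Worker();
        Thread t2 = new Thread(new Task());
        t1.start(); // each start() call launches a new thread that executes run()
        t2.start();
        t1.join();  // wait for both threads before main() exits
        t2.join();
    }
}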
27. – private int sum; : declares a private integer variable named sum. The private keyword indicates that this variable can be accessed only within the same class (Sum), not from outside.
• getSum() method:
public int getSum()
{
    return sum;
}
This method is a getter, which provides access to the private sum variable.
It allows other classes to retrieve the value of sum without directly accessing the variable.
The public keyword means that this method can be called from outside the class.
28. • getSum() is used to retrieve the value of the private sum variable, and the value can be modified using a corresponding setSum() method.
• public void setSum(int sum) declares a method named setSum that takes an integer parameter named sum.
• The method has a void return type, indicating that it doesn't return any value.
• this.sum = sum assigns the value of the parameter sum to the private variable sum of the class.
• The use of this.sum is necessary to distinguish between the class's private variable sum and the method's parameter sum.
29. class Summation implements Runnable
• The Summation class implements the Runnable interface, which means instances of this class can be executed by a separate thread.
• private int upper represents the upper limit for the summation. It is a private instance variable, meaning it can be accessed only within the Summation class.
• The private Sum sumValue variable is of type Sum and represents an instance of the Sum class. It is also a private instance variable.
30. • When the summation program runs, the JVM creates two threads. The first is the parent thread, which starts execution in the main() method.
• The second thread is created when the start() method on the Thread object is invoked. This child thread begins execution in the run() method of the Summation class.
• After outputting the value of the summation, this thread terminates when it exits from its run() method.
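The slides containing the full Java listing are not reproduced above; the following is a plausible minimal sketch of the classes the deck describes (Sum, Summation, and a driver). The driver class name and the constructor signature are assumptions:

// Holder for the shared result, with the getter and setter described on the slides.
class Sum {
    private int sum;

    public int getSum() { return sum; }

    public void setSum(int sum) { this.sum = sum; }
}

// Runnable whose run() method computes the summation from 0 to upper.
class Summation implements Runnable {
    private int upper;
    private Sum sumValue;

    public Summation(int upper, Sum sumValue) {
        this.upper = upper;
        this.sumValue = sumValue;
    }

    @Override
    public void run() {
        int total = 0;
        for (int i = 0; i <= upper; i++)
            total += i;
        sumValue.setSum(total);
        System.out.println("sum = " + sumValue.getSum()); // child thread reports the result
    }
}

// Driver: the parent thread creates the child thread and starts it.
public class Driver {
    public static void main(String[] args) throws InterruptedException {
        int upper = Integer.parseInt(args[0]);
        Sum sumObject = new Sum();
        Thread worker = new Thread(new Summation(upper, sumObject));
        worker.start();   // the child thread begins executing run()
        worker.join();    // parent waits for the child before exiting
    }
}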