Concurrency is simply doing multiple things at the same time, just like you are doing different things in different tabs while also reading this article.
Each line is executed one by one. Synchronous code is also called blocking. This is because line 2 can’t be executed until line 1 is done executing. You might not notice the “blocking” in this example because the operations are usually quick.
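The snippet this paragraph and the next one refer to is missing from this version of the article. A minimal sketch, assuming a hypothetical `slowNetworkCall` simulated here with a busy-wait instead of a real network request, might look like this:

```javascript
// slowNetworkCall is a stand-in for the article's hypothetical blocking call;
// it is simulated with a busy-wait instead of a real network request.
function slowNetworkCall() {
  const start = Date.now();
  while (Date.now() - start < 200) {} // blocks the thread (imagine two minutes)
  return "response";
}

console.log("Step 1");           // "line 1": executes immediately
const data = slowNetworkCall();  // "line 2": blocks everything below it
console.log("Step 3:", data);    // "line 3": only runs after line 2 returns
```

The comments mark which "line" each statement corresponds to in the description below; the function body itself is just an illustration of a slow, synchronous operation.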
In the above example, line 2 involves a network call that takes two minutes to complete. While this call is happening, you can’t really move to line 3, even though your machine has spare computing capacity at that time.
Before we discuss concurrency in detail, let us try to define some fundamental concepts that are closely associated.
Non-Blocking: To put it simply, non-blocking means that the execution of the program will not be blocked by some independent code that takes unusually long to execute. If line 1 has a slow operation, it should not block line 2, which is independent of it, from executing.
Note: The concept of non-blocking applies only when two operations are independent of each other. Any succeeding operation which has a dependency on the result of the previous operation should always wait.
Concurrent: When multiple operations happen simultaneously, they are said to be concurrent.
Asynchronous: Asynchronous code can start an operation and move on without waiting for it to finish. This is unlike synchronous code, which blocks the execution of the next lines of code until the current line finishes executing. We will dive deeper in the sections to come.
The execution context of the code currently being executed will be on top of the stack. Once it is done executing, the execution context is popped out of the stack. This data structure is famously known as the call stack.
Let’s understand with an example:
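The original snippet is not included in this version of the article; a minimal reconstruction, with the function and message names taken from the step-by-step walkthrough below, might look like this:

```javascript
function sayHello() { console.log("Hello Harsh"); }
function communication() { console.log("This is how code is executed in JS"); }
function sayBye() { console.log("Bye Harsh"); }
console.log("Communication Started"); // line 4
sayHello();                           // line 5
communication();
sayBye();
console.log("Communication Over");
```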
Visualisation To Understand The Above Example
Here’s what shall happen during the execution of the above code.
- A global execution context is created and immediately pushed onto the call stack, which was empty until now. Once the context is on the call stack, the JS engine begins executing the code.
- When the control reaches line 4, it logs “Communication Started” on the console.
- On line 5, the control finds a function invocation, which creates a local execution context; the execution context of sayHello is immediately pushed on top of the call stack.
- The JS engine only executes the code within the context at the top of the stack, so the code inside sayHello begins executing and “Hello Harsh” is printed on the console.
- Since there is no further code to execute, the context of sayHello is popped off the stack, and the control moves to the next line.
- The control finds another function invocation, and steps 3 to 5 repeat for the communication function. As a result, “This is how code is executed in JS” is printed on the console.
- When the control reaches the next line, a similar process follows, and “Bye Harsh” is printed on the console.
- Now, the control is back with the global execution context, which was only half done executing. The control picks up where it left off, and “Communication Over” is logged.
- Since there is no code left in the global execution context, it too is popped off the call stack, and the call stack is empty.
But Where is Concurrency?
Back to an Example:
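The snippet is missing from this version of the article. A minimal reconstruction consistent with the discussion below (the 2-second delay and the callback's message are assumptions) could be:

```javascript
console.log("Start");

setTimeout(() => {
  console.log("Callback executed"); // runs later, after the delay
}, 2000);

console.log("End");
```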
What do you think will be the output of the above lines of code?
- You might think that whenever setTimeout is called, the call stack holds setTimeout and waits for the delay to pass before moving on to the next line.
- In reality, when setTimeout is called, the function call goes to the top of the call stack but is immediately popped, and the next line gets executed right away (without waiting for the delay).

When the code begins executing, the first thing that is logged on the console is “Start”. This is pretty obvious.
When the control comes to the second line, setTimeout is called, and a callback function is registered for future execution.
The control does not wait for the delay to pass. Essentially, the delay does not mean “a delay in the program”; rather, it means “a delay in the execution of the callback function”. Those are two very different things.
Note: Not all callbacks have the same priority. Some callbacks use a special queue known as the microtask queue (or job queue), which has a higher priority than the callback queue. This introduces more interesting concepts, like starvation in a queue.
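To see the priority difference, here is a short sketch (not from the original article) that contrasts a promise microtask with a setTimeout callback. Even with a 0 ms delay, the promise callback runs first because the microtask queue is drained before the callback queue:

```javascript
console.log("Start");

setTimeout(() => console.log("macrotask: setTimeout"), 0); // callback queue
Promise.resolve().then(() => console.log("microtask: promise")); // microtask queue

console.log("End");
// Order: Start, End, microtask: promise, macrotask: setTimeout
```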
Here’s a more practical example:
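The snippet is missing from this version of the article. The name `asyncNetworkCall` comes from the prose below; its body here is a hypothetical simulation using setTimeout in place of a real network request:

```javascript
// asyncNetworkCall is hypothetical: setTimeout stands in for a real request.
function asyncNetworkCall() {
  setTimeout(() => {
    console.log("Network response received"); // runs later, via the callback queue
  }, 2000);
}

console.log("Making the network call");            // line 1
asyncNetworkCall();                                // line 2: schedules the callback, returns immediately
console.log("Logged while the call is in flight"); // line 3: does not wait
```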
Assume that asyncNetworkCall is implemented as an asynchronous function. This means that while the network call is still in flight, the control can move to line 3 and log something to the console.
But How does it work?
Let’s begin by stating the universal truth: “The call stack has to be empty for the event loop to bring anything into it”.
Whenever the call stack is empty, the event loop, which constantly observes both the call stack and the callback queue, moves the oldest entry in the callback queue onto the call stack.
Did You Notice?
A setTimeout of ‘n’ milliseconds does not guarantee execution exactly when ‘n’ milliseconds have passed; rather, it guarantees a minimum delay of ‘n’. This is because, after the defined delay, the callback function moves to the callback queue (signaling that it is ready for execution), where the event loop picks it up for actual execution.
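This minimum-delay behaviour can be demonstrated with a small sketch (not from the original article) that blocks the call stack after scheduling a zero-delay timeout. The callback is ready almost immediately, but the event loop cannot move it to the call stack until the synchronous loop finishes:

```javascript
// A zero-delay timeout still waits for the call stack to empty.
setTimeout(() => console.log("callback: ran only after the stack emptied"), 0);

const begin = Date.now();
while (Date.now() - begin < 500) {} // block the call stack for ~500 ms

console.log("synchronous work done"); // prints first, despite the 0 ms delay
```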
- JS is single threaded. It has only one call stack.
- For any event from the callback queue/job queue to come to the call stack, the call stack has to be empty.