The post Quicksort algorithm – Lomuto partition scheme appeared first on Learn To Code Together.

In this article, I want to introduce you to one of the most efficient sorting algorithms of all time, named Quicksort. This sorting algorithm was invented by Tony Hoare in 1959 (which was a really long time ago, but it is still widely used today!). By the end of this article, I hope you will understand what the quicksort algorithm is, how it can be implemented, and its time and space complexity.

Quicksort works based on the “divide and conquer” paradigm, which means quicksort is a recursive algorithm. Divide and conquer is a paradigm in which a problem is divided into multiple subproblems of the same type, each subproblem is solved independently, and finally the sub-solutions are combined to solve the original problem.

Quicksort applies the divide and conquer paradigm as follows:

- First, pick an element from the array, called the **pivot**.
- Partition the array by dividing it into two smaller arrays: rearrange the elements so that all elements **less than** the **pivot** are moved to the **left array**, and all elements **greater than** the **pivot** are moved to the **right array**.
- Then **recursively** apply these 2 steps to the smaller sub-arrays until all of them are sorted.

There are various implementations of the Quicksort algorithm; here we examine the pseudocode of one of them, which sorts the elements from `low` to `high` of an array `A`:

```
/* Lomuto partition scheme for quicksort algorithm */
algorithm quicksort(A, lo, hi) is
    if lo < hi then
        p := partition(A, lo, hi)
        quicksort(A, lo, p - 1)
        quicksort(A, p + 1, hi)

algorithm partition(A, lo, hi) is
    pivot := A[hi]
    i := lo
    for j := lo to hi - 1 do
        if A[j] < pivot then
            swap A[i] with A[j]
            i := i + 1
    swap A[i] with A[hi]
    return i
```

Let’s make things clearer with an example: say we have the array `[2, 9, 7, 6, 4, 3, 8, 5]`. **The first step** is to pick a pivot element; for the sake of simplicity, we pick the last element, so our pivot here is `5`. In **the second step** we move all elements less than 5 to the left and all greater elements to the right. After partitioning, our array looks like `[2, 4, 3, 5, 9, 7, 8, 6]`: the pivot is now placed in its correct position, but as we can easily notice, the elements in the left and right intervals are not yet in order. In **the third step** we recursively apply these two steps to the sub-arrays. For the left sub-array `[2, 4, 3]` we pick 3 as the pivot and reorder it into `[2, 3, 4]`; for the right sub-array `[9, 7, 8, 6]`, 6 is picked as the pivot and it becomes `[6, 7, 8, 9]`.

Here is how our array changes after the first call to the `partition` function:

After the first call to the `partition` function is settled, the pivot is placed correctly and its position is obtained. Then, we can further call the `quicksort` function on the left and right sides of the pivot.

Now the left subarray `[2, 4, 3]` has 3 elements, and 3 is picked as the pivot. After partitioning, this subarray becomes `[2, 3, 4]` and is considered sorted. The index of the pivot is now 1, and at this point no more recursive calls are made on the smaller pieces of this subarray because we have reached the base case: the condition `(lo < hi)` in the pseudocode is no longer true for `quicksort(arr, 0, 0)` and `quicksort(arr, 2, 2)`.

The right subarray becomes totally sorted as `[6, 7, 8, 9]`. After moving the pivot to its precise position, we get 4 as the index used to further divide this subarray (notice that this index refers to the element's position in the original array). We then call `quicksort` on its left with `quicksort(A, 4, 3)`, so no recursive call is made on the left side; on the right we apply `quicksort(arr, 5, 7)`, and the recursive process continues until the base condition `lo >= hi` is met.
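The partitioning step of this walkthrough can be checked with a small, self-contained sketch (the class name and `partition` helper below are mine, mirroring the pseudocode above):

```java
import java.util.Arrays;

public class PartitionTrace {
    // Lomuto partition: the pivot is the last element; returns its final index
    public static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp; // move the smaller element to the left part
                i++;
            }
        }
        int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp; // place the pivot at its final position
        return i;
    }

    public static void main(String[] args) {
        int[] a = {2, 9, 7, 6, 4, 3, 8, 5};
        int p = partition(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a) + ", pivot index = " + p);
        // → [2, 4, 3, 5, 9, 7, 8, 6], pivot index = 3
    }
}
```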

Let’s implement the Quicksort algorithm in Java using the Lomuto partition scheme:

```
public class QuickSort {
    public static void quickSort(int[] arr, int low, int high) {
        if (low < high) {
            int partition = partition(arr, low, high); // get the correct position of the pivot
            quickSort(arr, low, partition - 1);        // call quickSort on the sub-left array
            quickSort(arr, partition + 1, high);       // call quickSort on the sub-right array
        }
    }

    public static int partition(int[] arr, int low, int high) {
        int pivot = arr[high]; // pick the last element as the pivot
        int i = low;           // initially set i to low
        for (int j = low; j < high; j++) { // loop through the array elements
            if (arr[j] < pivot) {
                swap(arr, i, j); // move the element less than the pivot to the left by swapping arr[i] with arr[j]
                i++;             // advance i to the next position
            }
        }
        swap(arr, i, high); // swap the pivot into its correct position
        return i;           // return the index of the pivot
    }

    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}
```

Let’s create some tests:

```
public static void main(String[] args) {
    int[] arr = new int[10];
    Random random = new Random();
    for (int i = 0; i < 10; i++) {
        arr[i] = random.nextInt(100);
    }
    System.out.println("Before sorting: ");
    System.out.println(Arrays.toString(arr));
    quickSort(arr, 0, arr.length - 1);
    System.out.println("After sorting: ");
    System.out.println(Arrays.toString(arr));
}
```

Output:

```
Before sorting:
[73, 49, 87, 74, 51, 40, 11, 42, 85, 98]
After sorting:
[11, 40, 42, 49, 51, 73, 74, 85, 87, 98]
```

Quicksort is an unstable sorting algorithm, meaning the relative order of equal elements is not necessarily preserved by the sort. There are a number of different implementations of Quicksort, and what we’ve learned is one of them. Another popular one is the Hoare partition scheme, which uses two indices that start at the ends of the array being partitioned and move toward each other; whenever they detect an inversion (a pair of elements on the wrong sides of the pivot), those elements are swapped, and partitioning stops when the indices cross.
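For illustration, here is a hedged sketch of the Hoare scheme in Java (names are mine, and this is one common formulation rather than the only one; note that the recursive call on the left half includes the returned index `p`):

```java
import java.util.Arrays;

public class HoareQuickSort {
    // Hoare partition: two indices walk toward each other, swapping inversions
    public static int partition(int[] a, int lo, int hi) {
        int pivot = a[lo];
        int i = lo - 1, j = hi + 1;
        while (true) {
            do { i++; } while (a[i] < pivot);
            do { j--; } while (a[j] > pivot);
            if (i >= j) return j; // indices crossed: partition point found
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
        }
    }

    public static void quickSort(int[] a, int lo, int hi) {
        if (lo < hi) {
            int p = partition(a, lo, hi);
            quickSort(a, lo, p);     // the left half includes index p here
            quickSort(a, p + 1, hi);
        }
    }

    public static void main(String[] args) {
        int[] a = {2, 9, 7, 6, 4, 3, 8, 5};
        quickSort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // → [2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```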

In the ideal case, each partition divides the array into 2 nearly equal pieces, which means each recursive call processes a list of half the size. Hence, we only need about log₂(n) nested calls before we reach a list of size 1, meaning the depth of the recursion tree is O(log n), and each level of calls needs only O(n) time altogether. Thus, the best-case complexity of this algorithm is O(n log n). The time complexity for the average case is also O(n log n).
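That best-case bound can also be read off the standard divide-and-conquer recurrence (a textbook argument, not specific to this article's code):

```
T(n) = 2·T(n/2) + Θ(n)   // two half-size recursive calls plus a linear-time partition
     = Θ(n log n)        // e.g. by the master theorem
```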

However, if quicksort is implemented using the Lomuto partition scheme, the worst case of this algorithm degrades to O(n²) time. The most unbalanced partition occurs when one of the sublists returned by the partitioning routine is of size 0, which happens when the pivot is the smallest or the largest element of the array. When this happens on every partition, each recursive call processes a list whose size is one less than the previous list, and the recursion tree becomes a linear chain of nested calls. The first call takes O(n) time, the next O(n − 1), and so on: the i-th recursive call takes O(n − i), and the sum over all calls is O(n²). Thus, Quicksort takes O(n²) in the worst case.
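The worst case corresponds to the recurrence in which one side of every partition is empty:

```
T(n) = T(n - 1) + Θ(n)
     = Θ(n + (n - 1) + … + 2 + 1)
     = Θ(n(n + 1) / 2) = Θ(n²)
```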

Quicksort is an in-place sorting algorithm, meaning no auxiliary data structure is needed for it to do its work. When carefully implemented, the space complexity of Quicksort is O(log n) even in the worst case: each partition takes only O(1) extra space, and O(log n) stack frames suffice for all the recursive calls.
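A common way to get this bound is to recurse only on the smaller partition and loop on the larger one, so the stack never holds more than O(log n) frames. Here is a hedged sketch of that idea (names are mine; the partition is the same Lomuto routine as above):

```java
import java.util.Arrays;

public class LogSpaceQuickSort {
    public static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                i++;
            }
        }
        int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;
        return i;
    }

    // Recurse on the smaller partition, iterate on the larger one,
    // so the recursion depth stays within O(log n).
    public static void quickSort(int[] a, int lo, int hi) {
        while (lo < hi) {
            int p = partition(a, lo, hi);
            if (p - lo < hi - p) {
                quickSort(a, lo, p - 1); // smaller side: recurse
                lo = p + 1;              // larger side: continue the loop
            } else {
                quickSort(a, p + 1, hi);
                hi = p - 1;
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 8, 0, 3};
        quickSort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // → [0, 1, 2, 3, 4, 5, 8]
    }
}
```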

However, in the naive approach, the stack space required by this algorithm can grow to O(n) in the worst case.

To learn more about Quicksort and more precise and rigorous mathematical proofs, check out some of these references:


The post 905. Sort Array By Parity – In-place Solution | LeetCode Problem appeared first on Learn To Code Together.

Given an array `A` of non-negative integers, return an array consisting of all the even elements of `A`, followed by all the odd elements of `A`.

You may return any answer array that satisfies this condition.

**Example 1:**

```
Input: [3,1,2,4]
Output: [2,4,3,1]
```

The outputs [4,2,3,1], [2,4,1,3], and [4,2,1,3] would also be accepted.

**Note:**

- `1 <= A.length <= 5000`
- `0 <= A[i] <= 5000`

The template code which is already provided on LeetCode:

```
class Solution {
    public int[] sortArrayByParity(int[] A) {

    }
}
```

We are going to tackle this problem with 2 different approaches in this article: the first one requires an auxiliary array, while in the second we try a different way and solve the problem in-place, using Java.

In Java, the first approach we can come up with is to sort the array based on parity and return the sorted array. We can use the `sort` function of the `Arrays` class for this purpose; however, this `sort` overload takes an array of generic type `T[]` to be sorted, and the `Comparator` we pass to `sort` to perform the custom sort also needs the argument type `T`. Hence, before sorting, we must create an auxiliary array of type `Integer[]` and copy all elements of the provided array into it:

```
Integer[] nums = new Integer[A.length];
for (int i = 0; i < A.length; i++) {
    nums[i] = A[i];
}
```

Now we perform the custom sort on the array we’ve just created:

```
Arrays.sort(nums, (a, b) -> Integer.compare(a % 2, b % 2));
```

With this custom sort, even elements are always placed before odd ones. We can also utilize the `Comparator.comparingInt` method:

```
Arrays.sort(nums, Comparator.comparingInt(a -> a % 2));
```

After we have an array that satisfies the problem, we need to copy its elements back into the original array, because an `int[]` must be returned:

```
for (int i = 0; i < nums.length; i++) {
    A[i] = nums[i];
}
return A;
```

The complete code for the first approach:

```
public int[] sortArrayByParity(int[] A) {
    Integer[] nums = new Integer[A.length];
    for (int i = 0; i < A.length; i++)
        nums[i] = A[i];
    Arrays.sort(nums, (a, b) -> Integer.compare(a % 2, b % 2));
    for (int i = 0; i < A.length; i++)
        A[i] = nums[i];
    return A;
}
```

**Time complexity: O(n log n) – the sort dominates, where n is the size of the array.
Space complexity: O(n) – for the auxiliary `Integer[]` array.**

In the second approach, we will try to find a way to modify the array in-place, which means no auxiliary data structure is needed for solving this problem.

To do that, we create 2 pointers: one starts at the beginning and moves forward (named `i`), and one moves backward from the end (named `j`). The invariant here is that everything before `i` must be even, and everything after `j` must be odd. The pair `(A[i] % 2, A[j] % 2)` has 4 possible cases:

- If it is `(0, 1)`, then we know that `A[i]` is even and `A[j]` is odd; they are in the correct order, so we move on: `i++, j--`.
- If it is `(1, 0)`, then we know they are in the reverse order; we simply swap them and move on.
- If it is `(0, 0)`, then only `A[i]` is correct, so `i++`.
- If it is `(1, 1)`, then only `A[j]` is correct, so we move to the next `j` position: `j--`.

We continue this process while `i` is less than `j`. When they meet, we know we have gone through all the elements and we can stop:

```
public int[] sortArrayByParity(int[] A) {
    int i = 0, j = A.length - 1; // 2 pointers
    while (i < j) {
        if (A[i] % 2 > A[j] % 2) { // A[i] is odd and A[j] is even: both are misplaced, swap them
            int tmp = A[i];
            A[i] = A[j];
            A[j] = tmp;
        }
        if (A[i] % 2 == 0) i++; // A[i] is in a correct position, move forward
        if (A[j] % 2 == 1) j--; // A[j] is in a correct position, move backward
    }
    return A; // the array after sorting by parity
}
```

**Time Complexity: O(n)** – where n is the size of the array.
**Space Complexity: O(1)** – we don't create any auxiliary data structure and just return the original array after modifying it in place.


The post Understand Tree Traversal: Pre-order, In-order, and Post-order Traversal appeared first on Learn To Code Together.

A tree can be traversed in not just one but several ways. The most ubiquitous ones are **pre-order traversal**, **in-order traversal**, **post-order traversal** and **level-order traversal** (Breadth-First Traversal).

First, to better envision them, let’s look at some examples real quick:

In this article, we’re going to dive deep into **In-order**, **Pre-order**, **Post-order** traversals in Binary Search Tree and understand how they work in a recursive manner.

But before we go: if you already know about recursion, how it works, and what the call stack and activation records are, that's a great help.

**What do I mean by traversing the tree?**

Tree traversal is the process of **exploring** all the nodes in the tree exactly **once**. You can implement tree traversal either iteratively or recursively. However, we're interested in doing this with recursion.

With in-order traversal, the left child is explored first, then the root is visited, and then the right child. With that in mind, we are going to implement a recursive procedure with the following steps:

- Recursively visit the left subtree
- Visit the node
- Recursively visit the right subtree

What do I mean by “recursively visit” the left or right subtree, rather than simply “visit” those positions? Let’s see another example:

The node with the value 24 is the root of the tree, and it has 4 descendants in its left subtree: 14, 12, 17 and 8. Which one is visited first? The answer is the last one. Initially, we get to the root node 24, then recursively explore the left, reaching the left child 14. Here the recursion is applied again: it continues to the left child of 14, which is 12, and from 12 it again traverses the left part and finds the value 8. At 8 it can no longer explore deeper, so the algorithm prints the value 8 and goes back to the previous recursive call, which is not yet done: it prints the node 12, then tries to reach the right child of 12, but there is nothing there, so it backtracks to the node 14. This procedure continues again and again until there is nothing left.

During an **inorder traversal**, we visit a position between the recursive traversals of its left and right subtrees. The inorder traversal of a binary tree *T* can be informally viewed as visiting the nodes of *T* “**from left to right.**”

For now, let’s carefully finalize the in-order traversal method with the recursive approach:

- Create a method named **inOrder(Node node)**. The recursive function keeps working while the **node** is not null.
- Recursively call **inOrder(node.left)** on its left child.
- Perform the “visit” action for the position **node**.
- Recursively call **inOrder(node.right)** on its right child.

```
private void inOrder(Node node) {
    if (node != null) {
        /** recursively calls inOrder on the left sub-tree */
        inOrder(node.left);
        /** prints the value at the current node */
        System.out.println("The current node value is " + node.element);
        /** recursively calls inOrder method for the right subtree. */
        inOrder(node.right);
    }
}
```

Let’s recall the example I have given at the beginning of this section, put it to this method and see what happens:

```
          24
        /    \
      14      26
     /  \    /  \
   12    17 25   32
   /
  8
```

**Output:**

`8, 12, 14, 17, 24, 25, 26, 32`

Here we mark **inOrder** as a private method, and we are going to use a method that accepts no parameters and calls the **inOrder** method for public use. By doing so, the work of the in-order algorithm is done internally, and we don’t have to worry about which node to pass in when going outside the scope of this method:

```
public void inOrderTraversal() {
    inOrder(root);
}
```

The **root** variable we pass here in **inOrderTraversal** is an instance variable that is already defined in the class containing these 2 methods. The complete code of this class is shown below:

```
class BST<T extends Comparable<T>> {
    private Node root = null;
    private int nodeCount = 0;

    private class Node {
        T element;
        Node left, right;

        public Node(Node left, Node right, T element) {
            this.left = left;
            this.right = right;
            this.element = element;
        }
    }

    /** Utility method for adding an element to the BST. */
    private Node insert(Node node, T value) {
        if (node == null) {
            node = new Node(null, null, value);
        } else if (value.compareTo(node.element) < 0) {
            node.left = insert(node.left, value);
        } else if (value.compareTo(node.element) > 0) {
            node.right = insert(node.right, value);
        } else {
            return node; // the value already exists: leave the tree unchanged
        }
        return node;
    }

    public void add(T value) {
        root = insert(root, value);
        nodeCount++;
    }

    /** Our work is here, the in-order traversal method in BST. */
    private void inOrder(Node node) {
        if (node != null) {
            inOrder(node.left);
            System.out.println(node.element);
            inOrder(node.right);
        }
    }

    /** method with no parameter */
    public void inOrderTraversal() {
        inOrder(root);
    }

    /** Test in-order traversal */
    public static void main(String[] args) {
        BST<Integer> tree = new BST<>();
        tree.add(24);
        tree.add(14);
        tree.add(12);
        tree.add(8);
        tree.add(17);
        tree.add(26);
        tree.add(25);
        tree.add(32);
        tree.inOrderTraversal();
        /** Output: 8, 12, 14, 17, 24, 25, 26, 32 */
    }
}
```

As we implement the in-order traversal algorithm on a Binary Search Tree, an interesting property is easy to notice: the output is sorted in **ascending order**. Besides this, the in-order traversal algorithm can be used on binary trees that represent arithmetic expressions.
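To illustrate that last point, here is a small, self-contained sketch (the class and node names are mine, not from this article's BST): an in-order walk of an expression tree for `(2 + 3) * 4` yields the operands and operators in infix order (a full printer would also need to emit parentheses):

```java
public class ExprDemo {
    // A minimal expression-tree node: operators are inner nodes, operands are leaves
    public static class Node {
        String val;
        Node left, right;
        Node(String v) { val = v; }
    }

    // In-order walk: left subtree, then the node itself, then the right subtree
    public static String inOrder(Node n) {
        if (n == null) return "";
        return inOrder(n.left) + n.val + inOrder(n.right);
    }

    public static void main(String[] args) {
        Node mul = new Node("*");
        Node plus = new Node("+");
        plus.left = new Node("2");
        plus.right = new Node("3");
        mul.left = plus;
        mul.right = new Node("4");
        System.out.println(inOrder(mul)); // → 2+3*4
    }
}
```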

Now that we have an idea of how in-order traversal works on a binary tree, we can flexibly adapt it to another type of traversal: pre-order traversal. In pre-order traversal, we explore the tree as follows:

- Visit the node
- Recursively explore the left subtree
- Recursively explore the right subtree

If we apply pre-order traversal to the example above, here is how the tree is sequentially visited:

`7 -> 3 -> 2 -> 6 -> 9 -> 8 -> 14`

Here we just change the order of the visit: in this traversal, the root of a subtree is always visited first, before any recursion. Here is our code for this implementation:

```
private void preOrder(Node node) {
    if (node != null) {
        System.out.println("Currently visit the node: " + node.element); // visit the current root node
        preOrder(node.left);  // recursively visits the left subtree
        preOrder(node.right); // recursively visits the right subtree
    }
}
}
```

For simplicity of usage, let’s define a public `preOrderTraversal()` method which accepts no argument:

```
public void preOrderTraversal() {
    preOrder(root);
}
```

In this public method, we simply call the `preOrder` function and pass in the root, which is an instance variable already defined in the class. Now, let's test this program:

```
public static void main(String[] args) {
    BST<Integer> tree = new BST<>();
    tree.add(7);
    tree.add(3);
    tree.add(9);
    tree.add(2);
    tree.add(6);
    tree.add(8);
    tree.add(14);
    tree.preOrderTraversal();
}
```

Output:

```
Currently visit the node: 7
Currently visit the node: 3
Currently visit the node: 2
Currently visit the node: 6
Currently visit the node: 9
Currently visit the node: 8
Currently visit the node: 14
```

The complete code of the pre-order traversal can be found here.

Pre-order traversal has many applications: the order it produces is topologically sorted, which means a parent is always processed before any of its child nodes. This property can be applied to scheduling a sequence of jobs or tasks.

Last, we examine the post-order traversal. Again, we just have to change the order of the visit; this technique can be viewed as follows:

- Recursively visit the left subtree
- Recursively visit the right subtree
- Visit the node

In contrast to pre-order traversal, the root of a subtree is always visited last, after recursively visiting the left and the right subtrees. If we take the image above as an example, the order will be as follows:

`2 -> 3 -> 4 -> 7 -> 12 -> 9 -> 6 -> 5`

```
private void postOrder(Node node) {
    if (node != null) {
        postOrder(node.left);
        postOrder(node.right);
        System.out.println("Currently visit the node: " + node.element);
    }
}
```

And afterward, we define a public method named `postOrderTraversal`:

```
public void postOrderTraversal() {
    postOrder(root);
}
```

```
public static void main(String[] args) {
    BST<Integer> bst = new BST<>();
    bst.add(5);
    bst.add(4);
    bst.add(3);
    bst.add(2);
    bst.add(6);
    bst.add(9);
    bst.add(7);
    bst.add(12);
    bst.postOrderTraversal();
}
```

Output:

```
Currently visit the node: 2
Currently visit the node: 3
Currently visit the node: 4
Currently visit the node: 7
Currently visit the node: 12
Currently visit the node: 9
Currently visit the node: 6
Currently visit the node: 5
```

The post-order traversal has several applications: it can be used to delete or free the nodes of an entire binary tree, since children are released before their parent. Post-order traversal can also generate a postfix representation of a binary tree.

In this article, we have learned about 3 types of traversal in a binary tree: pre-order, in-order, and post-order. All 3 of them can be implemented using recursion, and their common property is that they explore a subtree all the way down before moving on to a sibling, hence they are also called **Depth-First Traversals**.

Pre-order, in-order and post-order describe when the root is visited relative to its subtrees. For pre-order, the root is visited first, before any recursion. With in-order, the root is visited after the left subtree and before the right subtree. For post-order traversal, the root is visited last, in contrast to pre-order. Each one has its own perks and can be applied in many applications.

The time complexity of those 3 traversal algorithms is O(n), since each node is visited exactly once. The full implementation and explanation of BST can be found here; the code for the tree traversals covered in this article can be found here.


The post Binary Search Tree (BST) Concrete Implementation In Java appeared first on Learn To Code Together.

**Firstly, what is a binary tree?**

The binary tree is an ordered tree data structure that satisfies those properties:

- Every node has at most 2 children.
- Each child node is labeled either the left child or the right child.
- A left child precedes a right child in the order of children with respect to a node.

*(Figure: examples of a binary tree, a tree that is not a binary tree, and another binary tree.)*

Binary Search Tree is a **node-based** **binary tree** data structure which has the following properties:

- The Node’s value of the left child is always lesser than the value of its parent.
- The Node’s value of the right child is always greater than the value of its parent.
- The left and the right subtrees must also be BSTs, and there are no duplicate nodes.

*(Figure: two binary search trees, and a tree that is not a binary search tree.)*

To implement a binary search tree, we first have to consider what it might look like. Initially, we discern that in a BST, each node contains its own value and has at most 2 children. Hence, we start building a BST by creating a class named **Node**, which holds the node's data and the references to its children; for better encapsulation, we are going to use this class as a nested class:

```
public class BST<T extends Comparable<T>> {
    private class Node {
        T element;
        Node left, right;

        public Node(Node left, Node right, T element) {
            this.left = left;
            this.right = right;
            this.element = element;
        }
    }
}
```

Let’s break down what we have inside this **Node** class. We store the node's value in a variable named **element**, and each node has 2 references to its left and right children, which we store in **left** and **right**, respectively. Inside the constructor, we simply assign the values we pass in to our instance variables. Notice that here we use a generic type, which means we don't restrict what the **element** variable can hold; it can be anything that implements the Comparable interface. If instead we used **int element** or **String element**, our class could only work with the integer or String type at a time.

Once we have the **Node** class in our hands, we move outside it and make some additions to work with the tree.

```
public class BST<T extends Comparable<T>> {
    private Node root = null;
    private int nodeCount = 0;

    // Node class suppressed for brevity

    public boolean isEmpty() {
        return nodeCount == 0;
    }
}
```

Here we create 2 variables: one named **root** for the start of our tree, and another, **nodeCount**, for counting the number of nodes in the tree. A new method, **isEmpty()**, has appeared to check whether the tree is empty.

Here are the operations we are going to implement:

- **add**(value): adds a value to the tree.
- **display**(): shows all the current nodes' values of the tree.
- **find**(node, value): finds whether the tree contains a certain node.
- **findMin**(node): finds the smallest value in the tree.
- **findMax**(node): finds the largest value in the tree.

The first thing after these initial steps is finding a way to add a node to the tree. To add a new node without breaking the BST invariant, we need to keep a few things in mind:

- If the value we are going to insert is **less** than the current node's value, go to the **left child** of the current node.
- If the value we are going to insert is **greater** than the current node's value, go to the **right child** of the current node.
- When the current node is **null**, we have reached a **leaf** position, so **insert** the new node there.

Here we insert a new node using recursion, but you can also use an iterative style if you want. And remember, the new node is always added at a leaf of the tree.

```
private Node insert(Node node, T value) {
    /** We found the leaf position here, then insert the value at this position. */
    if (node == null) {
        node = new Node(null, null, value);
    }
    /** If the value we pass in is less than the current node's value, then we go to the left. */
    else if (value.compareTo(node.element) < 0) {
        node.left = insert(node.left, value);
    }
    /** If the value we pass in is greater than the current node's value, then we go to the right. */
    else if (value.compareTo(node.element) > 0) {
        node.right = insert(node.right, value);
    }
    /** If the value already exists in the tree, return the node unchanged (no duplicates allowed). */
    else {
        return node;
    }
    /** Finally return the node after finishing the insertion */
    return node;
}
```

In the code fragment above, we finally return **node**. But is it the node we just inserted, or the whole tree starting from the root? The answer is that the outermost call returns the binary search tree starting from the root, after adding the new value at the correct position.

The **insert()** method here is labeled as private, hence it intentionally just handles the work inside the BST class. Here we have to create a public method named **add()**:

```
public void add(T value) {
    root = insert(root, value); // inserts the value at the correct position by calling the insert method
    nodeCount++;                // increases the size of the tree
}
```

The time complexity of the **insert()** method is O(log n) on average, because each time we descend the tree, the number of remaining nodes we examine is cut roughly in half. However, the worst case is O(n); we will talk more about it in upcoming articles in this series.

Next, we are going to create a method that traverses the tree and displays the nodes' values in ascending order.

```
public void display(Node node) {
    if (node == null) {
        return;
    }
    /** recursively calls display method on the left sub-tree */
    display(node.left);
    /** Visit the current node */
    System.out.println("The current node value is " + node.element);
    /** recursively calls display method for the right subtree. */
    display(node.right);
}
```

Basically, this method will help us print the values of the tree in the ascending order. The left node will be recursively visited first, then we print the value of the root, and then we recursively traverse the right subtree.

For example, in the graph above, we first start with the root node **F**. Because this node is not null, we call the recursive function on its left child, so **display(node.left)** becomes **display(F.left)**, which is **B**. This continues until we reach null: the descent stops at **A**, and we print this node's value. After printing the value at node **A**, we move back to node **B**, which was previously called but not yet finished, print its value, then move to its right child **(B.right)**, node **D**. We continue down to node **C**, print the value at node **C**, and repeat the same steps as before.

The process of traversing the tree and displaying the values in ascending order is called **In-order traversal**. However, we will have another intensive article to talk about this and some other tree traversal methods. The time complexity of this method is O(n).

Let’s make some tests real quick:

```
public static void main(String[] args) {
    BST<Integer> binarySearchTree = new BST<>();
    binarySearchTree.add(12);
    binarySearchTree.add(4);
    binarySearchTree.add(13);
    binarySearchTree.add(7);
    binarySearchTree.add(28);
    binarySearchTree.add(72);
    binarySearchTree.add(1);
    binarySearchTree.display(binarySearchTree.root);
}
```

Output:

```
The current node value is 1
The current node value is 4
The current node value is 7
The current node value is 12
The current node value is 13
The current node value is 28
The current node value is 72
```

The work we do here is pretty much equivalent to our **insert** method. But instead of inserting a new element, it recursively traverses the tree, searching for the value we are looking for, and returns either **true** or **false** as appropriate.

```
private boolean contains(Node node, T value) {
    /** returns false if we cannot find the value in the tree. */
    if (node == null) return false;
    /** if the two values are equal, we found it! Return true. */
    if (value.compareTo(node.element) == 0) {
        return true;
    }
    /** Otherwise, recursively go left or right to search for the value. */
    return value.compareTo(node.element) < 0 ? contains(node.left, value) : contains(node.right, value);
}

public boolean containValue(T value) {
    return contains(root, value);
}
```

Let’s clarify this: the absence of one or more base cases in a recursive method is not acceptable, so we first create a base case that checks whether the current node is null, which indicates the value we are searching for is not in the tree, and returns false. Next, we compare the value with the current node's value: if it's less, go left; if it's greater, go right. The recursive steps run over and over until either we find the value in the tree or we determine it isn't there. The time complexity of this method is O(log n) on average, and O(n) in the worst case.
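The same search logic can be tried in isolation with a small, self-contained sketch (the class, an int-based `Node`, and the tree shape below are mine, not part of the article's generic BST):

```java
public class SearchDemo {
    // A plain int-valued BST node, enough to demonstrate the search
    public static class Node {
        int val;
        Node left, right;
        Node(int v) { val = v; }
    }

    // Same shape as the article's contains(): base case, equality check, then descend
    public static boolean contains(Node n, int value) {
        if (n == null) return false;            // reached a null link: not found
        if (value == n.val) return true;        // found it
        return value < n.val ? contains(n.left, value) : contains(n.right, value);
    }

    public static void main(String[] args) {
        Node root = new Node(12);
        root.left = new Node(4);
        root.right = new Node(28);
        root.left.right = new Node(7);
        System.out.println(contains(root, 7));  // → true
        System.out.println(contains(root, 99)); // → false
    }
}
```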

We now take a look at two methods to find the smallest and largest values of the tree, respectively. We already know that a BST is an ordered tree whose nodes are placed in sorted order. Hence, the smallest value is always in the leftmost position and the largest value is always in the rightmost position of the tree. The time complexity of both methods is O(log n) on average, and O(n) for a degenerate tree.

```
/** follows one direction, to the left, for the smallest value. */
public T smallest(Node node) {
    while (node.left != null) {
        node = node.left;
    }
    return node.element;
}

/** goes to the rightmost position to find the largest one. */
public T largest(Node node) {
    while (node.right != null) {
        node = node.right;
    }
    return node.element;
}
```

Finally, to ring the curtain down, here is the complete code of our Binary Search Tree implementation. If you enjoy reading this post, please consider subscribing to my newsletter to update more articles in this series. This code is also available on Github.

```
public class BST<T extends Comparable<T>> {
    private Node root = null;
    private int nodeCount = 0;

    private class Node {
        T element;
        Node left, right;

        public Node(Node left, Node right, T element) {
            this.left = left;
            this.right = right;
            this.element = element;
        }
    }

    public boolean isEmpty() {
        return nodeCount == 0;
    }

    private Node insert(Node node, T value) {
        /** We found a leaf position here, so insert the value at this position. */
        if (node == null) {
            node = new Node(null, null, value);
        }
        /** If the value we pass in is less than the current node's value, go to the left. */
        else if (value.compareTo(node.element) < 0) {
            node.left = insert(node.left, value);
        }
        /** If the value we pass in is greater than the current node's value, go to the right. */
        else if (value.compareTo(node.element) > 0) {
            node.right = insert(node.right, value);
        }
        /** If the value already exists in the tree, return the node unchanged. */
        else {
            return node;
        }
        /** Finally return the node after finishing the insertion. */
        return node;
    }

    public void add(T value) {
        /** Ignore duplicates so nodeCount stays accurate. */
        if (containValue(value)) return;
        root = insert(root, value);
        nodeCount++;
    }

    public void display(Node node) {
        if (node == null) {
            return;
        }
        /** recursively call display on the left subtree */
        display(node.left);
        /** visit the current node */
        System.out.println("The current node value is " + node.element);
        /** recursively call display on the right subtree */
        display(node.right);
    }

    private boolean contains(Node node, T value) {
        /** returns false if we cannot find the value in the tree. */
        if (node == null)
            return false;
        /** if the two values are equal, we found it! Return true. */
        if (value.compareTo(node.element) == 0) {
            return true;
        }
        /** Otherwise, recursively go left or right to search for the value. */
        return value.compareTo(node.element) < 0 ? contains(node.left, value) : contains(node.right, value);
    }

    public boolean containValue(T value) {
        return contains(root, value);
    }

    /** walks left all the way down to the smallest value. */
    public T smallest(Node node) {
        while (node.left != null) {
            node = node.left;
        }
        return node.element;
    }

    /** goes to the rightmost position to find the largest one. */
    public T largest(Node node) {
        while (node.right != null) {
            node = node.right;
        }
        return node.element;
    }

    public static void main(String[] args) {
        BST<Integer> binarySearchTree = new BST<>();
        binarySearchTree.add(12);
        binarySearchTree.add(4);
        binarySearchTree.add(13);
        binarySearchTree.add(7);
        binarySearchTree.add(28);
        binarySearchTree.add(72);
        binarySearchTree.add(1);
        System.out.println("Is 12 in the tree? " + binarySearchTree.containValue(12));
        System.out.println("Is 47 in the tree? " + binarySearchTree.containValue(47));
        binarySearchTree.display(binarySearchTree.root);
        System.out.println("The smallest value in the tree is: " + binarySearchTree.smallest(binarySearchTree.root));
        System.out.println("The largest value in the tree is: " + binarySearchTree.largest(binarySearchTree.root));
    }
}
```

Output:

```
Is 12 in the tree? true
Is 47 in the tree? false
The current node value is 1
The current node value is 4
The current node value is 7
The current node value is 12
The current node value is 13
The current node value is 28
The current node value is 72
The smallest value in the tree is: 1
The largest value in the tree is: 72
```

The post Binary Search Tree (BST) Concrete Implementation In Java appeared first on Learn To Code Together.

The post Reverse Linked List (LeetCode) – Iterative and recursive implementation appeared first on Learn To Code Together.

If you want to take a try before we jump into the solution, feel free to follow this link to do it on LeetCode.

Given a singly linked list, reverse it. For example:

```
Input:  1->2->3->4->5->NULL
Output: 5->4->3->2->1->NULL
```

This question can be solved either iteratively or recursively, and we’re going to do both in JavaScript.

The procedure to solve this problem with the iterative method can be constructed as follows:

- First, initialize one pointer, `prev`, to NULL, and another variable, `curr`, pointing to head.
- Inside the loop, create a variable `nextNode` which is assigned the pointer of the next node on each iteration, starting from the head.
- Then assign the next pointer of `curr` to `prev` (that is, `curr.next = prev`).
- Assign the value of `curr` to the `prev` variable.
- After that, assign the value of `nextNode` to `curr`.
- Once `curr` moves past the last element of the list, terminate the loop and return `prev`.

Now let’s do it in JavaScript. First, we create a function `reverseList` which takes one parameter, `head`. Inside this function, we create 2 variables: `prev`, with the value of NULL, and `curr`, to which the `head` value is assigned.

```
/**
 * Definition for singly-linked list.
 * function ListNode(val) {
 *     this.val = val;
 *     this.next = null;
 * }
 */
/**
 * @param {ListNode} head
 * @return {ListNode}
 */
var reverseList = function(head) {
    let prev = null;
    let curr = head;
};
```

The code in the comment is the definition of a singly linked list; it works as a constructor when we create a new node inside the `reverseList` function.

Now we create a while loop and do the actual work inside it. We create a new variable `nextNode` which points to the next node of the current node on each iteration. Then we rewire the pointers: `curr.next` normally points to the node after the current one, but we change it to point to the `prev` node; next, we assign `curr` to `prev`, and `curr` moves on to `nextNode`. The loop terminates when `curr` reaches NULL:

```
while (curr != null) {
    let nextNode = curr.next;
    curr.next = prev;
    prev = curr;
    curr = nextNode;
}
```

And when we’re done, all we need to do is return `prev`, which points to the reversed list. The complete code is shown below:

```
var reverseList = function(head) {
    let prev = null;
    let curr = head;
    while (curr != null) {
        let nextNode = curr.next;
        curr.next = prev;
        prev = curr;
        curr = nextNode;
    }
    return prev;
};
```

If it still seems a little ambiguous at this point, let’s clarify it a bit more. Suppose we have a singly linked list with the structure `1 -> 2 -> 3 -> NULL` and want to reverse it to `3 -> 2 -> 1 -> NULL`:

```
prev = null
curr = head

On the first iteration:
nextNode = curr.next  <=>  nextNode = 2
curr.next = prev      <=>  curr.next = NULL (it pointed to 2; now it points backward to NULL)
prev = curr           <=>  prev = 1
curr = nextNode       <=>  curr = 2

After the first iteration, we have:
nextNode = 2, curr.next = NULL, prev = 1, curr = 2 (curr is now the second node)

On the second iteration:
nextNode = curr.next  <=>  nextNode = 3
curr.next = prev      <=>  curr.next = 1 (it pointed to the third node; now it points back to the first)
prev = curr           <=>  prev = 2
curr = nextNode       <=>  curr = 3

After the second iteration, we have:
nextNode = 3, curr.next = 1, prev = 2, curr = 3 (curr is now the third node)

On the third iteration:
nextNode = curr.next  <=>  nextNode = NULL
curr.next = prev      <=>  curr.next = 2 (it pointed to NULL; now it points to the second node)
prev = curr           <=>  prev = 3
curr = nextNode       <=>  curr = NULL

After the third iteration, we have:
nextNode = NULL, curr.next = 2, prev = 3, curr = NULL

On the fourth iteration:
The condition curr != NULL is no longer true (we have passed the last node), so the loop terminates.
```

The recursive approach is a little trickier than the iterative method, but the general idea is as follows:

- Start from the head and assign it to the `curr` variable.
- If `curr` is NULL, return `curr`.
- Create a variable that acts as a pointer and make the recursive call on the node after `curr`.
- If `curr.next` is NULL, it is the last node of our list, so we make it the head of the reversed list, since we are reversing the linked list.
- We recursively traverse the list.
- Set `curr.next.next` to `curr`.
- Set `curr.next` to NULL.

To help us better envision it, let’s take a look at a visualized example before we get into the code. Suppose that we have a singly linked list `1->2->3->4->NULL` and want it to be `4->3->2->1->NULL` after our reverse function:

For the first recursive call, we set `curr` as head, then check whether `curr` and `curr.next` are null. Because neither of them is null, we make another recursive call:

On the second recursive call, `curr` moves to the second node, and we observe that `curr` and `curr.next` are both still not null, so we proceed with another recursive call:

The third recursive call is pretty much the same as the first and second ones: now `curr` is the third node, and `curr` and `curr.next` are still not null:

On this recursive call, we see something different. The `curr` node is the last node, because `curr.next` is null, so it now becomes our new head, as in our base condition:

After the fifth recursive call, our head is the fourth node. We recursively traverse back, making the third node `curr`; we also set `curr.next.next`, which initially pointed to null in this picture, to point back to `curr` itself, and set `curr.next` to null.

On the sixth recursive call, things go pretty much like the last one: the last node now points to the third node, and in a similar manner the third node now points back to the second node:

Here we go again:

And finally, we have a reversed linked list by using a recursive function:

The code can be translated pretty much the same as we have discussed above:

```
/**
 * Definition for singly-linked list.
 * function ListNode(val) {
 *     this.val = val;
 *     this.next = null;
 * }
 */
/**
 * @param {ListNode} head
 * @return {ListNode}
 */
var reverseList = function(head) {
    let curr = head;
    if (curr == null || curr.next == null) return curr;
    let pointer = reverseList(curr.next);
    curr.next.next = curr;
    curr.next = null;
    return pointer;
};
```

First, we define `curr` as head. Then we have our base case: if `curr` is null or `curr.next` is null, we return `curr`. We have another variable, `pointer`, which makes the recursive calls consecutively until the base case is met. Once `curr.next` is null, meaning we have reached the end of our list, we perform the next steps: set `curr.next.next` to `curr` and `curr.next` to null. Finally, we return the variable `pointer`, which holds the head of the reversed list.


The post Top 150 best practice LeetCode’s problems sorted by difficulties appeared first on Learn To Code Together.

**Easy**

- **Two Sum**
- **Maximum Subarray**
- **Valid Parentheses**
- **Best Time to Buy and Sell Stock**
- **House Robber**
- **Reverse Linked List**
- **Single Number**
- **Merge Two Sorted Lists**
- **Climbing Stairs**
- **Symmetric Tree**
- **Intersection of Two Linked Lists**
- **Reverse Integer**
- **Move Zeroes**
- **Path Sum III**
- **Min Stack**
- **Invert Binary Tree**
- **Merge Two Binary Trees**
- **Majority Element**
- **Palindrome Linked List**
- **Find All Numbers Disappeared in an Array**
- **Linked List Cycle**
- **Remove Duplicates from Sorted Array**
- **Diameter of Binary Tree**
- **Shortest Unsorted Continuous Subarray**
- **Rotate Array**
- **Longest Common Prefix**
- **Palindrome Number**
- **Maximum Depth of Binary Tree**
- **Jewels and Stones**
- **Search Insert Position**
- **Balanced Binary Tree**
- **Convert Sorted Array to Binary Search Tree**
- **Subtree of Another Tree**
- **Roman to Integer**
- **Convert BST to Greater Tree**
- **Same Tree**
- **Merge Sorted Array**
- **Hamming Distance**
- **Lowest Common Ancestor of a Binary Search Tree**
- **Best Time to Buy and Sell Stock II**
- **Count Primes**
- **Min Cost Climbing Stairs**
- **Trim a Binary Search Tree**
- **Non-decreasing Array**
- **Island Perimeter**
- **First Unique Character in a String**
- **Add Binary**
- **Path Sum**
- **Longest Univalue Path**
- **Happy Number**

**Medium**

- **Longest Substring Without Repeating Characters**
- **Add Two Numbers**
- **3Sum**
- **Longest Palindromic Substring**
- **Container With Most Water**
- **Generate Parentheses**
- **Number of Islands**
- **Search in Rotated Sorted Array**
- **Longest Increasing Subsequence**
- **Find the Duplicate Number**
- **Product of Array Except Self**
- **Word Break**
- **Merge Intervals**
- **Letter Combinations of a Phone Number**
- **Subarray Sum Equals K**
- **Maximum Product Subarray**
- **Permutations**
- **Combination Sum**
- **Validate Binary Search Tree**
- **Jump Game**
- **Kth Largest Element in an Array**
- **Subsets**
- **Lowest Common Ancestor of a Binary Tree**
- **Coin Change**
- **Course Schedule**
- **Word Search**
- **Next Permutation**
- **Remove Nth Node From End of List**
- **Unique Binary Search Trees**
- **Construct Binary Tree from Preorder and Inorder Traversal**
- **Group Anagrams**
- **Find First and Last Position of Element in Sorted Array**
- **Sort Colors**
- **Binary Tree Inorder Traversal**
- **Copy List with Random Pointer**
- **Task Scheduler**
- **Decode String**
- **Search a 2D Matrix II**
- **Unique Paths**
- **Rotate Image**
- **Word Ladder**
- **Top K Frequent Elements**
- **Implement Trie (Prefix Tree)**
- **Binary Tree Level Order Traversal**
- **Find All Anagrams in a String**
- **Flatten Binary Tree to Linked List**
- **Queue Reconstruction by Height**
- **Meeting Rooms II**
- **Sort List**
- **House Robber III**

**Hard**

- **Median of Two Sorted Arrays**
- **Trapping Rain Water**
- **Regular Expression Matching**
- **Merge k Sorted Lists**
- **Minimum Window Substring**
- **Edit Distance**
- **Largest Rectangle in Histogram**
- **Longest Valid Parentheses**
- **Longest Consecutive Sequence**
- **First Missing Positive**
- **Sliding Window Maximum**
- **Binary Tree Maximum Path Sum**
- **Serialize and Deserialize Binary Tree**
- **Maximal Rectangle**
- **Remove Invalid Parentheses**
- **Burst Balloons**
- **Find Median from Data Stream**
- **Jump Game II**
- **Word Search II**
- **Count of Smaller Numbers After Self**
- **Reverse Nodes in k-Group**
- **The Skyline Problem**
- **Best Time to Buy and Sell Stock III**
- **Wildcard Matching**
- **Longest Increasing Path in a Matrix**
- **Word Ladder II**
- **Word Break II**
- **N-Queens**
- **Sudoku Solver**
- **Binary Tree Postorder Traversal**
- **Alien Dictionary**
- **Split Array Largest Sum**
- **Insert Interval**
- **Recover Binary Search Tree**
- **Basic Calculator**
- **Best Time to Buy and Sell Stock IV**
- **Interleaving String**
- **Palindrome Pairs**
- **LFU Cache**
- **Remove Duplicate Letters**
- **Dungeon Game**
- **Expression Add Operators**
- **Distinct Subsequences**
- **Smallest Range Covering Elements from K Lists**
- **Shortest Palindrome**
- **Palindrome Partitioning II**
- **Russian Doll Envelopes**
- **Longest Substring with At Most K Distinct Characters**
- **Robot Room Cleaner**

Above, I have listed 150 of the best LeetCode coding questions to practice, from easy to hard, ranked by the number of upvotes per question in decreasing order. If needed, this repository has solutions for most of the problems above: *https://github.com/haoel/leetcode*. Hope you find time to practice some of them. Happy coding!


The post The Algorithm Design Manual – 2nd Edition (free download) appeared first on Learn To Code Together.

This volume helps take some of the “mystery” out of identifying and dealing with key algorithms. Drawing heavily on the author’s own real-world experiences, the book stresses design and analysis. Coverage is divided into two parts, the first being a general guide to techniques for the design and analysis of computer algorithms. The second is a reference section, which includes a catalog of the 75 most important algorithmic problems. By browsing this catalog, readers can quickly identify what the problem they have encountered is called, what is known about it, and how they should proceed if they need to solve it. This book is ideal for the working professional who uses algorithms on a daily basis and has a need for a handy reference. This work can also readily be used in an upper-division course or as a student reference guide.

THE ALGORITHM DESIGN MANUAL comes with a CD-ROM that contains:

- a complete hypertext version of the full printed book,
- the source code and URLs for all cited implementations,
- over 30 hours of audio lectures on the design and analysis of algorithms, all keyed to on-line lecture notes.


The post Boosting your coding skills to the next level with these 8 awesome coding sites appeared first on Learn To Code Together.

LeetCode is the first site I want to introduce to you if you want to dig into data structures and algorithms. This website has a really nice user interface and covers many topics from basic to advanced, organized into many essential categories. With up to 1200 problems, each labeled EASY, MEDIUM, or HARD, it gives you a variety of options to choose the problem that’s right for you. Basically, it covers all the essential aspects one might encounter in a technical interview: data structures, algorithms, common brain teasers, dynamic programming, backtracking, linked lists, etc. Many problems on this website come with solutions in case you get stuck, and each solution usually has two approaches: a naive one and an efficient one. Besides, you can join the contests held weekly, or take mock tests built from a collection of real company questions to test your ability. There are also sections such as Top 100 Liked Questions and Top Interview Questions that may interest you. Last but not least, LeetCode has a wonderful community, which can be found at Home | LeetCode Discuss. There, people share sincerely about their coding and interview experiences, so you can see that you are not alone: most questions in your head, silly or not, have probably already been discussed there.

This is another great site that I usually visit to practice algorithms. CodeSignal also has a great user interface. At first, you can explore the Arcade universe there, which covers 5 sections: Intro, Database, The Core, Python, and Graph; many interesting and comprehensive problems are waiting there for you to discover. Personally, I often solve problems from the Challenges section, which brings you a lot of problems with a specific duration of time to solve them; when you solve a problem from this section in time, your name is displayed on the question’s leaderboard along with the many other programmers who have solved it. The interesting part of this website is the variety of languages you can use to solve a problem: unlike LeetCode, which only lets you choose among 8-10 popular programming languages, CodeSignal supports about 38, so you can easily pick your favorite one, whether you prefer imperative, functional, or procedural styles, from C and Java to Haskell, Lisp, Clojure, etc. There are also Interview Practice and Company Challenges sections which let you practice questions from real companies such as Google, Apple, Microsoft, and LinkedIn, making you more comfortable for your next real interview. CodeSignal also has a great community where you can find a solution to your problem or share your personal programming experience.

If you have been through CodeSignal and LeetCode, moving to this site can be a little overwhelming, because the user interface is quite different. It seems the creators of this site are obsessed with martial arts. After logging in, you can choose your favorite programming languages (up to 46 languages — even more than CodeSignal supports!) to solve the puzzles, whose difficulty depends on your level. You start at 8 **kyu**, the newbie level, and the highest level is 1 **kyu**, which I can guarantee will take much time, effort, and intelligence to reach. When you finish each puzzle, your solution is recorded and you gain more **Honor**; at certain points, your honor raises your kyu level. There are plenty of modes you can choose from: Fundamentals, Rank Up, Practice & Repeat, Random, and Beta. And of course, the Codewars community is also strong and great.

HackerRank is a great place for competitive programming, with four main sections: **Practice**, **Compete**, **Jobs**, and **Leaderboard**. In the first section, **Practice**, you will be immersed in a huge collection of challenges and tutorials: you can practice C, C++, Java, Python, Functional Programming, SQL, Databases, Mathematics, Data Structures, and even Artificial Intelligence. There are also a lot of great tutorials in this section, such as the Interview Preparation Kit, 30 Days of Code, 10 Days of JavaScript, etc. They give you problems each day if you follow one of those tutorials, and in case you forget, they typically send you an email. Personally, I use HackerRank to practice functional programming (which is tough) and to solve algorithm problems. However, if you are a newbie, I recommend you start with some basics first; reading a book is beneficial before jumping into solving problems on HackerRank, since I think most of the problems are hard and take time to solve. In the **Compete** section, you will see contests held monthly; each contest contains several problems, usually designed at medium level or above, tough enough to challenge you, and through those contests you can prove your problem-solving ability. Next is the **Jobs** section, where you can find your dream job among a variety of available options across many different roles and locations. Finally, the **Leaderboard** section shows where the top users stand in contests.

HackerEarth also has a nice user interface. Founded in India in 2012, seven years later it is considered one of the most popular sites for people who want to practice and solve algorithms. You can practice almost anything for your next interview here: the site covers programming languages, math, machine learning, data structures, and algorithms, and a huge number of competitions and hiring challenges are available that can serve as stepping stones in your career path.

CodeChef is another India-based competitive programming website that provides hundreds of challenges. CodeChef is a great place for newbie and intermediate users: plenty of easy questions, editorials, and a discussion forum to resolve your doubts. Long contests are a great way to start off your coding career and learn cool new things. Problems on CodeChef are organized into categories, namely Beginner, Easy, Medium, etc., and you can sort problems inside each category from most solved to least solved. This gives beginners some confidence in problem-solving and a feel for online judges.

When I finally get stumped by a problem on the sites mentioned above, I usually search online to check whether anybody has written a tutorial about it, and GeeksForGeeks is like my savior that always appears in front of my eyes. For many, many problems, including the ones I am frustrated with, they usually give you a full explanation of how it works and how it can be implemented in several distinct programming languages such as C, C++, Java, and Python. This website was also created by some Indian developers (I guess); they mainly write tutorials for students who struggle with algorithms and much other stuff, and you can come here to learn programming languages and practice interview questions as well. If you are preparing for your next coding interview, GeeksForGeeks is definitely a handy site to go along with.

Finally, at the end of the list, CodeForces is the place where real competitive (sport) programmers come to compete. **Codeforces** is a website that hosts competitive programming contests; it is maintained by a group of competitive programmers from ITMO University. Problems on this site often seem advanced and require particular knowledge to solve, such as Trees, Linked Lists, Hash Tables, Breadth-First Search, etc. Contests on this site are held weekly and are usually rated, and the top users of this site are considered very smart and talented. You can easily recognize top-rated competitive programmers by their names shown in red, which marks an **International Grandmaster**, or with the first letter in black and the rest in red, a **Legendary Grandmaster** (the highest rating). When you think you have spent enough time practicing, joining some contests on this site is one of the best ways to prove your intelligence and problem-solving skills.


The post A Quick Introduction of the Top Data Structures for Your Programming Career & Next Coding Interview appeared first on Learn To Code Together.

Niklaus Wirth, a Swiss computer scientist, wrote a book in 1976 titled *Algorithms + Data Structures = Programs.*

40+ years later, that equation still holds true. That’s why software engineering candidates have to demonstrate their understanding of data structures along with their applications.

Almost all problems require the candidate to demonstrate a deep understanding of data structures. It doesn’t matter whether you have just graduated (from a university or coding Bootcamp), or you have decades of experience.

Sometimes interview questions explicitly mention a data structure, for example, “given a binary tree.” Other times it’s implicit, like “we want to track the number of books associated with each author.”

Learning data structures is essential even if you’re just trying to get better at your current job. Let’s start with understanding the basics.

Simply put, a data structure is a container that stores data in a specific layout. This “layout” allows a data structure to be efficient in some operations and inefficient in others. Your goal is to understand data structures so that you can pick the data structure that’s most optimal for the problem at hand.

As data structures are used to store data in an organized form, and since data is the most crucial entity in computer science, the true worth of data structures is clear.

No matter what problem are you solving, in one way or another you have to deal with data — whether it’s an employee’s salary, stock prices, a grocery list, or even a simple telephone directory.

Based on different scenarios, data needs to be stored in a specific format. We have a handful of data structures that cover our need to store data in different formats.

Let’s first list the most commonly used data structures, and then we’ll cover them one by one:

- Arrays
- Stacks
- Queues
- Linked Lists
- Trees
- Graphs
- Tries (they are effectively trees, but it’s still good to call them out separately).
- Hash Tables

An array is the simplest and most widely used data structure. Other data structures like stacks and queues are derived from arrays.

Here’s an image of a simple array of size 4, containing elements (1, 2, 3 and 4).

Each data element is assigned a numerical value called its **index**, which corresponds to the position of that item in the array. The majority of languages define the starting index of the array as 0.

The following are the two types of arrays:

- One-dimensional arrays (as shown above)
- Multi-dimensional arrays (arrays within arrays)

- Insert — Inserts an element at the given index
- Get — Returns the element at the given index
- Delete — Deletes an element at the given index
- Size — Get the total number of elements in the array
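To make those operations concrete, here is a minimal fixed-capacity sketch in Java (the `SimpleArray` wrapper and its manual element counter are illustrative, not a standard API):

```java
// Minimal fixed-capacity array wrapper -- an illustrative sketch, not a library API.
public class SimpleArray {
    private int[] data;
    private int size = 0;

    public SimpleArray(int capacity) { data = new int[capacity]; }

    // Insert -- places an element at the given index, shifting later elements right.
    public void insert(int index, int value) {
        for (int i = size; i > index; i--) data[i] = data[i - 1];
        data[index] = value;
        size++;
    }

    // Get -- returns the element at the given index.
    public int get(int index) { return data[index]; }

    // Delete -- removes the element at the given index, shifting later elements left.
    public void delete(int index) {
        for (int i = index; i < size - 1; i++) data[i] = data[i + 1];
        size--;
    }

    // Size -- the total number of elements currently stored.
    public int size() { return size; }
}
```

In practice, Java code would normally reach for `java.util.ArrayList`, which grows automatically and offers the same operations.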

- Find the second minimum element of an array
- First non-repeating integers in an array
- Merge two sorted arrays
- Rearrange Positive and negative values in an array

We are all familiar with the famous **Undo** option, which is present in almost every application. Ever wondered how it works? The idea: you store the previous states of your work (which are limited to a specific number) in the memory in such an order that the last one appears first. This can’t be done just by using arrays. That is where the Stack comes in handy.


A real-life example of Stack could be a pile of books placed in a vertical order. In order to get the book that’s somewhere in the middle, you will need to remove all the books placed on top of it. This is how the LIFO (Last In First Out) method works.

Here’s an image of the stack containing three data elements (1, 2 and 3), where 3 is at the top and will be removed first:

Basic operations of the stack:

- Push — Inserts an element at the top
- Pop — Returns the top element after removing from the stack
- isEmpty — Returns true if the stack is empty
- Top — Returns the top element without removing from the stack
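In Java, these stack operations map directly onto `java.util.ArrayDeque`; here is a small sketch of the undo idea (the editor-state strings are made up for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class UndoDemo {
    public static void main(String[] args) {
        Deque<String> history = new ArrayDeque<>();
        history.push("typed 'hello'");   // Push -- insert at the top
        history.push("deleted a word");  // the most recent state sits on top
        System.out.println(history.peek());    // Top -- look at the top without removing it
        System.out.println(history.pop());     // Pop -- remove and return the top (undo)
        System.out.println(history.isEmpty()); // false: one earlier state remains
    }
}
```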

- Evaluate postfix expression using a stack
- Sort values in a stack
- Check balanced parentheses in an expression

Similar to Stack, Queue is another linear data structure that stores the element in a sequential manner. The only significant difference between Stack and Queue is that instead of using the LIFO method, Queue implements the FIFO method, which is short for First in First Out.

A perfect real-life example of Queue: a line of people waiting at a ticket booth. If a new person comes, they will join the line from the end, not from the start — and the person standing at the front will be the first to get the ticket and hence leave the line.

Here’s an image of Queue containing four data elements (1, 2, 3 and 4), where 1 is at the top and will be removed first:

- Enqueue() — Inserts element to the end of the queue
- Dequeue() — Removes an element from the start of the queue
- isEmpty() — Returns true if queue is empty
- Top() — Returns the first element of the queue
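The same `ArrayDeque` class serves as a FIFO queue when used through the `Queue` interface; a sketch of the ticket line (the names are invented for illustration):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class TicketLine {
    public static void main(String[] args) {
        Queue<String> line = new ArrayDeque<>();
        line.add("Alice");               // Enqueue -- joins at the end of the line
        line.add("Bob");
        System.out.println(line.peek()); // Top -- first in line, not removed
        System.out.println(line.poll()); // Dequeue -- Alice gets her ticket and leaves
        System.out.println(line.peek()); // Bob is now at the front
    }
}
```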

- Implement stack using a queue
- Reverse first k elements of a queue
- Generate binary numbers from 1 to n using a queue

A linked list is another important linear data structure that might look similar to arrays at first but differs in memory allocation, internal structure and how basic operations of insertion and deletion are carried out.

A linked list is like a chain of nodes, where each node contains information like data and a pointer to the succeeding node in the chain. There’s a head pointer, which points to the first element of the linked list, and if the list is empty then it simply points to null or nothing.

Linked lists are used to implement file systems, hash tables, and adjacency lists.

Here’s a visual representation of the internal structure of a linked list:

Following are the types of linked lists:

- Singly Linked List (Unidirectional)
- Doubly Linked List (Bi-directional)

- *InsertAtEnd* — Inserts a given element at the end of the linked list
- *InsertAtHead* — Inserts a given element at the start/head of the linked list
- *Delete* — Deletes a given element from the linked list
- *DeleteAtHead* — Deletes the first element of the linked list
- *Search* — Returns the given element from the linked list
- *isEmpty* — Returns true if the linked list is empty
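A minimal singly linked list supporting a few of these operations might look like this in Java (class and method names are illustrative, not a library API):

```java
// A tiny singly linked list of ints -- an illustrative sketch.
public class IntList {
    private static class Node {
        int data;
        Node next;
        Node(int data, Node next) { this.data = data; this.next = next; }
    }

    private Node head;  // points to the first element; null when the list is empty

    // InsertAtHead -- the new node simply becomes the head.
    public void insertAtHead(int value) { head = new Node(value, head); }

    // InsertAtEnd -- walk to the last node and append.
    public void insertAtEnd(int value) {
        if (head == null) { head = new Node(value, null); return; }
        Node n = head;
        while (n.next != null) n = n.next;
        n.next = new Node(value, null);
    }

    // Search -- follow the next pointers until the value is found or the list ends.
    public boolean search(int value) {
        for (Node n = head; n != null; n = n.next)
            if (n.data == value) return true;
        return false;
    }

    public boolean isEmpty() { return head == null; }
}
```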

- Reverse a linked list
- Detect loop in a linked list
- Return Nth node from the end in a linked list
- Remove duplicates from a linked list

A graph is a set of nodes that are connected to each other in the form of a network. Nodes are also called vertices. A pair **(x, y)** is called an **edge**, which indicates that vertex **x** is connected to vertex **y**. An edge may carry a weight/cost, showing how much it costs to traverse from vertex x to vertex y.

Types of Graphs:

- Undirected Graph
- Directed Graph

In a programming language, graphs can be represented using two forms:

- Adjacency Matrix
- Adjacency List

Common graph traversing algorithms:

- Breadth-First Search
- Depth First Search

- Implement Breadth and Depth First Search
- Check if a graph is a tree or not
- Count number of edges in a graph
- Find the shortest path between two vertices
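The last two ideas above, Breadth-First Search and shortest paths, go together: in an unweighted graph, BFS finds the path with the fewest edges. A possible sketch, assuming the graph is stored as an adjacency list (covered later in this post):

```javascript
// BFS from `source`; returns the fewest-edges distance to `target`,
// or -1 if the target is unreachable.
function bfsShortestPath(adj, source, target) {
  const dist = { [source]: 0 };  // vertex -> distance from source
  const queue = [source];
  while (queue.length > 0) {
    const v = queue.shift();
    if (v === target) return dist[v];
    for (const w of adj[v] || []) {
      if (!(w in dist)) {        // visit each vertex at most once
        dist[w] = dist[v] + 1;
        queue.push(w);
      }
    }
  }
  return -1;
}
```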

A tree is a hierarchical data structure consisting of vertices (nodes) and edges that connect them. Trees are similar to graphs, but the key point that differentiates a tree from the graph is that a cycle cannot exist in a tree.

Trees are extensively used in Artificial Intelligence and complex algorithms to provide an efficient storage mechanism for problem-solving.

Here’s an image of a simple tree, and basic terminologies used in a tree data structure:

The following are the types of trees:

- N-ary Tree
- Balanced Tree
- Binary Tree
- Binary Search Tree
- AVL Tree
- Red Black Tree
- 2–3 Tree

Out of the above, Binary Tree and Binary Search Tree are the most commonly used trees.

- Find the height of a binary tree
- Find kth maximum value in a binary search tree
- Find nodes at “k” distance from the root
- Find ancestors of a given node in a binary tree
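The first question above has a short recursive answer. A sketch, assuming nodes are plain `{ value, left, right }` objects and height is counted in edges (so an empty tree has height -1):

```javascript
// Height of a binary tree: 1 + the taller of the two subtrees.
function height(node) {
  if (node === null) return -1;              // empty tree
  return 1 + Math.max(height(node.left), height(node.right));
}
```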

Trie, also known as a “Prefix Tree”, is a tree-like data structure that proves quite efficient for solving problems related to strings. It provides fast retrieval and is mostly used for searching words in a dictionary, providing auto-suggestions in a search engine, and even for IP routing.

Here’s an illustration of how three words “top”, “thus”, and “their” are stored in Trie:

The words are stored top to bottom, where the green-colored nodes “p”, “s”, and “r” mark the end of “top”, “thus”, and “their” respectively.

Commonly asked Trie interview questions:

- Count total number of words in Trie
- Print all words stored in Trie
- Sort elements of an array using Trie
- Form words from a dictionary using Trie
- Build a T9 dictionary
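A minimal Trie sketch (function names are mine): each node maps a character to a child node, and an `isEnd` flag marks where a stored word finishes, like the green “p”, “s”, and “r” nodes in the illustration.

```javascript
// Each trie node: children keyed by character, plus an end-of-word flag.
function makeTrieNode() {
  return { children: {}, isEnd: false };
}

function insert(root, word) {
  let node = root;
  for (const ch of word) {
    if (!node.children[ch]) node.children[ch] = makeTrieNode();
    node = node.children[ch];  // walk down, creating nodes as needed
  }
  node.isEnd = true;           // mark the last character as a word end
}

function contains(root, word) {
  let node = root;
  for (const ch of word) {
    if (!node.children[ch]) return false;
    node = node.children[ch];
  }
  return node.isEnd;           // a mere prefix does not count as a word
}
```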

Hashing is a process used to uniquely identify objects and store each object at some pre-calculated unique index called its “key.” So, the object is stored in the form of a “key-value” pair, and the collection of such items is called a “dictionary.” Each object can be searched using that key. There are different data structures based on hashing, but the most commonly used data structure is the **hash table**.

Hash tables are generally implemented using arrays.

The performance of a hashing data structure depends upon these three factors:

- Hash Function
- Size of the Hash Table
- Collision Handling Method

Here’s an illustration of how the hash is mapped in an array. The index of this array is calculated through a Hash Function.

- Find symmetric pairs in an array
- Trace complete path of a journey
- Find if an array is a subset of another array
- Check if given arrays are disjoint
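The first question above shows the hashing idea at work: a pair (a, b) is symmetric if (b, a) also appears in the array. Storing each first element as a key gives average constant-time lookups. A sketch using JavaScript's built-in `Map` (the function name is illustrative):

```javascript
// Returns every symmetric pair, reported once as [a, b] where (a, b)
// appeared before its mirror (b, a).
function symmetricPairs(pairs) {
  const seen = new Map();  // first element -> second element
  const result = [];
  for (const [a, b] of pairs) {
    if (seen.get(b) === a) result.push([b, a]);  // mirror seen earlier
    else seen.set(a, b);
  }
  return result;
}
```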

The above are the top eight data structures that you should definitely know before walking into a coding interview.

*If you are looking for resources on data structures for coding interviews, look at the interactive & challenge based courses: Data Structures for Coding Interviews (Python, Java, or JavaScript).*

*For more advanced questions, look at Coderust 3.0: Faster Coding Interview Preparation with Interactive Challenges & Visualizations.*

If you are preparing for software engineering interviews, here’s a comprehensive roadmap to prepare for coding interviews.

Good luck and happy learning!

Source: Fahim ul Haq

The post A Quick Introduction of the Top Data Structures for Your Programming Career & Next Coding Interview appeared first on Learn To Code Together.


]]>In this article, we will learn what a graph is and how it can be represented in computer science. After reading this article, you’ll understand what a graph is and some fundamental concepts around it. If you plan to keep learning algorithms for a long time, understanding graphs is essential and worth getting accustomed to early. This is a crucial concept because many algorithms used today are implemented with graphs, some of which will appear in the next article.

**A graph** is a data structure that consists of a finite set of **vertices**, also called **nodes**; each line connecting two vertices is called an **edge**. The edges of a graph are represented as ordered or unordered pairs, depending on whether the graph is **directed** or **undirected**.

More formally, in mathematics, and more specifically in graph theory, a **graph** is a structure amounting to a set of objects in which some pairs of the objects are in some sense “related”. The objects correspond to mathematical abstractions called vertices (also called *nodes* or *points*) and each of the related pairs of vertices is called an *edge*.

**Graphs** are used to solve many real-life problems, most often by representing networks: paths in a city, a telephone network, a circuit network, even the states of a Rubik’s cube. Graphs are also used in social networks such as LinkedIn and Facebook. On Facebook, for example, each person is represented by a vertex (or node); each node is a structure containing information like person id, name, gender, locale, etc.

*Figures: an Undirected Graph and a Directed Graph*

In the two graphs above, we can see a set of 6 vertices (6 nodes) representing the numbers 0 through 5; when we talk about a single one of them, such as 1, we call it a **vertex**. We can also see several edges connecting the vertices. For the undirected graph, let’s list all of its vertices and edges:

V = {0, 1, 2, 3, 4, 5}

E = {{0, 4}, {0,5}, {0, 2}, {1, 5}, {1, 4}, {2,4}, {3,2}, {4,5}}

We usually denote the set of edges as *E* and the set of vertices as *V*. Because this graph is undirected, each edge is an unordered pair.

However, in the edge set *E* of the directed graph, the directional flow matters to how the graph is structured, and therefore the edge set is represented as ordered pairs like this:

E = {(0, 4), (0,2), (0,5), (1, 4), (1, 5), (2, 3), (2, 4)}

We can easily tell the two apart: in the undirected graph each edge goes both ways and we see no direction, no arrow; in the directed graph the arrows indicate the direction of each edge, which doesn’t necessarily go both ways.

Back to the undirected graph above: from vertex 1, how many ways are there to get to vertex 5? Several, as you can see. You can go from 1 to 4 and then to 5. Or you can traverse more edges before reaching the final vertex, going from 1 to 4 to 0 and finally to 5. Each such route is called a **path**. Since the edge {1, 5} is in the edge set, going directly from 1 to 5 is the **shortest path**, traversing just one edge; the two routes above are not shortest paths because they require more edge traversals.

When a path goes from a particular vertex back to itself, we call it a **cycle**. A graph may contain many cycles; we see one of them here, starting at vertex 0: 0 -> 1 -> 2 -> 3 -> 4 -> 5 and then back to 0:

We can also see something new here: numbers placed on the edges. Each number is the **weight** of its edge, and a graph whose edges have weights is a **weighted graph**. Looking back at the first two images, there are no numbers on the edges, so those graphs are **unweighted graphs**.

In the case of a road map, if you want to find the shortest route between two locations, you need to find a route between two vertices with the minimum sum of edge weights over all paths between them. Presume we want to go from Sponge to Plankton: the **shortest path** is to go from Sponge to Patrick, then Karen, and finally to Plankton, with a total distance of 48 miles.

Now look at the image above: have you noticed that there is no cycle in this **unweighted directed graph**? Graphs like this are called **directed acyclic graphs**, or **dags** for short. We say that a directed edge **leaves** one vertex and **enters** another. Take vertex 4: one edge enters it (from vertex 3), and two edges leave it (to vertices 5 and 6). The number of edges leaving a vertex is its **out-degree**, and the number of edges entering it is its **in-degree**.

After absorbing all this terminology, now is the time for us to… relax. Just kidding: time to wonder how we can represent these graphs concretely. There are several ways to do so, each with its own advantages and disadvantages. The three most common are **edge lists**, **adjacency matrices**, and **adjacency lists**.

As mentioned above, these three representations are measured against three main factors. The first is how much memory (space) each representation needs, which can be expressed in asymptotic notation. The other two concern time: how long it takes to determine whether a given edge is in the graph, and how long it takes to find the neighbors of a given vertex.

A vertex is said to be *incident* to an edge if the edge is connected to the vertex. The first and simplest representation is the edge list: an array (or list) that simply contains the edges. To represent an edge, we just need an array of the two vertex numbers the edge is incident on, like the way we listed all the edges of the undirected graph in the first example. If the edges have weights, we add a third element to the array for the weight; each edge becomes a sub-array inside the array of all edges. Since each edge takes only two or three numbers, the total space for an edge list is Θ(|E|).

Here is how we can represent the graph below in edge list fashion:

```
var unweightedEdgeLists = [
[0, 4],
[0, 5],
[0, 2],
[1, 4],
[1, 5],
[2, 3],
[2, 4],
[4, 5]
]
```

We simply write the 2 vertices of each edge into a sub-array, all enclosed by the array containing every edge.

The example above is a weighted graph; we do the same thing as with the unweighted graph, but add a third element to each sub-array for the edge’s weight:

```
var weightedEdgeLists = [
[0, 4, 2],
[0, 5, 3],
[0, 2, 1],
[1, 4, 4],
[1, 5, 5],
[2, 3, 6],
[2, 4, 7],
[4, 5, 8]
]
```

Edge lists are simple, but if we want to find whether the graph contains a particular edge, we have to search through the edge list. If the edges appear in no particular order, that’s a linear search through all |E| edges. This leads us to the two other representations below.
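To make that cost concrete, here is a sketch of the linear scan (the function name is illustrative); for an undirected graph each edge is checked in both orientations:

```javascript
// Linear search over an edge list: O(|E|) in the worst case.
function edgeListHasEdge(edgeList, u, v) {
  for (const [a, b] of edgeList) {
    if ((a === u && b === v) || (a === v && b === u)) return true;
  }
  return false;
}

edgeListHasEdge([[0, 4], [0, 5], [2, 3]], 4, 0); // true: {0, 4} is present
```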

Here is another way to represent a graph: the **adjacency matrix**. We count the total vertices and build a |V| × |V| matrix (where V is the finite set of vertices). The adjacency matrix of an undirected graph is symmetric, like the one underneath. We list all the vertices in rows and in columns, and the entry in row *i* and column *j* is 1 only when the graph contains the edge (*i*, *j*); otherwise we put 0 in the corresponding position:

|   | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 |
| 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
| 2 | 1 | 0 | 0 | 1 | 1 | 0 | 0 |
| 3 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| 4 | 1 | 1 | 1 | 0 | 0 | 1 | 0 |
| 5 | 1 | 1 | 0 | 0 | 1 | 0 | 1 |
| 6 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |

Or you can also represent it in a particular language, such as JavaScript:

```
var adjacencyMatrices = [
[0, 0, 1, 0, 1, 1, 0],
[0, 0, 0, 0, 1, 1, 0],
[1, 0, 0, 1, 1, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[1, 1, 1, 0, 0, 1, 0],
[1, 1, 0, 0, 1, 0, 1],
[0, 0, 0, 0, 0, 1, 0]
]
```

This representation, the adjacency matrix, dominates edge lists with respect to the time needed to find out whether a given edge is in the graph: we can answer in **constant time** by querying whether `adjacencyMatrices[i][j] == 1`. For example, taking vertices 0 (as *i*) and 2 (as *j*), we know there is an edge between them by looking at the corresponding entry of the table above.
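As a tiny sketch (the function name is mine), that constant-time check is a single array access:

```javascript
// O(1) edge lookup in an adjacency matrix.
function matrixHasEdge(matrix, i, j) {
  return matrix[i][j] === 1;
}

matrixHasEdge([[0, 1], [1, 0]], 0, 1); // true: edge (0, 1) is present
```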

But this implementation has disadvantages: we have to store a matrix of |V|² elements, so it takes Θ(|V|²) space. Even if the graph is **sparse** (containing just a few edges), we still spend all that space, mostly on 0s. Look at the example below:

Here we see only 2 edges in this graph, {0, 2} and {5, 6}, but if we choose the adjacency matrix as our representation, it looks like this:

```
var sparseGraph = [
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0]
]
```

And there is another disadvantage of adjacency matrices. Presume we want to find out which vertices are adjacent to a given vertex *i*. In the example above, take the 6th row, the row of vertex 5: to check which vertices are adjacent to it, we must go through the row from beginning to end, even though only one adjacent vertex turns up, in the very last element.
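That full-row scan can be sketched as follows (the function name is illustrative): finding a vertex's neighbors in an adjacency matrix always costs Θ(|V|), no matter how few neighbors there are.

```javascript
// Neighbors of vertex v: scan v's entire row of the matrix.
function neighbors(matrix, v) {
  const result = [];
  for (let j = 0; j < matrix[v].length; j++) {
    if (matrix[v][j] === 1) result.push(j);
  }
  return result;
}

neighbors([[0, 0, 1], [0, 0, 0], [1, 0, 0]], 0); // [2]
```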

**Adjacency lists** are a combination of edge lists and adjacency matrices. For each vertex *i* in a graph, we store an array of the vertices adjacent to it (its neighbors). We typically have an array of |*V*| adjacency lists, one adjacency list per vertex. Now here is one more example:

This time we have a directed graph, and because its edges don’t go both ways, we need to represent them in the correct order; for example, in this image we have the edge (0, 1), not (1, 0):

```
var adjacencyLists = [
[1], // neighbor of vertex number 0
[3], // neighbor of vertex number 1
[3], // neighbor of vertex number 2
[4], // neighbor of vertex number 3
[1], // neighbor of vertex number 4
[1] // neighbor of vertex number 5
]
```

With the same graph but undirected, here is how we represent this graph:

```
var adjacencyLists = [
[1], // neighbor of vertex number 0
[3, 4, 5], // neighbors of vertex number 1
[3], // neighbor of vertex number 2
[1, 2, 4], // neighbors of vertex number 3
[1, 3], // neighbors of vertex number 4
[1] // neighbor of vertex number 5
]
```

Adjacency lists have several advantages. For a graph with *N* vertices and *M* edges, the memory used depends on *M*, which makes adjacency lists ideal for storing sparse graphs. Generally, for a directed graph the adjacency lists hold Θ(|E|) elements in total; for an undirected graph they hold 2|E| elements, because each edge {u, v} appears exactly twice, once in u’s list and once in v’s list, and there are |E| edges.

Let’s assume we have a directed graph with 10,000 vertices and 10,000 edges. Remember the adjacency matrix? In that case we would have to store |V|² = 100,000,000 elements; with adjacency lists, we store only 10,000.

Another advantage of adjacency lists is that we can get to each vertex’s adjacency list in **constant time**, because we just index into an array. To find whether a particular edge (u, v) is present in the graph, we first take constant time to get to u’s list, then look for v among u’s neighbors; so the lookup time depends on the **degree** of u. Indexing into the array to reach, say, vertex 5’s list does not require walking over the earlier vertices, and even if the other endpoint were the last vertex of the graph, reaching its list is still constant time; only the scan through the neighbor list costs extra.
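A minimal sketch of that lookup (the function name is mine), using the directed adjacency list from earlier in this article:

```javascript
// Testing for the directed edge (u, v): one constant-time index into
// the array, then a scan of u's neighbor list, so the cost is
// proportional to u's degree rather than to |V| or |E|.
function listHasEdge(adjacencyLists, u, v) {
  return adjacencyLists[u].includes(v);
}

// The directed adjacency list from above.
var lists = [[1], [3], [3], [4], [1], [1]];
listHasEdge(lists, 0, 1); // true: the edge (0, 1) exists
```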

The post What is Graph and its representation appeared first on Learn To Code Together.

]]>