The post Demystifying the blockchain concept – In simple English appeared first on Learn To Code Together.

The answer to this question lies in the underlying technology of this digital currency: **blockchain**. If you understand what a blockchain is and how it works, you will understand the heart of Bitcoin and many other cryptocurrencies.

As a reader, I know your time is precious, so I will try my best to demystify the blockchain concept in the simplest way possible. By the end of the article, with no esoteric jargon along the way, you will have equipped yourself with a fundamental understanding of what a blockchain is and how it relates to many cryptocurrencies, including Bitcoin.

According to Wikipedia, **blockchain** was invented by a person (or group of people) using the name Satoshi Nakamoto in 2008 to serve as the public transaction ledger of the cryptocurrency **Bitcoin**.

As the name indicates, a blockchain is a “chain” of blocks. Each block typically holds some data, such as information about a transaction, the unique hash of the block, and the hash of the previous block. It represents a sequence of immutable records: you cannot break the chain in the middle; you can only append new blocks at the end, in chronological order. Each block contains the **hash** of the previous block, hence the name “blockchain”.

To keep each block secure and connected, a strong hash function is needed, one that is practically impossible to tamper with or reverse-engineer. Typically, the SHA-256 hash function is used to compute the hash of a block. The hash of a block is computed over all the fields of that block, which include a timestamp, the id of the block, the data of the block, and the hash of the previous block. If you try to change some information in a block, the hash of that block changes completely; the “hash of the previous block” stored in the next block then no longer matches, and this ripples on until the end of the chain. With this property, we can easily detect whether the blockchain has been modified and discern whether a block is valid or invalid.
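To make this concrete, here is a minimal sketch in Python (the field names are illustrative, not Bitcoin's actual block format) showing how a block's hash is derived from all of its fields, and how changing any field produces a completely different hash:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Serialize every field of the block in a fixed order, then hash with SHA-256.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

block = {
    "id": 2,
    "timestamp": 1600217999961,
    "transaction": {"from": "userC", "to": "userD", "amount": "0.00023 BTC"},
    "prev_hash": "924d79cb0be5fef34368080a86353b9583fbdc158fb18403e8437266db270c94",
}
original = block_hash(block)

# Tampering with any field, even a single character, changes the hash entirely,
# which breaks the "prev_hash" link stored in the next block.
tampered = dict(block, transaction={"from": "userC", "to": "attacker", "amount": "0.00023 BTC"})
print(block_hash(tampered) != original)  # True
```

The same function applied to the same block always yields the same hash, which is why every participant can independently verify the chain.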

Because the first block in the blockchain has no previous block, its “hash of the previous block” field is usually 0, and this first block is called the “**genesis block**”. For now, let’s examine what a typical blockchain looks like:

```
Block:
  Id: 1
  Timestamp: 1600217999961
  Transaction: { from: 'userA', to: 'userB', amount: '0.5 BTC' }
  Hash of the previous block: 0
  Hash of the block: 924d79cb0be5fef34368080a86353b9583fbdc158fb18403e8437266db270c94

Block:
  Id: 2
  Timestamp: 1600217999961
  Transaction: { from: 'userC', to: 'userD', amount: '0.00023 BTC' }
  Hash of the previous block: 924d79cb0be5fef34368080a86353b9583fbdc158fb18403e8437266db270c94
  Hash of the block: 5ca78669e9ff97b4d03a11d34bfc1bdc7cf2a3c4218ffdc70f0bae1b221c50da

Block:
  Id: 3
  Timestamp: 1600217999961
  Transaction: { from: 'userE', to: 'userF', amount: '0.0001 BTC' }
  Hash of the previous block: 5ca78669e9ff97b4d03a11d34bfc1bdc7cf2a3c4218ffdc70f0bae1b221c50da
  Hash of the block: b80750afe9bfcb5b68d7187a8f9dbd5562036be416daa074569f96817c9c2517
```

As we can see, each block in the blockchain contains its own data and the hash of the previous block. If we change the hash of a block, for example block 2, then block 3 also has to update the stored hash of its previous block.

For a more precise look at how a typical Bitcoin transaction is constructed using the blockchain, check out this link.

As mentioned above, a blockchain needs no central intermediary. Blockchain uses a **peer-to-peer** network: digital assets like bitcoin, music, etc. are distributed across a global ledger. Everyone is welcome to join, and once a person joins the network, they get a full copy of the blockchain with all of its data, such as transactions. Once a transaction has been made and verified, it is broadcast to every computer in the network.

Here you can take a look at Bitcoin’s blockchain, which is publicly available for anyone to view. If you click on a block, you can see its transaction data, timestamp, and some other information. As you can see, the number of transactions per hour is pretty humble: on Bitcoin’s network, a new block is created only about once every 10 minutes. Why does blockchain have to work this way, and what would happen if a hacker tried to manipulate a block for fraudulent activities? Let’s find out right below.

There is one classic and strenuous problem for cryptographers and many cryptocurrencies: the “double-spending” problem. “Double-spending” means that the same digital currency can be spent more than **once**. For example, I send you 0.0005 Bitcoin, and then I use that same bitcoin again and send it back to myself or to someone else. If this problem occurs regularly, it leads to inflation of the digital currency and makes it radically worthless. How can blockchain be applied to solve this problem?

Hashing alone isn’t enough to tackle the “double-spending” problem. Let’s presume I want to manipulate a block for illicit profit: the hash of this block changes, and the hash stored in every following block becomes invalid. To make those following blocks valid again, I need to recalculate the hashes of all the following blocks to avoid any suspicion. This might seem tricky, but it’s not impossible; nowadays a computer can calculate millions of hashes per second. If the process of creating a new block were no slower than mutating existing ones, I could effectively tamper with a block, recalculate all the hashes of the following blocks, and leave no abnormality behind.

To prevent this from happening, blockchain has a mechanism called “**proof-of-work**”. Under this mechanism, creating a new block or fixing existing ones is **never instant**; instead, a computer must “**prove**” that it has solved a complex computational math problem, and after a computer finds a “proof-of-work”, it **broadcasts** the block to everyone. In **Bitcoin’s** case, this mechanism takes approximately **10 minutes** to create a new block. If a computer solves this problem for a block, the block becomes eligible to be added to the blockchain.

But this problem is not easy to solve. For example, you might need to find a hash whose first 30 bits are zeros; because the SHA-256 hash function is applied, there is no better way to find such a value than guessing. To make this workable, Bitcoin uses a scheme called “**Hashcash**” for its proof-of-work. The chance of finding a hash whose first 30 bits are zeros is 1/2^30, roughly 1 in a billion, and the probability drops exponentially as more zero bits are required. If a person wants to recompute a block’s hash with its first 30 bits zero, he needs to go through on the order of a billion possibilities (unless he’s lucky), and then another billion on the next block, and so on until the end of the chain, to make the blockchain valid again.
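Here is a toy proof-of-work loop in Python to show the idea. This is only a sketch: real Bitcoin mining double-hashes a binary block header against a network-adjusted target, and I use a much smaller difficulty (16 zero bits instead of 30+) so the search finishes quickly:

```python
import hashlib

def mine(prev_hash: str, data: str, difficulty_bits: int = 16):
    # A hash value below `target` has at least `difficulty_bits` leading zero bits.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prev_hash}{data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest  # proof-of-work found: this nonce is the "proof"
        nonce += 1

nonce, digest = mine("0" * 64, "from: userA, to: userB, amount: 0.5 BTC")
print(nonce, digest)  # 16 zero bits means the hex digest starts with at least four zeros
```

Notice that verifying the proof is one hash call, while finding it takes tens of thousands of guesses on average, and each extra zero bit doubles the work.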

In Bitcoin’s case, the procedure of finding such a hash and adding the block to the blockchain is called “**mining**”, and the computers that help validate transactions are called “**miners**”. Solving this complex math problem is really challenging; the chance of solving it on Bitcoin’s network is about 1 in 15 trillion, which means a computer has to put in a great deal of computational power and energy. To incentivize this endeavor, the first computer to successfully verify a block is **rewarded** with some bitcoins, and the same happens with most other cryptocurrencies. Check out this link to see how enormous a typical Bitcoin mining operation looks.

Again, “proof-of-work” is a secure mechanism for verifying new blocks. If an attacker tries to double-spend by changing transactions and modifying a hash, he needs to redo all the “**proof-of-work**” of the later blocks and then try to catch up with and surpass the work of the honest miners; the time required grows exponentially with the number of blocks added to the blockchain. People who just want to use blockchain to make payments simply listen to the blocks being broadcast and update their copy of the blockchain. When two or more different copies of the same blockchain exist, meaning only one is valid and the others have been tampered with, a blockchain process called “**consensus**” resolves the conflict: all participants reach agreement and take the **longest** chain available.

For example, suppose you’re listening to the miners while an attacker is hacking a block. He may find a proof-of-work before any miner does, and this can definitely happen, but he also needs to redo the hashes of all the following blocks. At the same time, the honest miners are working on a different track: they keep adding new blocks to the blockchain, making it longer than the attacker’s chain. Once one chain outgrows the other, you can safely choose the longer one.

In theory, an attacker can still tamper with a block and make the blockchain valid again if he holds 51% of the computing power of all the computers in the network; this scenario is called a “**51% attack**”. By controlling more than 50% of the computing power, the attacker can double-spend by recalculating the hash of a block, with a better-than-even chance of out-mining the honest network on the subsequent blocks.

To establish trust, we have traditionally relied on large intermediaries like banks, governments, or credit card companies to make a transaction. However, by addressing the “double-spending” problem, blockchain gives us many reasons to use it instead of the traditional intermediaries.

Blockchain is decentralized, which means everyone has their own copy of the blockchain, and blockchain is secure. Intermediaries such as banks, meanwhile, are becoming more and more vulnerable and are increasingly being targeted by hackers.

If you make a transaction through a bank, you typically have to wait a day or even a week for the transaction to settle. With blockchain, if you want to send a remittance to relatives, for example, the whole procedure completes in a few minutes.

Blockchain can also minimize the cost per transaction: usually you pay little or nothing to make a transaction on a blockchain, whereas traditional systems add fees along the way.

Your privacy can be undermined by intermediaries such as banks or governments, which capture all of your information whenever you make a transaction. Even though every transaction on the blockchain is distributed globally, each block exposes only limited information, such as its id, transaction id, time, and height; your personal information stays secure and isn’t exposed to the world.

Blockchain also empowers a protocol named the “**smart contract**”, in which verification and negotiation are performed and enforced directly, without a third party. A transaction with a smart contract is trackable and irreversible; it provides security that outperforms traditional systems and reduces transaction costs.

There are many good reasons to use blockchain, as listed above, but there are also a few disadvantages.

Blockchain is the underlying technology of cryptocurrencies such as Bitcoin, and it is likely to have a great impact over the next few decades. Remember: Bitcoin is a cryptocurrency, and blockchain is the technology this currency uses. No one needs to manage the blockchain, because it is highly distributed to everyone through the peer-to-peer protocol. It is constructed from immutable records called “blocks”, and each block is linked to the previous one by containing that block’s hash. When a new block is verified and added to the blockchain, it is broadcast to everyone in the network, and each person keeps their own copy of the blockchain. Many mechanisms make blockchain secure, and one of them is the concept of “proof-of-work”. In Bitcoin’s case, creating a new block takes about 10 minutes. A computer, or “miner”, needs to solve a complex computational problem to verify a block, and if the network reaches consensus, the block is added to the blockchain. There are numerous reasons to use blockchain, as mentioned above, but there are still some reasons why you might not.

P.S.: I’m a novice to blockchain. If some concepts in this article seem misleading, wrong, or confusing to you, please consider leaving a comment and telling me about the problem. One article isn’t adequate to cover all of blockchain’s corners, and there are still many vital concepts I haven’t mentioned; if you’re interested in blockchain, you should read some supplementary documents to learn more about this technology.


The post The Algorithm Design Manual – 2nd Edition (free download) appeared first on Learn To Code Together.

This volume helps take some of the “mystery” out of identifying and dealing with key algorithms. Drawing heavily on the author’s own real-world experiences, the book stresses design and analysis. Coverage is divided into two parts: the first is a general guide to techniques for the design and analysis of computer algorithms; the second is a reference section, which includes a catalog of the 75 most important algorithmic problems. By browsing this catalog, readers can quickly identify what the problem they have encountered is called, what is known about it, and how they should proceed if they need to solve it. This book is ideal for the working professional who uses algorithms on a daily basis and has a need for a handy reference. This work can also readily be used in an upper-division course or as a student reference guide.

THE ALGORITHM DESIGN MANUAL comes with a CD-ROM that contains:

- a complete hypertext version of the full printed book
- the source code and URLs for all cited implementations
- over 30 hours of audio lectures on the design and analysis of algorithms, all keyed to online lecture notes


The post Boosting your coding skills to the next level with these 8 awesome coding sites appeared first on Learn To Code Together.

LeetCode is the first site I want to introduce if you want to dig into data structures and algorithms. This website has a really nice user interface and covers many topics from basic to advanced, organized into essential categories. With up to 1,200 problems, each labeled EASY, MEDIUM, or HARD, it gives you plenty of options to choose the problem that is right for you. Basically, it covers all the essentials you might encounter in a technical interview: data structures, algorithms, common brain teasers, dynamic programming, backtracking, linked lists, etc. Many problems on this website come with solutions in case you get stuck, and each solution usually presents two approaches: a naive one and an efficient one. Besides that, you can join the contests held weekly or take mock tests built from a collection of real company questions. There are also sections for Top 100 Liked Questions and Top Interview Questions that may interest you. Last but not least, LeetCode has a wonderful community, which can be found at Home | LeetCode Discuss; there, people share sincerely about their coding and interview experiences. You will see you are not alone: whatever question is in your head, silly or not, it has probably been discussed there.

CodeSignal is another great site that I usually visit to practice algorithms, and it also has a great user interface. On your first visit you can explore the Arcade universe, which covers five sections: Intro, Database, The Core, Python, and Graphs, with many interesting and comprehensive problems waiting for you to discover. Personally, I often solve problems from the Challenges section, which offers lots of problems with a specific time limit; when you solve a problem from this section in time, your name is displayed on that question’s leaderboard along with the many other programmers who have solved it. The interesting part about this website is the variety of languages you can use to solve a problem: unlike LeetCode, which only lets you choose among 8–10 popular programming languages, CodeSignal supports about 38, so you can easily pick your favorite, whether imperative, functional, or procedural, from C and Java to Haskell, Lisp, Clojure, etc. There are also Interview Practice and Company Challenges sections that let you practice questions from real companies such as Google, Apple, Microsoft, and LinkedIn, which makes you more comfortable for your next real interview. CodeSignal also has a great community where you can find a solution to your problem or share your personal programming experience.

If you have been through CodeSignal and LeetCode, this site can feel a little overwhelming at first because the user interface is rather different; it seems the creators of this site are obsessed with martial arts. After logging in, you can choose your favorite programming languages (up to 46 of them, more than CodeSignal supports!) to solve the puzzles, whose difficulty depends on your level. You start at 8 **kyu**, the newbie level, and the highest level is 1 **kyu**, which I can guarantee takes a great deal of time, effort, and intelligence to reach. When you finish each puzzle, your solution is recorded and you gain more **Honor**; at certain thresholds, your honor raises your kyu level. There are plenty of modes you can choose from: Fundamentals, Rank Up, Practice & Repeat, Random, and Beta. And of course, the Codewars community is also strong and great.

HackerRank is a great place for competitive programming, with four main sections: **Practice**, **Compete**, **Jobs**, and **Leaderboard**. In the first section, **Practice**, you will be immersed in a huge collection of challenges and tutorials: you can practice C, C++, Java, Python, Functional Programming, SQL, Databases, Mathematics, Data Structures, and even Artificial Intelligence. There are also a lot of great tutorials in this section, such as the Interview Preparation Kit, 30 Days of Code, and 10 Days of JavaScript; if you follow one of those tutorials, they give you a problem each day, and in case you forget, they typically send you an email. Personally, I use HackerRank to practice functional programming (which is tough) and to solve algorithms. However, if you are a newbie, I recommend you start with some basics first; reading a book is beneficial before jumping into solving problems on HackerRank, because I think most of the problems there are hard and take time to solve. In the **Compete** section, you will see contests held monthly; each contest contains several problems, usually designed at medium level or above, tough enough to challenge you, and through those contests you can prove your problem-solving ability. Next is the **Jobs** section, where you can find your dream job among a variety of options across many different roles and locations. Finally, the **Leaderboard** section shows where the top users stand in contests.

HackerEarth also has a nice user interface. Founded in India in 2012, seven years on it is considered one of the most popular sites for people who intend to practice and solve algorithms. You can prepare for almost anything in your next interview here: the site covers many topics such as programming languages, math, machine learning, data structures, and algorithms, and a huge number of competitions and hiring challenges are available that can serve as stepping stones in your career path.

CodeChef is another India-based competitive programming website that provides hundreds of challenges. CodeChef is a great place for newbie and intermediate users, with plenty of easy questions, editorials, and a discussion forum to resolve your doubts. Long contests are a great way to start off your coding career and learn cool new things. Problems on CodeChef are organized into categories, namely Beginner, Easy, Medium, etc., and you can sort problems inside each category from most solved to least solved. This gives beginners some confidence in problem-solving and an introduction to online judges.

When I finally get stumped by a problem on the sites mentioned above, I usually search online to check whether anybody has written a tutorial about that problem, and GeeksForGeeks is my savior that always appears in front of my eyes. For many, many problems, including each one I am frustrated with, they usually give a full explanation of how it works and how it can be implemented in several programming languages such as C, C++, Java, and Python. This website was also founded in India; they mainly write tutorials for students who struggle with algorithms and much else, and you can come here to learn programming languages and practice interview questions as well. If you are preparing for your next coding interview, GeeksForGeeks is definitely a handy site to have alongside you.

Finally, at the end of the list, Codeforces is where real competitive programmers, or sport programmers, come to compete. **Codeforces** is a website that hosts competitive programming contests. It is maintained by a group of competitive programmers from ITMO University. Problems on this site often seem advanced and require particular knowledge to solve, such as trees, linked lists, hash tables, breadth-first search, etc. Contests on this site are held weekly and are usually rated, and the top users of this site are considered very smart and talented. You can easily recognize top-rated competitive programmers: their handles appear in red (**International Grandmaster**), or with the first letter in black and the rest in red (**Legendary Grandmaster**, the highest rating). When you think you have spent enough time practicing, joining some contests on this site is one of the best ways to prove your intelligence and problem-solving skills.


The post A Quick Introduction of the Top Data Structures for Your Programming Career & Next Coding Interview appeared first on Learn To Code Together.

Niklaus Wirth, a Swiss computer scientist, wrote a book in 1976 titled *Algorithms + Data Structures = Programs.*

40+ years later, that equation still holds true. That’s why software engineering candidates have to demonstrate their understanding of data structures along with their applications.

Almost all problems require the candidate to demonstrate a deep understanding of data structures. It doesn’t matter whether you have just graduated (from a university or a coding bootcamp) or you have decades of experience.

Sometimes interview questions explicitly mention a data structure, for example, “given a binary tree.” Other times it’s implicit, like “we want to track the number of books associated with each author.”

Learning data structures is essential even if you’re just trying to get better at your current job. Let’s start with understanding the basics.

Simply put, a data structure is a container that stores data in a specific layout. This “layout” allows a data structure to be efficient in some operations and inefficient in others. Your goal is to understand data structures so that you can pick the data structure that’s most optimal for the problem at hand.

As data structures are used to store data in an organized form, and since data is the most crucial entity in computer science, the true worth of data structures is clear.

No matter what problem you are solving, in one way or another you have to deal with data — whether it’s an employee’s salary, stock prices, a grocery list, or even a simple telephone directory.

Based on different scenarios, data needs to be stored in a specific format. We have a handful of data structures that cover our need to store data in different formats.

Let’s first list the most commonly used data structures, and then we’ll cover them one by one:

- Arrays
- Stacks
- Queues
- Linked Lists
- Trees
- Graphs
- Tries (they are effectively trees, but it’s still good to call them out separately).
- Hash Tables

An array is the simplest and most widely used data structure. Other data structures like stacks and queues are derived from arrays.

Here’s an image of a simple array of size 4, containing elements (1, 2, 3 and 4).

Each data element is assigned a positive numerical value called the **index**, which corresponds to the position of that item in the array. The majority of languages define the starting index of the array as 0.

The following are the two types of arrays:

- One-dimensional arrays (as shown above)
- Multi-dimensional arrays (arrays within arrays)

Basic array operations:

- Insert — Inserts an element at the given index
- Get — Returns the element at the given index
- Delete — Deletes the element at the given index
- Size — Returns the total number of elements in the array

Commonly asked array interview questions:

- Find the second minimum element of an array
- First non-repeating integers in an array
- Merge two sorted arrays
- Rearrange positive and negative values in an array
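As a warm-up for the first question above, here is one possible single-pass approach in Python. This is only a sketch; other solutions (such as sorting) work too:

```python
def second_minimum(arr):
    # Track the smallest and second-smallest values seen so far in one pass.
    first = second = float("inf")
    for x in arr:
        if x < first:
            first, second = x, first
        elif first < x < second:
            second = x
    return second

print(second_minimum([4, 2, 1, 5, 3]))  # 2
```

This runs in O(n) time and O(1) space, which beats sorting the whole array just to read its second element.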

We are all familiar with the famous **Undo** option, which is present in almost every application. Ever wondered how it works? The idea: you store the previous states of your work (which are limited to a specific number) in the memory in such an order that the last one appears first. This can’t be done just by using arrays. That is where the Stack comes in handy.

A real-life example of Stack could be a pile of books placed in a vertical order. In order to get the book that’s somewhere in the middle, you will need to remove all the books placed on top of it. This is how the LIFO (Last In First Out) method works.

Here’s an image of the stack containing three data elements (1, 2 and 3), where 3 is at the top and will be removed first:

Basic operations of the stack:

- Push — Inserts an element at the top
- Pop — Removes and returns the top element of the stack
- isEmpty — Returns true if the stack is empty
- Top — Returns the top element without removing it from the stack

Commonly asked stack interview questions:

- Evaluate a postfix expression using a stack
- Sort values in a stack
- Check balanced parentheses in an expression
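The balanced-parentheses question above is a classic use of a stack: push every opening bracket, and pop when a closing bracket arrives. A minimal Python sketch:

```python
def is_balanced(expr: str) -> bool:
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in expr:
        if ch in "([{":
            stack.append(ch)          # push every opening bracket
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False          # mismatched or unmatched closing bracket
    return not stack                  # balanced only if nothing is left open

print(is_balanced("{[()]}"))  # True
print(is_balanced("([)]"))    # False
```

The LIFO order is exactly what the problem needs: the most recently opened bracket must be the first one closed.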

Similar to a stack, a queue is another linear data structure that stores elements in a sequential manner. The only significant difference between a stack and a queue is that instead of using the LIFO method, a queue implements the FIFO method, which is short for First In First Out.

A perfect real-life example of Queue: a line of people waiting at a ticket booth. If a new person comes, they will join the line from the end, not from the start — and the person standing at the front will be the first to get the ticket and hence leave the line.

Here’s an image of a queue containing four data elements (1, 2, 3 and 4), where 1 is at the front and will be removed first:

Basic operations of the queue:

- Enqueue() — Inserts an element at the end of the queue
- Dequeue() — Removes an element from the start of the queue
- isEmpty() — Returns true if the queue is empty
- Top() — Returns the first element of the queue

Commonly asked queue interview questions:

- Implement a stack using a queue
- Reverse the first k elements of a queue
- Generate binary numbers from 1 to n using a queue
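The last question above has a neat queue-based solution: enqueue "1", then repeatedly dequeue a string and enqueue it with "0" and "1" appended. A sketch using Python's `collections.deque` as the queue:

```python
from collections import deque

def binary_numbers(n: int):
    result = []
    queue = deque(["1"])
    for _ in range(n):
        s = queue.popleft()     # dequeue the next binary string (FIFO order)
        result.append(s)
        queue.append(s + "0")   # its two children in the implicit binary tree
        queue.append(s + "1")
    return result

print(binary_numbers(5))  # ['1', '10', '11', '100', '101']
```

The FIFO order guarantees shorter strings come out before longer ones, so the numbers appear in increasing order.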

A linked list is another important linear data structure that might look similar to arrays at first but differs in memory allocation, internal structure and how basic operations of insertion and deletion are carried out.

A linked list is like a chain of nodes, where each node contains information like data and a pointer to the succeeding node in the chain. There’s a head pointer, which points to the first element of the linked list, and if the list is empty then it simply points to null or nothing.

Linked lists are used to implement file systems, hash tables, and adjacency lists.

Here’s a visual representation of the internal structure of a linked list:

Following are the types of linked lists:

- Singly Linked List (Unidirectional)
- Doubly Linked List (Bi-directional)

Basic linked list operations:

- InsertAtEnd — Inserts the given element at the end of the linked list
- InsertAtHead — Inserts the given element at the start/head of the linked list
- Delete — Deletes the given element from the linked list
- DeleteAtHead — Deletes the first element of the linked list
- Search — Returns the given element from the linked list
- isEmpty — Returns true if the linked list is empty

Commonly asked linked list interview questions:

- Reverse a linked list
- Detect a loop in a linked list
- Return the Nth node from the end of a linked list
- Remove duplicates from a linked list
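Reversing a singly linked list, the first question above, is done by walking the list while flipping each node's `next` pointer. A sketch with a minimal `Node` class:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def reverse(head):
    prev = None
    while head:
        # Flip this node's pointer backwards, then advance; the tuple
        # assignment evaluates the right side before any rebinding.
        head.next, prev, head = prev, head, head.next
    return prev  # the old tail is the new head

# Build 1 -> 2 -> 3, reverse it, and read it back.
head = reverse(Node(1, Node(2, Node(3))))
out = []
while head:
    out.append(head.data)
    head = head.next
print(out)  # [3, 2, 1]
```

This runs in O(n) time with O(1) extra space, since only the three pointer variables are needed.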

A graph is a set of nodes that are connected to each other in the form of a network. Nodes are also called vertices. A **pair (x, y)** is called an **edge**, which indicates that vertex **x** is connected to vertex **y**. An edge may carry a weight/cost, showing how much it costs to traverse from vertex x to vertex y.

Types of Graphs:

- Undirected Graph
- Directed Graph

In a programming language, graphs can be represented using two forms:

- Adjacency Matrix
- Adjacency List

Common graph traversing algorithms:

- Breadth-First Search
- Depth First Search

Commonly asked graph interview questions:

- Implement Breadth-First and Depth-First Search
- Check whether a graph is a tree or not
- Count the number of edges in a graph
- Find the shortest path between two vertices
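An adjacency list is often just a map from each vertex to its neighbors, and breadth-first search then visits vertices level by level using a queue. A small sketch over a hypothetical four-vertex undirected graph:

```python
from collections import deque

# Adjacency-list representation: each vertex maps to its neighbors.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def bfs(graph, start):
    visited = [start]
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.append(neighbor)   # mark on first sight...
                queue.append(neighbor)     # ...so each vertex is enqueued once
    return visited

print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```

Swapping the queue for a stack (pop from the end instead of the front) turns this into depth-first search.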

A tree is a hierarchical data structure consisting of vertices (nodes) and edges that connect them. Trees are similar to graphs, but the key point that differentiates a tree from a graph is that a cycle cannot exist in a tree.

Trees are extensively used in Artificial Intelligence and complex algorithms to provide an efficient storage mechanism for problem-solving.

Here’s an image of a simple tree, and basic terminologies used in a tree data structure:

The following are the types of trees:

- N-ary Tree
- Balanced Tree
- Binary Tree
- Binary Search Tree
- AVL Tree
- Red Black Tree
- 2–3 Tree

Out of the above, Binary Tree and Binary Search Tree are the most commonly used trees.

Commonly asked tree interview questions:

- Find the height of a binary tree
- Find the kth maximum value in a binary search tree
- Find nodes at “k” distance from the root
- Find ancestors of a given node in a binary tree
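The height of a binary tree, the first question above, falls out of a simple recursion: an empty tree has height 0; otherwise it is 1 plus the height of the taller subtree. A sketch:

```python
class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def height(node):
    if node is None:
        return 0
    # One level for this node, plus the deeper of the two subtrees.
    return 1 + max(height(node.left), height(node.right))

#       1
#      / \
#     2   3
#    /
#   4
root = TreeNode(1, TreeNode(2, TreeNode(4)), TreeNode(3))
print(height(root))  # 3
```

Most recursive tree problems follow this same shape: handle the empty case, recurse on both children, and combine the results.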

A trie, also known as a “prefix tree”, is a tree-like data structure that proves to be quite efficient for solving problems related to strings. It provides fast retrieval, and is mostly used for searching words in a dictionary, providing auto-suggestions in a search engine, and even for IP routing.

Here’s an illustration of how three words “top”, “thus”, and “their” are stored in Trie:

The words are stored top to bottom, where the green-colored nodes “p”, “s” and “r” indicate the ends of “top”, “thus”, and “their” respectively.

Commonly asked Trie interview questions:

- Count total number of words in Trie
- Print all words stored in Trie
- Sort elements of an array using Trie
- Form words from a dictionary using Trie
- Build a T9 dictionary

Hashing is a process used to uniquely identify objects and store each object at some pre-calculated unique index called its “key.” So, the object is stored in the form of a “key-value” pair, and the collection of such items is called a “dictionary.” Each object can be searched using that key. There are different data structures based on hashing, but the most commonly used data structure is the **hash table**.

Hash tables are generally implemented using arrays.

The performance of hashing data structure depends upon these three factors:

- Hash Function
- Size of the Hash Table
- Collision Handling Method

Here’s an illustration of how the hash is mapped in an array. The index of this array is calculated through a Hash Function.
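The mapping described above can be sketched as a toy hash table with chaining; the hash function, fixed table size, and names here are illustrative choices, not from the post:

```javascript
// A toy hash table: a key is hashed to a bucket index, and colliding
// keys are chained together inside the same bucket.
class HashTable {
  constructor(size = 16) {
    this.buckets = Array.from({ length: size }, () => []);
  }
  hash(key) {
    let h = 0;
    for (const ch of String(key)) {
      h = (h * 31 + ch.charCodeAt(0)) % this.buckets.length;
    }
    return h;
  }
  set(key, value) {
    const bucket = this.buckets[this.hash(key)];
    const entry = bucket.find(([k]) => k === key);
    if (entry) entry[1] = value; // overwrite an existing key
    else bucket.push([key, value]); // chain a new (possibly colliding) key
  }
  get(key) {
    const entry = this.buckets[this.hash(key)].find(([k]) => k === key);
    return entry ? entry[1] : undefined;
  }
}

const table = new HashTable();
table.set("name", "Ada");
table.get("name"); // "Ada"
```

With a good hash function and a sensibly sized table, the chains stay short, which is why average-case lookup is constant time.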

Commonly asked hashing interview questions:

- Find symmetric pairs in an array
- Trace the complete path of a journey
- Find if an array is a subset of another array
- Check if given arrays are disjoint

The above are the top eight data structures that you should definitely know before walking into a coding interview.

*If you are looking for resources on data structures for coding interviews, look at the interactive & challenge based courses: Data Structures for Coding Interviews (Python, Java, or JavaScript).*

*For more advanced questions, look at Coderust 3.0: Faster Coding Interview Preparation with Interactive Challenges & Visualizations.*

If you are preparing for software engineering interviews, here’s a comprehensive roadmap to prepare for coding interviews.

Good luck and happy learning! 🙂

Source: Fahim ul Haq

The post A Quick Introduction of the Top Data Structures for Your Programming Career & Next Coding Interview appeared first on Learn To Code Together.

]]>The post What is Graph and its representation appeared first on Learn To Code Together.

]]>In this article, we will learn what a graph is and how it is represented in computer science. After reading it, you’ll understand what a graph is and the fundamental concepts around it. If you plan to study algorithms for the long haul, understanding graphs is essential, and it pays to get accustomed to them early. This is a crucial concept because many algorithms in use today are built on graphs, and some of them will appear in the next article.

**A graph** is a data structure that consists of a finite set of **vertices** (also called **nodes**) together with a set of **edges**, the lines that connect pairs of vertices. The edges are represented as ordered or unordered pairs, depending on whether the graph is **directed** or **undirected**.

More formally, in mathematics, and more specifically in graph theory, a **graph** is a structure amounting to a set of objects in which some pairs of the objects are in some sense “related”. The objects correspond to mathematical abstractions called vertices (also called *nodes* or *points*) and each of the related pairs of vertices is called an *edge*.

**Graphs** are used to solve many real-life problems, most commonly to represent networks: paths in a city, a telephone or circuit network, even the states of a Rubik’s cube. Graphs are also used in social networks like LinkedIn and Facebook. On Facebook, for example, each person is represented by a vertex (or node), and each node is a structure containing information such as person id, name, gender, and locale.

In the two graphs above, we can see a set of 6 vertices (6 nodes) representing the numbers 0, 1, 2, 3, 4, 5. A single one of these points, such as 1, is called a **vertex**. We can also see several edges connecting the vertices. For the undirected graph, let’s list all of its vertices and edges:

V = {0, 1, 2, 3, 4, 5}

E = {{0, 4}, {0,5}, {0, 2}, {1, 5}, {1, 4}, {2,4}, {3,2}, {4,5}}

We usually denote the set of edges as **E** and the set of vertices as **V**.

However, in the edge set *E* of the directed graph, the directional flow matters to how the graph is structured, so the edges are represented as ordered pairs:

E = {(0, 4), (0,2), (0,5), (1, 4), (1, 5), (2, 3), (2, 4)}

It is easy to tell the two kinds apart: in an undirected graph, each edge goes both ways and carries no arrow, while in a directed graph, the arrows indicate the direction of each edge, which doesn’t necessarily go both ways.

Back to the undirected graph above: from vertex 1, how many ways are there to reach vertex 5? Several, as you can see. You can go 1, then 4, then finally 5; or you can traverse more before reaching the final vertex, going 1, then 4, then 0, then finally 5. Each such route is called a **path**, and the first route is the **shortest path** because it traverses only two edges; the longer routes are not shortest paths because they require more edge traversals.

When a path leads from a particular vertex back to itself, we call it a **cycle**. A graph may contain many cycles; here is one of them, starting at vertex 0: 0 -> 1 -> 2 -> 3 -> 4 -> 5 and back to 0:

Something new also appears here: numbers placed on the edges. These numbers are the **weights** of the edges, and a graph whose edges have weights is a **weighted graph**. Looking back at the first two images, there are no numbers on the edges, so those graphs are **unweighted graphs**.

In the case of a road map, if you want to find the shortest route between two locations, you need to find the path between the two vertices with the minimum sum of edge weights over all paths between them. Suppose we want to go from Sponge to Plankton: the **shortest path** goes from Sponge to Patrick, then Karen, and finally Plankton, with a total distance of 48 miles.

Now look at the image above: have you noticed that there is no cycle in this **unweighted directed graph**? Graphs like this are called **directed acyclic graphs**, or **dags** for short. We say that a directed edge **leaves** one vertex and **enters** another. Take vertex 4: one edge enters it (from vertex 3), and two edges leave it (to vertices 5 and 6). The number of edges leaving a vertex is its **out-degree**, and the number of edges entering it is its **in-degree**.

After absorbing all that terminology, now is the time for us to… relax. Just kidding: it is time to ask how we can actually represent these graphs. There are several ways, each with its own advantages and disadvantages. The three most common are **edge lists**, **adjacency matrices**, and **adjacency lists**.

As mentioned above, these three representations are judged by three main factors. The first is how much memory (space) each representation needs, which can be expressed in asymptotic notation. The other two concern time: how long it takes to determine whether a given edge is in the graph, and how long it takes to find the neighbors of a given vertex.

A vertex is said to be *incident* to an edge if the edge is connected to the vertex. The first and simplest representation is the edge list: an array (or list) that simply contains the edges. To represent one edge, we need an array of the two vertex numbers it is incident on, just as we did in the first example when listing all the edges of the undirected graph. If the edges have weights, we add a third element to the array for the weight. With more than one edge, each edge becomes a sub-array inside the array of all edges. Since each edge takes just two or three numbers, the total space for an edge list is Θ(E).

Here is the way how we can represent the graph below in edge lists fashion:

```
var unweightedEdgeLists = [
  [0, 4],
  [0, 5],
  [0, 2],
  [1, 4],
  [1, 5],
  [2, 3],
  [2, 4],
  [4, 5]
]
```

We simply write the two vertices of each edge into a sub-array, all enclosed by the array that contains every edge.

The example above is a weighted graph; we do the same as with the unweighted graph, but add a third element to each sub-array to represent the edge’s weight:

```
var weightedEdgeLists = [
  [0, 4, 2],
  [0, 5, 3],
  [0, 2, 1],
  [1, 4, 4],
  [1, 5, 5],
  [2, 3, 6],
  [2, 4, 7],
  [4, 5, 8]
]
```

Edge lists are simple, but if we want to know whether the graph contains a particular edge, we have to search through the entire list. If the edges appear in no particular order, that’s a linear search through |E| edges. This motivates the two other representations below.
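The linear scan just described can be sketched as follows, reusing the unweighted edge list from the example above; the helper name `hasEdge` is ours, not from the post:

```javascript
// Linear search for an undirected edge in an edge list: O(|E|) time,
// because in the worst case every stored edge must be examined.
function hasEdge(edgeList, u, v) {
  return edgeList.some(
    ([a, b]) => (a === u && b === v) || (a === v && b === u)
  );
}

var unweightedEdgeLists = [
  [0, 4], [0, 5], [0, 2], [1, 4],
  [1, 5], [2, 3], [2, 4], [4, 5]
];

hasEdge(unweightedEdgeLists, 4, 1); // true  (stored as [1, 4])
hasEdge(unweightedEdgeLists, 0, 3); // false
```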

Another way to represent a graph is the **adjacency matrix**: a |V| × |V| matrix of 0s and 1s, where V is the finite set of vertices. The adjacency matrix of an undirected graph is symmetric, like the one below. We list all the vertices along the rows and the columns, and the entry in row *i* and column *j* is 1 only when the graph contains the edge (*i*, *j*); otherwise we put 0 in the corresponding position:

|       | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|-------|---|---|---|---|---|---|---|
| **0** | 0 | 0 | 1 | 0 | 1 | 1 | 0 |
| **1** | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
| **2** | 1 | 0 | 0 | 1 | 1 | 0 | 0 |
| **3** | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| **4** | 1 | 1 | 1 | 0 | 0 | 1 | 0 |
| **5** | 1 | 1 | 0 | 0 | 1 | 0 | 1 |
| **6** | 0 | 0 | 0 | 0 | 0 | 1 | 0 |

Or you can also represent it in a particular language, such as JavaScript:

```
var adjacencyMatrices = [
  [0, 0, 1, 0, 1, 1, 0],
  [0, 0, 0, 0, 1, 1, 0],
  [1, 0, 0, 1, 1, 0, 0],
  [0, 0, 1, 0, 0, 0, 0],
  [1, 1, 1, 0, 0, 1, 0],
  [1, 1, 0, 0, 1, 0, 1],
  [0, 0, 0, 0, 0, 1, 0]
]
```

Compared with edge lists, the adjacency matrix shines when we ask whether a given edge is in the graph: we just look up the corresponding matrix entry. With an adjacency matrix, we can answer that question in **constant time**, by checking whether `adjacencyMatrices[i][j] == 1`. For example, take vertex 0 as *i* and vertex 2 as *j*: the table above tells us there is an edge between the two.
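That constant-time lookup can be wrapped in a tiny helper; the function name `matrixHasEdge` is ours, for illustration:

```javascript
// Constant-time edge lookup in an adjacency matrix:
// a single array index, O(1) regardless of graph size.
function matrixHasEdge(matrix, i, j) {
  return matrix[i][j] === 1;
}

var adjacencyMatrices = [
  [0, 0, 1, 0, 1, 1, 0],
  [0, 0, 0, 0, 1, 1, 0],
  [1, 0, 0, 1, 1, 0, 0],
  [0, 0, 1, 0, 0, 0, 0],
  [1, 1, 1, 0, 0, 1, 0],
  [1, 1, 0, 0, 1, 0, 1],
  [0, 0, 0, 0, 0, 1, 0]
];

matrixHasEdge(adjacencyMatrices, 0, 2); // true
matrixHasEdge(adjacencyMatrices, 0, 3); // false
```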

But this implementation has a drawback: the matrix always holds |V|² entries, so it takes Θ(V²) space, even if the graph is **sparse** (contains only a few edges). We end up spending space on many 0s for just a few edges. Look at the example below:

Here we can see only 2 edges in this graph, {0, 2} and {5, 6}, but if we choose an adjacency matrix as our representation, it still looks like this:

```
var sparseGraph = [
  [0, 0, 1, 0, 0, 0, 0],
  [0, 0, 0, 0, 0, 0, 0],
  [1, 0, 0, 0, 0, 0, 0],
  [0, 0, 0, 0, 0, 0, 0],
  [0, 0, 0, 0, 0, 0, 0],
  [0, 0, 0, 0, 0, 0, 1],
  [0, 0, 0, 0, 0, 1, 0]
]
```

Adjacency matrices have another disadvantage. Suppose we want to find out which vertices are adjacent to a given vertex *i*. In the example above, to find the neighbors of vertex 5 we must scan its entire row (the 6th row), even though only one adjacent vertex turns up, in the very last column.

**Adjacency lists** combine the ideas of edge lists and adjacency matrices. For each vertex *i* in a graph, we store an array of the vertices adjacent to it (its neighbors). We typically keep an array of |*V*| adjacency lists, one adjacency list per vertex. Now here is one more example:

This time we have a directed graph, so edges don’t go both ways and we must record each edge in the correct order; in this image, for example, the edge is (0, 1), not (1, 0):

```
var adjacencyLists = [
  [1], // neighbor of vertex number 0
  [3], // neighbor of vertex number 1
  [3], // neighbor of vertex number 2
  [4], // neighbor of vertex number 3
  [1], // neighbor of vertex number 4
  [1]  // neighbor of vertex number 5
]
```

With the same graph but undirected, here is how we represent this graph:

```
var adjacencyLists = [
  [1],          // neighbor of vertex number 0
  [0, 3, 4, 5], // neighbors of vertex number 1
  [3],          // neighbor of vertex number 2
  [1, 2, 4],    // neighbors of vertex number 3
  [1, 3],       // neighbors of vertex number 4
  [1]           // neighbor of vertex number 5
]
```

Adjacency lists have several advantages. For a graph with |V| vertices and |E| edges, the memory used depends on |E|, which makes adjacency lists ideal for storing sparse graphs. For a directed graph, the lists hold |E| entries in total; for an undirected graph they hold 2|E| entries, because each edge appears exactly twice, once in each endpoint’s list. Recall the adjacency matrix: for the same graph it always stores |V|² entries, whereas an adjacency list stores only one entry per directed edge, so a graph with 10,000 edges needs just 10,000 entries. Another advantage is that we can reach any vertex’s adjacency list in **constant time**, because we just index into an array. To test whether a particular edge (*u*, *v*) is present, we take constant time to reach vertex *u*’s list and then scan it for *v*; that scan takes time proportional to the **degree** of *u* (the number of its neighbors), not to the total number of vertices.
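The “each edge appears exactly twice” rule can be made concrete by building adjacency lists from an edge list; this sketch uses the same undirected graph as the example above, and the function name is ours:

```javascript
// Convert an undirected edge list into adjacency lists.
// Each edge [u, v] is recorded twice: once in u's list, once in v's.
function toAdjacencyList(vertexCount, edges) {
  const adj = Array.from({ length: vertexCount }, () => []);
  for (const [u, v] of edges) {
    adj[u].push(v);
    adj[v].push(u);
  }
  return adj;
}

const edges = [[0, 1], [1, 3], [2, 3], [3, 4], [1, 4], [1, 5]];
toAdjacencyList(6, edges);
// [[1], [0, 3, 4, 5], [3], [1, 2, 4], [3, 1], [1]]
```

Six edges produce twelve list entries in total, exactly 2|E|.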

The post What is Graph and its representation appeared first on Learn To Code Together.

]]>The post Introduction to Algorithms – 3rd Edition (free download) appeared first on Learn To Code Together.

]]>The first edition became a widely used text in universities worldwide as well as the standard reference for professionals. The second edition featured new chapters on the role of algorithms, probabilistic analysis and randomized algorithms, and linear programming. The third edition has been revised and updated throughout. It includes two completely new chapters, on van Emde Boas trees and multithreaded algorithms, substantial additions to the chapter on recurrence (now called “Divide-and-Conquer”), and an appendix on matrices. It features improved treatment of dynamic programming and greedy algorithms and a new notion of edge-based flow in the material on flow networks. Many exercises and problems have been added for this edition. The international paperback edition is no longer available; the hardcover is available worldwide.

“In light of the explosive growth in the amount of data and the diversity of computing applications, efficient algorithms are needed now more than ever. This beautifully written, thoughtfully organized book is the definitive introductory book on the design and analysis of algorithms. The first half offers an effective method to teach and study algorithms; the second half then engages more advanced readers and curious students with compelling material on both the possibilities and the challenges in this fascinating field.”–Shang-Hua Teng, University of Southern California

“As an educator and researcher in the field of algorithms for over two decades, I can unequivocally say that the Cormen book is the best textbook that I have ever seen on this subject. It offers an incisive, encyclopedic, and modern treatment of algorithms, and our department will continue to use it for teaching at both the graduate and undergraduate levels, as well as a reliable research reference.”–Gabriel Robins, Department of Computer Science, University of Virginia

“Introduction to Algorithms,” the ‘bible’ of the field, is a comprehensive textbook covering the full spectrum of modern algorithms: from the fastest algorithms and data structures to polynomial-time algorithms for seemingly intractable problems, from classical algorithms in graph theory to special algorithms for string matching, computational geometry, and number theory. The revised third edition notably adds a chapter on van Emde Boas trees, one of the most useful data structures, and on multithreaded algorithms, a topic of increasing importance.”–Daniel Spielman, Department of Computer Science, Yale University

The post Introduction to Algorithms – 3rd Edition (free download) appeared first on Learn To Code Together.

]]>The post Big O Cheat Sheet for Common Data Structures and Algorithms appeared first on Learn To Code Together.

]]>First, consider the growth rates of some familiar operations. Based on this chart, we can visualize how an algorithm with O(1) compares with one with O(n²). As the input grows larger and larger, the cost of some operations stays steady, some grow as a straight line, and the rest grow quadratically, exponentially, or factorially.

In order to have a good comparison between different algorithms we can compare based on the resources it uses: how much time it needs to complete, how much memory it uses to solve a problem or how many operations it must do in order to solve the problem:

- **Time efficiency:** a measure of the amount of time an algorithm takes to solve a problem.
- **Space efficiency:** a measure of the amount of memory an algorithm needs to solve a problem.
- **Complexity theory:** a study of algorithm performance based on cost functions of statement counts.

| Sorting Algorithm | Space (worst) | Time (best) | Time (average) | Time (worst) |
|---|---|---|---|---|
| Bubble Sort | O(1) | O(n) | O(n²) | O(n²) |
| Heapsort | O(1) | O(n log n) | O(n log n) | O(n log n) |
| Insertion Sort | O(1) | O(n) | O(n²) | O(n²) |
| Mergesort | O(n) | O(n log n) | O(n log n) | O(n log n) |
| Quicksort | O(log n) | O(n log n) | O(n log n) | O(n²) |
| Selection Sort | O(1) | O(n²) | O(n²) | O(n²) |
| ShellSort | O(1) | O(n) | O(n (log n)²) | O(n (log n)²) |
| Smooth Sort | O(1) | O(n) | O(n log n) | O(n log n) |
| Tree Sort | O(n) | O(n log n) | O(n log n) | O(n²) |
| Counting Sort | O(k) | O(n + k) | O(n + k) | O(n + k) |
| Cubesort | O(n) | O(n) | O(n log n) | O(n log n) |

The next chart covers some popular data structures, such as arrays, binary trees, and linked lists, with three operations: Search, Insert, and Delete.

| Data Structure | Search (avg) | Insert (avg) | Delete (avg) | Search (worst) | Insert (worst) | Delete (worst) |
|---|---|---|---|---|---|---|
| Array | O(n) | N/A | N/A | O(n) | N/A | N/A |
| AVL Tree | O(log n) | O(log n) | O(log n) | O(log n) | O(log n) | O(log n) |
| B-Tree | O(log n) | O(log n) | O(log n) | O(log n) | O(log n) | O(log n) |
| Binary Search Tree | O(log n) | O(log n) | O(log n) | O(n) | O(n) | O(n) |
| Doubly Linked List | O(n) | O(1) | O(1) | O(n) | O(1) | O(1) |
| Hash Table | O(1) | O(1) | O(1) | O(n) | O(n) | O(n) |
| Linked List | O(n) | O(1) | O(1) | O(n) | O(1) | O(1) |
| Red-Black Tree | O(log n) | O(log n) | O(log n) | O(log n) | O(log n) | O(log n) |
| Sorted Array | O(log n) | O(n) | O(n) | O(log n) | O(n) | O(n) |
| Stack | O(n) | O(1) | O(1) | O(n) | O(1) | O(1) |

The order of growth of the running time of an algorithm gives a simple characterization of the algorithm’s efficiency and also allows us to compare the relative performance of alternative algorithms.

In the table below, the first column lists input sizes `n`, and each remaining column shows the total time a function with the given growth rate f(n) takes on that input, assuming one operation per nanosecond.

| n \ f(n) | log n | n | n log n | n² | 2ⁿ | n! |
|---|---|---|---|---|---|---|
| 10 | 0.003 µs | 0.01 µs | 0.033 µs | 0.1 µs | 1 µs | 3.63 ms |
| 20 | 0.004 µs | 0.02 µs | 0.086 µs | 0.4 µs | 1 ms | 77 years |
| 30 | 0.005 µs | 0.03 µs | 0.147 µs | 0.9 µs | 1 sec | 8.4×10¹⁵ yrs |
| 40 | 0.005 µs | 0.04 µs | 0.213 µs | 1.6 µs | 18.3 min | — |
| 50 | 0.006 µs | 0.05 µs | 0.282 µs | 2.5 µs | 13 days | — |
| 100 | 0.007 µs | 0.1 µs | 0.644 µs | 10 µs | 4×10¹³ yrs | — |
| 1,000 | 0.010 µs | 1.00 µs | 9.966 µs | 1 ms | — | — |
| 10,000 | 0.013 µs | 10 µs | 130 µs | 100 ms | — | — |
| 100,000 | 0.017 µs | 0.10 ms | 1.67 ms | 10 sec | — | — |
| 1,000,000 | 0.020 µs | 1 ms | 19.93 ms | 16.7 min | — | — |
| 10,000,000 | 0.023 µs | 0.01 sec | 0.23 sec | 1.16 days | — | — |
| 100,000,000 | 0.027 µs | 0.10 sec | 2.66 sec | 115.7 days | — | — |
| 1,000,000,000 | 0.030 µs | 1 sec | 29.90 sec | 31.7 years | — | — |

The post Big O Cheat Sheet for Common Data Structures and Algorithms appeared first on Learn To Code Together.

]]>The post n-th Fibonacci Number: Recursion vs. Dynamic Programming appeared first on Learn To Code Together.

]]>Recursion in computer science is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science. Generally, recursion is the process in which a function calls itself directly or indirectly and the corresponding function is called a recursive function. I will examine the typical example of finding the n-th Fibonacci number by using a recursive function.

In mathematics, the Fibonacci sequence is the sequence in which the first two numbers are 0 and 1, and each subsequent number is the sum of the two preceding ones. That is,

F(0) = 0, F(1) = 1

and

F(n) = F(n − 1) + F(n − 2)

for *n* > 1.

The Fibonacci sequence might look like this (the first 0 number is omitted):

** 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …**

Given a number n, write a program that finds the corresponding n-th Fibonacci Number. For example:

Input: n = 2

Output: 1

Input: n = 3

Output: 2

Input: n = 7

Output: 13

Now, we write a program to find the n-th number by using recursion (naive recursive algorithm) in JavaScript:

```
function nthFibonacci(n) {
  if (n < 2) { // base case: fib(0) = 0 and fib(1) = 1
    return n;
  } else {
    // call the function recursively until the base case is reached
    return nthFibonacci(n - 1) + nthFibonacci(n - 2);
  }
}
nthFibonacci(7) // 13
```

On Windows, press F12 or Ctrl + Shift + J to open the dev console and run the code above (Cmd + Option + J on macOS).

In the code above, we defined a function called `nthFibonacci`, which takes `n` as input and returns the `n`-th number in the Fibonacci sequence. Inside this function, we first check whether the input is smaller than 2; if it is, we simply return the number itself. Otherwise, the function calls itself recursively until the input drops below 2. Here is what happens inside: presume we pass 5 as the input; the recursion diagram looks like this:

To find the 5th number, since 5 is not less than 2, we have `nthFibonacci(5) = nthFibonacci(4) + nthFibonacci(3)`; but `nthFibonacci(4)` and `nthFibonacci(3)` still have inputs of 2 or more, so the recursion continues until the base case is reached. The diagram above shows how it unfolds, but for a better understanding, follow the expansion below:

- `nthFibonacci(5) = nthFibonacci(5 - 1) + nthFibonacci(5 - 2)`, that is, `nthFibonacci(5) = nthFibonacci(4) + nthFibonacci(3)`
- Then we have: `nthFibonacci(4) = nthFibonacci(3) + nthFibonacci(2)` and `nthFibonacci(3) = nthFibonacci(2) + nthFibonacci(1)`
- so, `nthFibonacci(5) = (nthFibonacci(3) + nthFibonacci(2)) + (nthFibonacci(2) + nthFibonacci(1))`
- The function recurses further, with `nthFibonacci(3) = nthFibonacci(2) + nthFibonacci(1)` and `nthFibonacci(2) = nthFibonacci(1) + nthFibonacci(0)`
- then, `nthFibonacci(5) = ((nthFibonacci(2) + nthFibonacci(1)) + (nthFibonacci(1) + nthFibonacci(0))) + ((nthFibonacci(1) + nthFibonacci(0)) + nthFibonacci(1))`
- The function calls itself once more to resolve the remaining `nthFibonacci(2)`; it does not recurse on `nthFibonacci(1)` and `nthFibonacci(0)`, because 1 and 0 are less than 2: `nthFibonacci(5) = (((nthFibonacci(1) + nthFibonacci(0)) + nthFibonacci(1)) + (nthFibonacci(1) + nthFibonacci(0))) + ((nthFibonacci(1) + nthFibonacci(0)) + nthFibonacci(1))`
- Now, after all these recursive calls, we finally reach the result shown in the tree diagram above. Writing `nthFibonacci(5)` as `f(5)` for short, the leaves add up as:

f(5) = f(1) + f(0) + f(1) + f(1) + f(0) + f(1) + f(0) + f(1) = 1 + 0 + 1 + 1 + 0 + 1 + 0 + 1 = 5

Hopefully, the steps above make sense, and you now understand how recursion finds the Fibonacci number.

When measuring the efficiency of an algorithm, we typically want to know how fast it is, that is, its time complexity. The recursive function does find the n-th Fibonacci number, so it is a correct algorithm, but is it a good one? Definitely **not**. I changed the color of each function in the diagram on purpose: as you can see, `nthFibonacci(3)` is computed 2 times, `nthFibonacci(2)` 3 times, `nthFibonacci(1)` 5 times, and `nthFibonacci(0)` 3 times. All this repeated work drives up the running time.

In the diagram, each time n decreases, the number of calls roughly doubles, until n reaches 1 or 0. The shape of the call tree looks like this:

```
2^0=1 n
2^1=2 (n-1) (n-2)
2^2=4 (n-2) (n-3) (n-3) (n-4)
2^3=8 (n-3)(n-4) (n-4)(n-5) (n-4)(n-5) (n-5)(n-6)
...
2^n-1
```

So T(n) = T(n − 1) + T(n − 2) + O(1), which grows roughly like 2ⁿ, giving a time complexity of O(2ⁿ). This algorithm grows exponentially; its time complexity is labeled as horrible, according to this chart:

Finding the n-th Fibonacci number with recursion can be horrible when the input is large; the time wasted on the calculation can be unacceptable. There are many alternative solutions, including a technique called Dynamic Programming. The idea is simple: avoid the repeated work by storing the results of sub-problems so you don’t need to calculate them again.
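The repeated work can be made visible by instrumenting the naive function with a call counter; the counter and the renamed function are ours, added for illustration:

```javascript
// The naive recursive Fibonacci, instrumented to count invocations.
let calls = 0;
function nthFibonacciCounted(n) {
  calls++; // one more invocation of the function
  if (n < 2) return n;
  return nthFibonacciCounted(n - 1) + nthFibonacciCounted(n - 2);
}

nthFibonacciCounted(10); // 55
calls; // 177 invocations just to compute the 10th number
```

The call count itself grows like the Fibonacci sequence, which is exactly the exponential blow-up described above.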

Dynamic programming is both a mathematical optimization method and a computer programming method. It was developed by Richard Bellman in the 1950s and has found applications in numerous fields. It is a technique for solving **recursive** problems more efficiently. In plain recursion we often solve the same sub-problem repeatedly; with dynamic programming we store the solutions of sub-problems in an array, table, or dictionary so we don’t have to calculate them again. This is called **memoization**. Simply put, dynamic programming is just memoization and the re-use of solutions.

In order to determine whether a problem can be solved with dynamic programming, there are two properties to consider:

- Overlapping Subproblems
- Optimal Substructure

If the problem we try to solve has those two properties, we can apply dynamic programming to address it instead of recursion.

As the name suggests, overlapping sub-problems occur when a recursion calculates the same sub-problems over and over again. Instead of this redundant work, dynamic programming solves each sub-problem only once and stores the result for later use.

Now you see this tree structure again, with recursion, there are many times we had to re-calculate the sub-problems.

If a problem can be solved using the solutions of its sub-problems, we say it has optimal substructure. This property determines whether dynamic programming and greedy algorithms are applicable to a problem.

There are usually 7 steps in the development of the dynamic programming algorithm:

- Establish a recursive property that gives the solution to an instance of the problem.
- Develop a recursive algorithm as per recursive property
- See if the same instance of the problem is being solved again and again in recursive calls
- Develop a memoized recursive algorithm
- See the pattern in storing the data in the memory
- Convert the memoized recursive algorithm into an iterative algorithm (optional)
- Optimize the iterative algorithm by using the storage as required (storage optimization)

Finding the n-th Fibonacci number is ideal to solve with dynamic programming because it satisfies both properties:

- First, the sub-problems are calculated over and over again by the recursion.
- Second, we can solve the problem using the results of its sub-problems.

There are two approaches in dynamic programming:

- **Bottom-up**
- **Top-down**

Presume we need to solve a problem for N. With the bottom-up approach, we start with the smallest possible input and store its solution for future use; the stored solutions are then used to calculate bigger and bigger problems.

In the Fibonacci example, to find the n-th Fibonacci number we start with the two smallest values, 0 and 1, then gradually calculate the bigger problems by re-using the stored results. Here is the code for finding the n-th Fibonacci number with the bottom-up approach:

```
function nthFibDP(n) {
  let memo = new Array(n + 1); // memo[i] will hold the i-th Fibonacci number
  memo[0] = 0;
  memo[1] = 1;
  for (let i = 2; i <= n; i++) {
    memo[i] = memo[i - 1] + memo[i - 2]; // re-use the two stored sub-problems
  }
  return memo[n];
}
nthFibDP(6) // 8
```

Let’s break it down to see what happens:

- First, inside the `nthFibDP` function, we create a new array to store the solutions of the sub-problems; its size is n + 1 because index 0 is reserved for the number 0.
- The first two indices, 0 and 1, hold the values 0 and 1 respectively.
- Then we loop from 2 up to and including n. Following the bottom-up approach, each iteration computes one larger value as the sum of the two preceding entries, which have already been stored in the array.
- Because every sub-problem’s solution is stored in the `memo` array, the last element is our answer, which is why we return `memo[n]`.

And here is the same explanation in bare words, much like pseudocode:

```
n = 6
f[0] = 0
f[1] = 1
f[] = [0, 1, x5 empty] (the array has n + 1 slots)
for (i = 2; i <= 6; i++) (i <= n, n = 6)
f[2] = f[1] + f[0]
f[2] = 1 + 0 = 1 (push the result to the corresponding index)
f[] = [0, 1, 1, x4 empty] (after adding f[2])
f[3] = f[2] + f[1] (we just found the solution for f[2] = 1)
f[3] = 1 + 1 = 2 (push the result to the 3rd index)
f[] = [0, 1, 1, 2, x3 empty] (after adding f[3])
f[4] = f[3] + f[2] (we just found the solution for f[3] = 2)
f[4] = 2 + 1 = 3 (push the result to the 4th index)
f[] = [0, 1, 1, 2, 3, x2 empty] (after adding f[4])
f[5] = f[4] + f[3] (we just found the solution for f[4] = 3)
f[5] = 3 + 2 = 5 (push the result to the 5th index)
f[] = [0, 1, 1, 2, 3, 5, x1 empty] (after adding f[5])
f[6] = f[5] + f[4] (we just found the solution for f[5] = 5)
f[6] = 5 + 3 = 8 (push the result to the 6th index)
Now the array is: f[] = [0, 1, 1, 2, 3, 5, 8] (after adding f[6])
Return f[n] <=> f[6] = 8
```

To take a closer look, open your browser’s console (F12 or Ctrl + Shift + J on Windows, Cmd + Option + J on macOS), paste the JavaScript code above, add the keyword `debugger` inside the function, and hit enter:

You can see in detail how this function is run by clicking to the arrow down symbol that I highlighted inside the circle red color.

The bottom-up approach starts with the smallest input and stores each result for future use when calculating bigger ones. Top-down, by contrast, breaks the large problem into multiple sub-problems from the top down: if a sub-problem is already solved, it reuses the answer; otherwise, it solves the sub-problem and stores the result. The top-down approach uses memoization to avoid recomputing sub-problems.

Let's again write the n-th Fibonacci number, this time using the top-down approach:

```
function topDownFibonacci(n, memo = []) {
  // base cases: F(0) = 0, F(1) = 1
  if (n <= 1) return n;
  // reuse the stored answer if this sub-problem was already solved
  if (memo[n] !== undefined) return memo[n];
  // otherwise solve it recursively and store the result for later use
  memo[n] = topDownFibonacci(n - 1, memo) + topDownFibonacci(n - 2, memo);
  return memo[n];
}
topDownFibonacci(7) // 13
```

Note that the memo array must be shared across the recursive calls (here via a default parameter); creating a fresh array inside each call would defeat the memoization.

This top-down approach looks much like the plain recursive method, but instead of recomputing a lot of sub-problems, we store each sub-problem we have already computed in an array, so every sub-problem needs to be calculated only once and can be reused in later calculations.

As you can see, a lot of repeated work has been eliminated, saving both time and space. You might notice some sub-problems still appear more than once in the call tree, but remember: each is computed only once and then reused.

Bottom-up uses tabulation and top-down uses memoization to store the sub-problems and avoid recomputation. The running time of both algorithms is **linear**, which breaks down as follows:

Sub-problems = n

Time/sub-problems = constant time = O(1)

Time complexity = Sub-problems x Time/sub-problems = O(n)

Comparing linear time with the exponential time of recursion, that is much better, right?

There are other ways to find the n-th Fibonacci number that are even better than Dynamic Programming in terms of both time and space complexity. I will introduce one of them, a closed-form expression (Binet's formula) that takes roughly constant time O(1): round the following to the nearest integer:

F_n = {[(1 + √5)/2] ^ n} / √5

```
function nthFibConstantTime(n) {
  let phi = (1 + Math.sqrt(5)) / 2;
  return Math.round(Math.pow(phi, n) / Math.sqrt(5));
}
nthFibConstantTime(9) // 34
```

Recursion is a method of solving problems by letting a function call itself repeatedly until it reaches a base condition; the typical example is finding the n-th Fibonacci number. Because each call recalculates the same sub-problems again and again, this method is inefficient, with a time complexity of O(2^n) (exponential time). Hence another approach is used, dynamic programming: it breaks the problem into smaller problems and stores the values of sub-problems for later use. To determine whether a problem can be solved with dynamic programming, ask whether it can be expressed recursively and whether the results of the sub-problems help solve the larger problem. Dynamic programming is not something fancy, just memoization and reuse of sub-solutions. The two techniques for solving problems with dynamic programming are bottom-up and top-down; both take O(n) time, which is much better than recursion's O(2^n). There are also other ways to solve the n-th Fibonacci number problem, taking as little as O(log n) or O(1).
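As a taste of the O(log n) option mentioned above, here is a sketch of the fast-doubling method (the name `fastFib` is mine, not from the post); it relies on the identities F(2k) = F(k)·(2·F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)²:

```javascript
function fastFib(n) {
  // returns the pair [F(k), F(k+1)]
  function fib(k) {
    if (k === 0) return [0, 1];
    const [a, b] = fib(Math.floor(k / 2)); // a = F(k/2), b = F(k/2 + 1)
    const c = a * (2 * b - a);             // F(2 * (k/2))
    const d = a * a + b * b;               // F(2 * (k/2) + 1)
    return k % 2 === 0 ? [c, d] : [d, c + d];
  }
  return fib(n)[0];
}
fastFib(9); // 34
```

Each step halves n, so only O(log n) pairs are ever computed.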

The post n-th Fibonacci Number: Recursion vs. Dynamic Programming appeared first on Learn To Code Together.

The post Imperative vs. Declarative (Functional) Programming. What is the difference? appeared first on Learn To Code Together.


“Imperative programming is like **how** you do something, and declarative programming is more like **what** you do, or something.”

Both imperative and declarative programming are classified as the common **programming paradigms** (programming paradigms are a way to classify programming languages based on their features).

Imperative programming is like when you ask your friend to write an essay for you: you give him detailed instructions on how he should write it, step by step, in order to get the desired result.

Looking back at the definition above, we see imperative programming is like **how** we do something, to envision what it is, let’s make a real-world example first.

**Example: I’m in the park near your house, but I still cannot find the way to your house. How do I get to your house from here?**

**Response:** **Find the entrance of the park and stand at the point where it connects to the road, then go straight 400m to the right until you see an intersection, then turn right, go straight for another 200m, and arrive at 7830 Pineknoll Road, which is my home address.**

So, we reach the desired location, a home address, by being given detailed instructions on how to get there.

Likewise, **imperative** programming is a programming paradigm that uses a sequence of statements to determine **how** to achieve a certain goal. Imperative programming includes the procedural and object-oriented paradigms, but those are beyond the scope of this article.

Conspicuous examples of imperative constructs are **for and while loops, if/else statements, classes, and objects**.

Now, let’s do some imperative programming style examples in JavaScript:

- Presume we have an array with some elements and want to return a new array in which each item's value is doubled:

```
function double(arr) {
  let rs = [];
  for (let i = 0; i < arr.length; i++) {
    rs.push(arr[i] * 2);
  }
  return rs;
}
double([2, 3, 4]); // [4, 6, 8]
```

In the example above, we reached the goal with a sequence of statements: a for loop iterates from the first element of the array to the last, each iteration doubles the current item and pushes it into the `rs` array, and finally we return `rs`.

2. Now, let’s write a function named `factorial`, which takes a number as input and returns its factorial, using a for loop:

```
function factorial(n) {
  let total = 1;
  if (n == 0) {
    return 1;
  }
  for (let i = 0; i < n; i++) {
    total = total * (n - i);
  }
  return total;
}
factorial(5); // 120
```

In the example above, as we can observe, to reach the goal we again used a for loop together with an if statement. First, we initialized a variable named `total` to hold the result, then checked the input: if it equals 0, we immediately return 1; otherwise the rest of the code proceeds. Inside the for loop, we iterated from `0` to `n - 1`, updating `total` on each iteration, and finally returned the result. The reason we set the loop condition to **i < n** rather than **i <= n** is that with **i <= n**, the last iteration would multiply by `(n - n) = 0` and wipe out the result:

```
total = 1 * (5 - 0);
total = 5 * (5 - 1);
total = 20 * (5 - 2);
total = 60 * (5 - 3);
total = 120 * (5 - 4);
total = 120 * (5 - 5);
total = 0
```

3. Write another function, called `addSum`, which takes in an array and returns the result of adding up every item in the array:

```
function addSum(arr) {
  let rs = 0;
  for (let i = 0; i < arr.length; i++) {
    rs += arr[i];
  }
  return rs;
}
addSum([1, 2, 3, 4, 5]); // 15
```

As in the two examples above, with the imperative style we again use a for loop to walk through the array's elements and add them up into the `rs` variable, then return the result.

4. Once more, write a function named `filterArr`, which takes an array as an argument and returns the elements that are greater than 5:

```
function filterArr(arr) {
  let rs = [];
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] > 5) {
      rs.push(arr[i]);
    }
  }
  return rs;
}
filterArr([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]); // [6, 7, 8, 9, 10]
```

We get the desired result by giving instructions: a for loop filters the values that are greater than 5 and pushes them into the `rs` array.

Declarative programming is a contrasting paradigm to imperative programming. It emphasizes **what to do** rather than how to do it. Declarative programming is like asking your friend to write an essay: you don't care how he writes it, you just want the result.

Now let's look back at the real-world example we mentioned earlier in the imperative programming section:

**Example:** **I’m in the park near your house, but I still cannot find the way to your house. How do I get to your house from here?**

**Response: 7830 Pineknoll Road, Sykesville, MD 21784**

So, with this declarative response, I don't care how he finds his way to my house; I just give him the address and wait for him to arrive (the result).

Functional programming is a form of declarative programming in which the desired result is declared as the value of a series of function applications; people often use the two terms interchangeably. An outstanding example of a functional programming language is Haskell. If you want to learn more about functional or declarative programming, consider reading this article, which gives you reasons to learn Haskell along with learning resources. Below, I also introduce some concepts and code examples in the Haskell programming language.

In JavaScript, besides imperative programming, you can also write in a declarative (functional) style by using functions such as `map`, `reduce`, and `filter`.

Now let's revisit the four examples from the imperative programming section and see how to make them declarative:

1. Double the value of each item in an array:

In JavaScript:

```
function double(arr) {
  return arr.map(element => element * 2);
}
double([1, 2, 3]) // [2, 4, 6]
```

In Haskell:

`map (*2) [1, 2, 3] -- [2, 4, 6]`

2. Write a factorial function with declarative style (Haskell):

```
factorial n = product [1..n]
factorial 5 -- 120
```

3. Write a function which takes in an array and returns the result of adding up every item in the array:

In JavaScript:

```
function reduceArr(arr) {
  return arr.reduce((prev, next) => prev + next, 0);
}
reduceArr([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]); // 55
```

In Haskell:

`sum [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] -- 55`

4. Filter the values of an array that are greater than 5 and return the output:

In JavaScript:

```
function filterArr(arr) {
  return arr.filter(value => value > 5);
}
filterArr([1, 2, 3, 4, 5, 6, 7, 8, 9]); // [6, 7, 8, 9]
```

In Haskell:

`filter (>5) [1, 2, 3, 4, 5, 6, 7, 8, 9] -- [6, 7, 8, 9]`

NOTE: In Haskell, values enclosed in square brackets `[]` are called a **list**, not an **array**, and a `list` in Haskell is not the same as an `array` in JavaScript. For example, getting the length of an array in JavaScript takes constant time, but finding the length of a list in Haskell requires traversing all of its elements, which takes linear time. Nonetheless, for simplicity, I just call it an array for JavaScript as well as Haskell.

In all four examples above, we get the desired output without caring how the values are produced or how the functions are implemented. In the first example, we used `map` to double every item in the array. Second, we used the `product` function in Haskell to compute the factorial. Third, we used `reduce` in JavaScript and `sum` in Haskell to add up all the array's elements. Finally, we used the `filter` function to keep the values that pass a certain condition.

Some languages, such as SQL and HTML, are also declarative. Consider these examples:

`SELECT * FROM Users WHERE Country = 'Canada';`

```
<h1>HTML is awesome!</h1>
<p>Hypertext Markup Language is the standard markup language for creating web pages and web applications. With Cascading Style Sheets and JavaScript, it forms a triad of cornerstone technologies for the World Wide Web.</p>
```

As in the examples above, we achieve something without instructions on how to do it. The implementation of selecting all the users from Canada is opaque to you and me, and we display an `h1` heading and a paragraph without caring how the browser parses our markup and renders it to the screen.

Now let’s sum up what we’ve learned into a comparison table:

| | Imperative | Declarative |
| --- | --- | --- |
| Definition | a sequence of statements that determine **how** to reach a certain goal | merely declares **what** to do to get the desired result, not how to compute it |
| Clarity | Clear, step by step | Abstract (implementation hidden) |
| Flexibility | More flexible | Less flexible |
| Complexity | Increases the complexity of a program | Simplifies the program |
| Languages | C++, Java, Smalltalk, C#, etc. | HTML, Haskell, SQL, Agda, PureScript, Elm, etc. |
| How do I get to your house from the park? | Find the entrance of the park, stand at the point where it connects to the road, go straight 400m to the right until you see an intersection, turn right, go straight another 200m to 7830 Pineknoll Road, my home address. | 7830 Pineknoll Road, Sykesville, MD 21784 |
| Ask a friend to write you an essay | Give him detailed instructions on how he should write it to get the result. | Just ask for the result. |
| Double each item in an array | `function double(arr) { ... }` with a for loop | `map (*2) [1, 2, 3] -- [2, 4, 6]` |
| Factorial | `function factorial(n) { ... }` with a for loop | `factorial n = product [1..n]` |
| Sum of the array's elements | `function addSum(arr) { ... }` with a for loop | `sum [1, 2, 3, 4, 5] -- 15` |
| Array's items greater than 5 | `function filterArr(arr) { ... }` with a for loop | `filter (>5) [1, 2, 3, 4, 5, 6, 7, 8, 9] -- [6, 7, 8, 9]` |
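To round off the comparison, here is the same task, summing an array, written in both styles side by side in JavaScript:

```javascript
const nums = [1, 2, 3, 4, 5];

// Imperative: spell out HOW — loop, accumulate, return
let sum = 0;
for (let i = 0; i < nums.length; i++) {
  sum += nums[i];
}

// Declarative: state WHAT — the reduction of the array by addition
const sumDeclarative = nums.reduce((acc, x) => acc + x, 0);

console.log(sum, sumDeclarative); // 15 15
```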


The post Top must-know algorithms and data structures for computer science students appeared first on Learn To Code Together.

**Data structures and algorithms are patterns for solving problems.** The more of them you have in your utility belt, the greater the variety of problems you'll be able to solve, and the more elegant your solutions to new problems will be. In fact, many effective algorithms have already been designed and are ready to use for problems all around us, such as Dijkstra's algorithm, designed for finding the shortest paths between nodes in a graph, or QuickSort, an efficient sorting algorithm for placing the elements of a random-access file or an array in order. So for the most part, your duty is to learn the common algorithms and apply them to appropriate problems. But one day you might create your own algorithm while solving problems, who knows?

**Also read:**

If you want a general overview of the top 5 algorithms that dominate the world, check out this article.

- Insertion sort, Selection sort, Bubble sort
- Merge Sort, Quicksort
- Binary Search
- Breadth-First Search (BFS)
- Single-Source Shortest Paths-Dijkstra’s algorithm
- Depth First Search (DFS)
- Lee algorithm | Shortest path in a Maze
- Flood fill Algorithm
- Floyd’s Cycle Detection Algorithm
- Kadane’s algorithm
- Longest Increasing Subsequence
- Inorder, Preorder, Postorder
- Heap Sort
- Topological Sorting in a DAG
- Disjoint-Set Data Structure (Union-Find Algorithm)
- Kruskal’s Algorithm for finding Minimum Spanning Tree
- Secure Hash Algorithm (SHA)
- All-Pairs Shortest Paths — Floyd Warshall Algorithm
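As a small taste of the list above, binary search is one of its simplest entries; a minimal sketch in JavaScript:

```javascript
// Binary search on a sorted array; returns the index of target, or -1 if absent
function binarySearch(arr, target) {
  let lo = 0, hi = arr.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (arr[mid] === target) return mid;
    // discard the half that cannot contain the target
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}
binarySearch([1, 3, 5, 7, 9], 7); // 3
```

Each comparison halves the search range, giving O(log n) time.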

Algorithms and data structures often go hand in hand. Besides learning algorithms, you, as a computer science student, should also know some popular data structures. Here is the list:

- Linked-List
- Linked List Implementation | Part 1
- Linked List Implementation | Part 2
- Insertion in BST
- Search given key in BST
- Deletion from BST
- Hashing
- Stack, Queue
- Min Heap and Max Heap
- Graph Implementation using STL
- Graph Implementation in C++ without using STL
- Trie Implementation | Insert, Search and Delete
- Memory efficient Trie Implementation using Map | Insert, Search and Delete
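To give a flavor of the list above, here is a minimal singly linked list in JavaScript (a sketch, not a full implementation), supporting prepend and search:

```javascript
// One node in the chain: a value plus a reference to the next node
class Node {
  constructor(value, next = null) {
    this.value = value;
    this.next = next;
  }
}

class LinkedList {
  constructor() {
    this.head = null;
  }
  // insert at the front in O(1)
  prepend(value) {
    this.head = new Node(value, this.head);
  }
  // linear scan through the chain
  contains(value) {
    for (let cur = this.head; cur !== null; cur = cur.next) {
      if (cur.value === value) return true;
    }
    return false;
  }
}

const list = new LinkedList();
list.prepend(3);
list.prepend(2);
list.prepend(1);
list.contains(2); // true
```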

At the end of the day, if you are an avid algorithm learner, once you are familiar with the list above you should also explore concepts such as Backtracking, Dynamic Programming, Divide & Conquer, and Greedy Algorithms, which are really useful and worth your time.

