Blog 206 — Sorting and Algorithms
1. Tell us about something you learned this week.
We learned about sorting algorithms. A sorting algorithm is an algorithm used to reorder items in a list or an array according to a specific requirement. For example, a sorting algorithm can organize an array of items from smallest to largest.
An efficient sorting algorithm is important for optimizing the efficiency of other algorithms (such as search and compression algorithms).
Sorting algorithms are made up of a series of instructions. They take an array or list as an input, perform operations, and output a sorted array.
There are a number of popular sorting algorithms. Some of the most popular are:
- Bubble sort
- Insertion sort
- Merge sort
- Selection sort
- Counting sort
- Bucket sort
- Radix sort
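To make the idea concrete, here is a minimal sketch of the first algorithm on the list, bubble sort, in Java (the class and method names are my own for illustration). It takes an array as input, performs swap operations, and leaves the array sorted:

```java
import java.util.Arrays;

// Bubble sort: repeatedly swap adjacent out-of-order elements
// until a full pass over the array makes no swaps.
class BubbleSortDemo {
    static void bubbleSort(int[] a) {
        boolean swapped = true;
        for (int pass = a.length - 1; swapped && pass > 0; pass--) {
            swapped = false;
            for (int i = 0; i < pass; i++) {
                if (a[i] > a[i + 1]) {
                    int tmp = a[i]; a[i] = a[i + 1]; a[i + 1] = tmp;
                    swapped = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] items = {5, 1, 4, 2, 8};
        bubbleSort(items);
        System.out.println(Arrays.toString(items)); // [1, 2, 4, 5, 8]
    }
}
```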
2. What are the pros and cons of immutability?
In object-oriented programming, an immutable object is an object whose state, once initialized by the constructor, cannot be modified throughout the lifetime of the object.
- An immutable object does not expose methods that can change its state.
- An immutable class cannot be extended.
- All fields must be marked final.
- Immutable classes must not expose mutable fields, either through getters or through public fields.
3. How can you achieve immutability in your own code?
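One way to apply the rules above is a small Java value class like the following sketch (the `Point` class and its fields are illustrative names, not from any particular library):

```java
// An immutable value class: final class, final private fields,
// state set once in the constructor, getters only, no setters.
final class Point {                   // final: the class cannot be extended
    private final int x;              // all fields are final and private
    private final int y;

    Point(int x, int y) {             // state is fixed once by the constructor
        this.x = x;
        this.y = y;
    }

    int getX() { return x; }          // getters only; nothing mutates state
    int getY() { return y; }

    // A "modification" returns a new object instead of changing this one.
    Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }

    public static void main(String[] args) {
        Point p = new Point(1, 2);
        Point q = p.translate(3, 4);  // p is unchanged; q is a new object
        System.out.println(p.getX() + "," + p.getY()); // 1,2
        System.out.println(q.getX() + "," + q.getY()); // 4,6
    }
}
```

Because `translate` returns a fresh object, callers can share a `Point` freely without defensive copying.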
4. What are Divide and Conquer algorithms? Describe how they work. Can you give any common examples of the types of problems where this approach might be used?
- Divide: break the problem into smaller subproblems of the same kind.
- Conquer: solve each subproblem by calling the algorithm recursively, until a subproblem is small enough to solve directly.
- Combine: merge the subproblem solutions to form the solution to the original problem.
Common problems solved this way include merge sort, quick sort, and binary search.
DAC(a, i, j)
    if (small(a, i, j))
        return Solution(a, i, j)
    mid = divide(a, i, j)        // f1(n): find the split point
    b = DAC(a, i, mid)           // T(n/2): solve the left half
    c = DAC(a, mid + 1, j)       // T(n/2): solve the right half
    return combine(b, c)         // f2(n): merge the two solutions
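The template above can be made concrete for one simple problem — finding the maximum of `a[i..j]` (my choice of problem for illustration; the structure follows the pseudocode):

```java
// Divide and conquer applied to "maximum of a[i..j]".
class DacMax {
    static int dac(int[] a, int i, int j) {
        if (i == j) {                 // small(a, i, j): a single element
            return a[i];              // Solution(a, i, j)
        }
        int mid = (i + j) / 2;        // divide
        int b = dac(a, i, mid);       // conquer the left half
        int c = dac(a, mid + 1, j);   // conquer the right half
        return Math.max(b, c);        // combine the two answers
    }

    public static void main(String[] args) {
        int[] a = {7, 2, 9, 4, 6};
        System.out.println(dac(a, 0, a.length - 1)); // 9
    }
}
```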
5. How do insertion sort, heap sort, quick sort, and merge sort work?
Sorting in programming involves placing elements in a list or an array in a certain order. Efficient sorting is important for optimizing other algorithms that require input data to be in sorted lists.
While you may not be required to implement a sorting algorithm in your day-to-day work as a software developer, it’s important to know how some of these algorithms work internally. They come up often in coding interviews, and understanding them makes you a more effective developer.
6. What are the key advantages of insertion sort, quick sort, heap sort and merge sort? Discuss best, average, and worst case time and memory complexity.
Quick sort — an internal, comparison-based algorithm built on the divide and conquer strategy: pick a pivot, partition the array so that smaller elements come before it and larger ones after, then recursively sort both partitions. Best and average case O(n log n) time; worst case O(n²) when pivot choices are consistently poor (e.g., already sorted input with a naive pivot); O(log n) auxiliary space for recursion on average.
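A minimal quick sort sketch in Java, using the Lomuto partition scheme with the last element as pivot (one of several common pivot strategies):

```java
import java.util.Arrays;

// Quick sort with Lomuto partitioning: a[hi] is the pivot.
class QuickSortDemo {
    static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi], p = lo;
        for (int i = lo; i < hi; i++) {       // partition around the pivot
            if (a[i] < pivot) swap(a, i, p++);
        }
        swap(a, p, hi);                        // pivot lands in final position
        quickSort(a, lo, p - 1);               // sort the smaller-than-pivot part
        quickSort(a, p + 1, hi);               // sort the larger-than-pivot part
    }

    static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        int[] a = {9, 3, 7, 1, 8, 2};
        quickSort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 7, 8, 9]
    }
}
```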
Insertion sort — the array is virtually split into a sorted and an unsorted part; values from the unsorted part are picked and placed at the correct position in the sorted part. Best case O(n) time on already sorted input; average and worst case O(n²); O(1) auxiliary space, and it is stable.
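The sorted/unsorted split described above looks like this in Java:

```java
import java.util.Arrays;

// Insertion sort: grow a sorted prefix one element at a time,
// shifting larger values right to make room for each new value.
class InsertionSortDemo {
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {  // a[0..i-1] is the sorted part
            int key = a[i], j = i - 1;
            while (j >= 0 && a[j] > key) {    // shift larger values right
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;                   // drop key into the gap
        }
    }

    public static void main(String[] args) {
        int[] a = {12, 11, 13, 5, 6};
        insertionSort(a);
        System.out.println(Arrays.toString(a)); // [5, 6, 11, 12, 13]
    }
}
```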
Heap sort — a comparison-based sorting technique built on the binary heap data structure. It is similar to selection sort in that we repeatedly find the maximum element and place it at the end, then repeat the process for the remaining elements. Best, average, and worst case O(n log n) time; O(1) auxiliary space.
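A sketch of that process in Java: build a max-heap in the array itself, then repeatedly swap the maximum (the root) to the end and restore the heap on the shrinking prefix:

```java
import java.util.Arrays;

// Heap sort: in-place max-heap, then repeated "extract max to the end".
class HeapSortDemo {
    static void heapSort(int[] a) {
        int n = a.length;
        for (int i = n / 2 - 1; i >= 0; i--)  // heapify the whole array
            siftDown(a, i, n);
        for (int end = n - 1; end > 0; end--) {
            int t = a[0]; a[0] = a[end]; a[end] = t; // max moves to the end
            siftDown(a, 0, end);                     // restore heap on a[0..end-1]
        }
    }

    // Push a[root] down until the max-heap property holds within a[0..size-1].
    static void siftDown(int[] a, int root, int size) {
        while (2 * root + 1 < size) {
            int child = 2 * root + 1;
            if (child + 1 < size && a[child + 1] > a[child]) child++;
            if (a[root] >= a[child]) return;
            int t = a[root]; a[root] = a[child]; a[child] = t;
            root = child;
        }
    }

    public static void main(String[] args) {
        int[] a = {4, 10, 3, 5, 1};
        heapSort(a);
        System.out.println(Arrays.toString(a)); // [1, 3, 4, 5, 10]
    }
}
```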
Merge sort — an external-friendly algorithm based on the divide and conquer strategy: split the array in half, sort each half recursively, and merge the two sorted halves. Best, average, and worst case O(n log n) time; O(n) auxiliary space, and it is stable.
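A minimal merge sort sketch in Java, returning a new sorted array (an in-place variant is also possible but longer):

```java
import java.util.Arrays;

// Merge sort: divide in half, sort each half, merge the sorted halves.
class MergeSortDemo {
    static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a;                  // base case
        int mid = a.length / 2;                       // divide
        int[] left = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        int[] out = new int[a.length];                // combine (merge)
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)   // take the smaller head
            out[k++] = left[i] <= right[j] ? left[i++] : right[j++];
        while (i < left.length) out[k++] = left[i++]; // drain leftovers
        while (j < right.length) out[k++] = right[j++];
        return out;
    }

    public static void main(String[] args) {
        int[] sorted = mergeSort(new int[]{38, 27, 43, 3, 9});
        System.out.println(Arrays.toString(sorted)); // [3, 9, 27, 38, 43]
    }
}
```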
7. Explain the difference between mutable and immutable objects.
Mutability and immutability are core principles in many programming paradigms.
Mutable types have individual identities and are typically compared by reference: variables hold references to objects, i.e. they point to an object's memory location. Immutable types are treated as values and are compared by value — by their contents rather than by the location being referenced. (The exact comparison rules vary by language.)
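A small Java sketch of the distinction, using the mutable `StringBuilder` against the immutable `String`:

```java
// StringBuilder is mutable: the same object changes, and the change is
// visible through every reference to it. String is immutable: an "update"
// builds a new object and the original is untouched.
class MutabilityDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("ab");
        StringBuilder alias = sb;     // second reference to the SAME object
        sb.append("c");               // mutation visible through both names
        System.out.println(alias);    // abc

        String s = "ab";
        String t = s.concat("c");     // builds a NEW String; s is untouched
        System.out.println(s);        // ab
        System.out.println(t);        // abc
    }
}
```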
8. What are the three laws of algorithm recursion? Describe them in your own words and what they mean to you.
1. A recursive algorithm must have a base case.
- The base case is typically a problem small enough to solve directly. In the listsum algorithm, the base case is a list of length 1.
2. A recursive algorithm must change its state and move toward the base case.
- Some data the algorithm is using is modified on each call; usually the data that represents our problem gets smaller in some way. In the listsum algorithm the primary data structure is a list, so we must focus our state-changing efforts on the list.
3. A recursive algorithm must call itself, recursively.
- You have learned that functions are good because you can take a large problem and break it up into smaller problems, solving each with its own function. When we talk about recursion it may seem that we are talking ourselves in circles: we have a problem to solve with a function, but that function solves the problem by calling itself! The logic is not circular at all; the logic of recursion is an elegant expression of solving a problem by breaking it down into smaller and easier problems.
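The three laws are visible in a minimal recursive listsum, sketched here over an int array (the original example uses a list; the Java translation is my own):

```java
// Recursive sum of nums[start..end], annotated with the three laws.
class ListSumDemo {
    static int listSum(int[] nums, int start) {
        if (start == nums.length - 1) {  // Law 1: base case — a length-1 "list"
            return nums[start];
        }
        // Law 2: state moves toward the base case (start advances each call).
        // Law 3: the algorithm calls itself recursively.
        return nums[start] + listSum(nums, start + 1);
    }

    public static void main(String[] args) {
        System.out.println(listSum(new int[]{1, 3, 5, 7, 9}, 0)); // 25
    }
}
```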