Hi! Today's lesson will be slightly different from the rest: it is only indirectly related to Java. That said, this topic is very important for every programmer. We're going to talk about algorithms. What is an algorithm? In simple terms, it is a sequence of actions that must be completed to achieve a desired result. We use algorithms often in everyday life. For example, every morning you have a specific task: go to school or work, and at the same time be:
• Clothed
• Clean
• Fed
What algorithm lets you achieve this result?
1. Wake up using an alarm clock.
2. Take a shower and wash yourself.
3. Make breakfast and some coffee or tea.
4. Eat.
5. If you didn't iron your clothes the previous evening, then iron them.
6. Get dressed.
7. Leave the house.
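To connect this to programming right away, the same morning routine could be sketched in Java as an ordinary sequence of method calls with one conditional step (all the method names here are invented for illustration):

```java
public class MorningRoutine {
    public static void main(String[] args) {
        wakeUp();          // step 1
        takeShower();      // step 2
        makeBreakfast();   // step 3
        eat();             // step 4
        if (!clothesAreIroned()) {
            ironClothes(); // step 5 runs only when needed
        }
        getDressed();      // step 6
        leaveHouse();      // step 7
    }

    // Stub implementations so the sketch compiles and runs
    static void wakeUp()              { System.out.println("Waking up"); }
    static void takeShower()          { System.out.println("Showering"); }
    static void makeBreakfast()       { System.out.println("Making breakfast"); }
    static void eat()                 { System.out.println("Eating"); }
    static boolean clothesAreIroned() { return false; }
    static void ironClothes()         { System.out.println("Ironing"); }
    static void getDressed()          { System.out.println("Getting dressed"); }
    static void leaveHouse()          { System.out.println("Leaving the house"); }
}
```

Note that step 5 is a branch: an algorithm isn't always a straight line of actions; it can include decisions.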
This sequence of actions will definitely let you get the desired result. In programming, we are constantly working to complete tasks. A significant portion of these tasks can be performed using algorithms that are already known. For example, suppose your task is this: sort an array of 100 names alphabetically. This task is quite simple, but it can be solved in different ways. Here is one possible solution:
Algorithm for sorting names alphabetically:
1. Obtain a dictionary that lists names in alphabetical order.
2. Find every name from our list in this dictionary.
3. On a piece of paper, write the page of the dictionary on which the name is located.
4. Use the pieces of paper to sort the names.
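For contrast, here is what the efficient solution looks like in Java: the standard library's `Arrays.sort` method sorts strings alphabetically in a single call (a small sample array stands in for the 100 names):

```java
import java.util.Arrays;

public class SortNames {
    public static void main(String[] args) {
        // A few sample names instead of the full list of 100
        String[] names = {"Mary", "Adam", "Zoe", "Brian"};

        Arrays.sort(names); // sorts lexicographically, no paper dictionary required
        System.out.println(Arrays.toString(names)); // prints [Adam, Brian, Mary, Zoe]
    }
}
```

Now, back to the dictionary-based algorithm above.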
Will such a sequence of actions accomplish our task? Yes, it certainly will. Is this solution efficient? Hardly. Here we come to another very important aspect of algorithms: their efficiency. There are several ways to accomplish this task, but both in programming and in ordinary life, we want to choose the most efficient one. If your task is to make a buttered piece of toast, you could start by sowing wheat and milking a cow. But that would be an inefficient solution: it would take a lot of time and cost a lot of money. You can accomplish this simple task by just buying bread and butter. Though it does let you solve the problem, the algorithm involving wheat and a cow is too complex to use in practice.

In programming, we have special notation called big O notation to assess the complexity of algorithms. Big O makes it possible to assess how much an algorithm's execution time depends on the input data size. Let's look at the simplest example: data transfer.

Imagine that you need to send some information in the form of a file over a long distance (for example, 5000 miles). What algorithm would be most efficient? It depends on the data you are working with. For example, suppose we have an audio file that is 10 MB. In this case, the most efficient algorithm is to send the file over the Internet. It shouldn't take more than a couple of minutes!

Let's restate our algorithm: "If you want to transfer information in the form of files over a distance of 5000 miles, you should send the data via the Internet". Excellent. Now let's analyze it. Does it accomplish our task? Well, yes, it does. But what can we say about its complexity? Hmm, now things are getting more interesting. The fact is that our algorithm is very dependent on the input data, namely, the size of the files. If we have 10 megabytes, then everything is fine. But what if we need to send 500 megabytes? 20 gigabytes? 500 terabytes? 30 petabytes? Will our algorithm stop working?
No, all these amounts of data can indeed be transferred. Will it take longer? Yes, it will! Now we know an important feature of our algorithm: the larger the amount of data to send, the longer the algorithm will take to run. But we would like a more precise understanding of this relationship between the input data size and the time required to send it. In our case, the algorithmic complexity is linear. "Linear" means that as the amount of input data increases, the time it takes to send it increases approximately proportionally. If the amount of data doubles, then the time required to send it doubles. If the data increases by 10 times, then the transmission time increases by 10 times. Using big O notation, the complexity of our algorithm is expressed as O(n). You should remember this notation for the future: it is always used for algorithms with linear complexity.

Note that we are not talking about several things that may vary here: Internet speed, our computer's computational power, and so on. When assessing the complexity of an algorithm, it simply doesn't make sense to consider these factors; in any event, they are outside our control. Big O notation expresses the complexity of the algorithm itself, not the "environment" that it runs in.

Let's continue with our example. Suppose that we ultimately learn that we need to send files totaling 800 terabytes. We can, of course, accomplish our task by sending them over the Internet. There's just one problem: at a standard household data transmission rate (100 megabits per second), it will take approximately 740 days. More than 2 years! :O Our algorithm is obviously not a good fit here. We need some other solution!

Unexpectedly, IT giant Amazon comes to our rescue! Amazon's Snowmobile service lets us load a large amount of data into mobile storage and then deliver it to the desired address by truck! So, we have a new algorithm!
"If you want to transfer information in the form of files over a distance of 5000 miles and doing so would take more than 14 days via the Internet, you should send the data on an Amazon truck." We chose 14 days arbitrarily here; let's say this is the longest period we can wait.

Let's analyze our algorithm. What about its speed? Even if the truck travels at only 50 mph, it will cover 5000 miles in just 100 hours. That's a little over four days! This is much better than the option of sending the data over the Internet. And what about the complexity of this algorithm? Is it also linear, i.e. O(n)? No, it is not. After all, the truck doesn't care how heavily you load it: it will still drive at about the same speed and arrive on time. Whether we have 800 terabytes, or 10 times that, the truck will still reach its destination within 5 days.

In other words, the truck-based data transfer algorithm has constant complexity. Here, "constant" means that it does not depend on the input data size. Put a 1 GB flash drive in the truck, and it will arrive within 5 days. Put in disks containing 800 terabytes of data, and it will still arrive within 5 days. In big O notation, constant complexity is denoted by O(1).

We've become familiar with O(n) and O(1), so now let's look at more examples in the world of programming :) Suppose you are given an array of 100 numbers, and the task is to display each of them on the console. You write an ordinary `for` loop that performs this task:
```java
int[] numbers = new int[100];
// ...fill the array with numbers

for (int i : numbers) {
    System.out.println(i);
}
```
What is the complexity of this algorithm? Linear, i.e. O(n). The number of actions that the program must perform depends on how many numbers are passed to it. If there are 100 numbers in the array, there will be 100 actions (statements that display strings on the console). If there are 10,000 numbers in the array, then 10,000 actions must be performed. Can our algorithm be improved in any way? No. No matter what, we will have to visit all N elements of the array and execute N statements to display strings on the console. Consider another example.
```java
public static void main(String[] args) {
    LinkedList<Integer> numbers = new LinkedList<>();

    // insert each number at the beginning of the list
    numbers.addFirst(1);
    numbers.addFirst(2);
    numbers.addFirst(3);
}
```
We have an empty `LinkedList` into which we insert several numbers. We need to evaluate the algorithmic complexity of inserting a single number into the `LinkedList` in our example, and how it depends on the number of elements in the list. The answer is O(1), i.e. constant complexity. Why? Note that we insert each number at the beginning of the list. In addition, you will recall that when you insert a number into a `LinkedList`, the elements don't move anywhere; only the links (references) between them are updated (if you forgot how `LinkedList` works, look at one of our old lessons). If the first number in our list is `x`, and we insert the number `y` at the front of the list, then all we need to do is this:
```java
x.previous = y;
y.next = x;
```
When we update the links, we don't care how many numbers are already in the `LinkedList`, whether one or one billion. The complexity of the algorithm is constant, i.e. O(1).
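To make the link update concrete, here is a minimal hand-rolled doubly linked node (this is a sketch of the idea, not the real `java.util.LinkedList` internals): inserting at the front touches a fixed number of references, no matter how long the list already is.

```java
public class FrontInsertDemo {
    // A minimal doubly linked node, just to illustrate the link updates
    static class Node {
        int value;
        Node previous;
        Node next;
        Node(int value) { this.value = value; }
    }

    // Insert y in front of the current head x: a constant number of operations, O(1)
    static Node insertAtFront(Node x, Node y) {
        y.next = x;
        if (x != null) {
            x.previous = y;
        }
        return y; // y is the new head
    }

    public static void main(String[] args) {
        Node head = null;
        for (int i = 1; i <= 3; i++) {
            head = insertAtFront(head, new Node(i));
        }
        // The list now reads 3, 2, 1 from the front
        for (Node n = head; n != null; n = n.next) {
            System.out.println(n.value);
        }
    }
}
```

Whether the list behind `x` holds three nodes or a billion, `insertAtFront` performs the same two or three reference assignments.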