
🛠️ The Foundation: Custom Sorting

Nearly every greedy algorithm begins with a crucial step: sorting. But we rarely use a simple
numerical sort. Instead, we sort based on a special rule or a "custom comparison" that reflects
our greedy strategy.

The key is to write a comparator function. This function takes two items, a and b , and
decides which one should come first in the sorted list.

The Golden Rule of Comparators: Your function should return true if item a must come
before item b in the sorted order, and false otherwise.

Example: A Multi-layered Sort


Let's say you want to sort an array with a complex rule: all even numbers must come first,
sorted in descending order, followed by all odd numbers, sorted in ascending order.

For an input like [4, 5, 3, 6, 2, 1, 8, 7], the output should be [8, 6, 4, 2, 1, 3, 5, 7].

How do we write a comparator for a and b ?

1. Check Parity First: The most important rule is whether the numbers are even or odd.
If a is even and b is odd, a comes first. Return true .
If a is odd and b is even, b comes first. Return false .
2. Handle Same Parity: If both numbers are the same parity (both even or both odd), we
move to the next rule.
If both are even, we sort them descending. So, a comes first if a > b . Return a > b .
If both are odd, we sort them ascending. So, a comes first if a < b . Return a < b .

This logic ensures that any pair of numbers is sorted correctly according to our rules.
Understanding how to build these layers of logic is key to implementing complex greedy
strategies.
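
To make this concrete, here is a minimal sketch of that comparator written as a named function in C++ (the function name evenFirst and the sample input are just illustrative):

#include <bits/stdc++.h>
using namespace std;

// Returns true if a must come before b:
// evens before odds; evens descending; odds ascending.
bool evenFirst(int a, int b) {
    bool aEven = (a % 2 == 0);
    bool bEven = (b % 2 == 0);
    if (aEven != bEven)   // different parity: the even one goes first
        return aEven;
    if (aEven)            // both even: larger value first (descending)
        return a > b;
    return a < b;         // both odd: smaller value first (ascending)
}

int main() {
    vector<int> v = {4, 5, 3, 6, 2, 1, 8, 7};
    sort(v.begin(), v.end(), evenFirst);
    for (int x : v) cout << x << " ";   // prints: 8 6 4 2 1 3 5 7
    cout << endl;
    return 0;
}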

⚡ Supercharging Sorts: Lambda Functions in C++


Instead of writing a separate, named function for our comparator, modern C++ lets us use
lambda functions. A lambda is an anonymous, inline function you define right where you need
it. This makes your code more compact and easier to read, as the sorting logic is right next to
the sort call.

The basic syntax of a lambda looks like this: [capture_clause](parameters) { body } .

[] : The capture clause. For sorting, this is almost always empty.


(parameters) : The two elements to compare, e.g., (int a, int b) .
{ body } : The comparison logic, same as in a regular function.

C++ Example with sort


Here’s how to use a lambda for the complex sort from the previous section. Notice how we write
the entire logic inside the sort call!

#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> vec = {6, 7, 3, 1, 3, 4, 5, 7, 8};
    sort(vec.begin(), vec.end(), [](int i, int j) {
        // Check parity (even/odd)
        int pi = i % 2;
        int pj = j % 2;
        if (pi == pj) {
            if (pi == 0)
                return i > j;   // both even: sort descending
            else
                return i < j;   // both odd: sort ascending
        } else {
            // different parity: evens come before odds
            return pi == 0;
        }
    });
    for (int num : vec) {
        cout << num << " ";    // prints: 8 6 4 1 3 3 5 7 7
    }
    cout << endl;
    return 0;
}
🧠 Greedy Algorithms in Competitive Programming
Welcome, future champions! This guide is for you, the talented competitive programmers of the
Menofia community. Mastering greedy algorithms is a huge step in your ICPC journey, and this
document will help you understand the core reasoning behind them.

A greedy algorithm builds up a solution piece by piece, always choosing the option that looks
best at the moment. It makes a locally optimal choice hoping it will lead to a globally optimal
solution. While this simple approach doesn't work for every problem, when it does, it's often the
fastest and most elegant solution.

Here, we'll master three key techniques to prove a greedy algorithm is correct:

1. Stay Ahead
2. Exchange Argument
3. Structural Argument

Let's get started! 🚀

🚀 1. Stay Ahead
The "Stay Ahead" argument is a proof technique where you show that your greedy solution is
always at least as good as any other optimal solution at every single step. If your greedy
solution is always "in the lead," it must end up being optimal.

How it Works:
1. Define a Measure: Pick a metric to compare your greedy solution against any other optimal
solution.
2. Show the Greedy Choice is Best: Prove that the first greedy choice puts you in the best
possible position according to your measure.
3. Use Induction: Argue that after any number of steps, the greedy solution remains "ahead"
of any other solution.
4. Conclude Optimality: If the greedy solution is always ahead, it's guaranteed to be optimal.

Classic Example: Activity Selection Problem


Problem: Given a set of activities, each with a start time s_i and finish time f_i , select the
maximum number of non-overlapping activities.
Greedy Strategy: Sort activities by their finish times in ascending order. Pick the first activity.
Then, iterate through the rest and pick the next activity that doesn't overlap with the last one
you chose.

Stay Ahead Proof:


Our greedy choice is to pick the activity that finishes earliest. This choice leaves the maximum
possible time for all remaining activities. We can show, by induction, that at any step k, the
finish time of the k-th activity chosen by the greedy algorithm is less than or equal to the finish
time of the k-th activity in any other valid schedule. By always staying "ahead" and finishing
earlier, the greedy algorithm can fit in at least as many activities as any other schedule, so it is
optimal.
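
To make the strategy concrete, here is a minimal C++ sketch, assuming each activity is given as a (start, finish) pair; the activity values are illustrative:

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Each activity is a (start, finish) pair; values here are illustrative.
    vector<pair<int,int>> acts = {{1, 4}, {3, 5}, {0, 6}, {5, 7}, {3, 9}, {5, 9}, {6, 10}, {8, 11}};

    // Greedy: sort by finish time, ascending.
    sort(acts.begin(), acts.end(), [](const pair<int,int>& a, const pair<int,int>& b) {
        return a.second < b.second;
    });

    int count = 0;
    int lastFinish = INT_MIN;
    for (const auto& a : acts) {
        if (a.first >= lastFinish) {   // doesn't overlap with the last chosen activity
            ++count;
            lastFinish = a.second;
        }
    }
    cout << count << endl;   // maximum number of non-overlapping activities
    return 0;
}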

🤝 2. Exchange Argument
The "Exchange Argument" is a powerful proof technique. The idea is to assume there's a better
solution than the greedy one and then show that this is impossible by making "exchanges" that
transform it into the greedy solution without making it worse.

How it Works:
1. Assume a Better Solution: Start by assuming an optimal solution exists that is different
from your greedy solution.
2. Find an "Inversion": Find the first place where this optimal solution violates the greedy
rule.
3. Make an "Exchange": Swap elements in the assumed optimal solution to make it more like
the greedy solution.
4. Show Improvement: Prove that this swap leads to a better or equally good result. This
creates a contradiction, proving the initial solution couldn't have been optimal.
5. Conclude Optimality: Since any non-greedy solution can be improved, the greedy one
must be optimal.

Classic Example: Minimum Dot Product


Problem: Given two arrays, A and B , of the same size, permute them to minimize the sum of
products: S = Σ (a_i * b_i) .

Greedy Strategy: To minimize the sum, multiply the smallest numbers from one array with the
largest from the other. The strategy is to sort one array (A) in ascending order and the other
(B) in descending order.
Exchange Argument Proof:
Assume there is an optimal pairing that isn't the greedy one. This means that, with A sorted in
ascending order, B is not sorted in descending order. So there must be at least two pairs,
(a_i, b_i) and (a_j, b_j), where a_i < a_j but b_i < b_j.

Let's check the sum for this part: Sum_before = a_i*b_i + a_j*b_j .
Now, let's swap them to follow the greedy rule: Sum_after = a_i*b_j + a_j*b_i .
The difference is Sum_after - Sum_before = (a_i - a_j) * (b_j - b_i) .
Since a_i < a_j , the first term (a_i - a_j) is negative.
Since b_i < b_j , the second term (b_j - b_i) is positive.
A negative number times a positive number is negative. So, Sum_after < Sum_before .

The swap reduced the total sum, contradicting our assumption that the original pairing was
optimal. Therefore, the greedy pairing is optimal.
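
Here is a minimal C++ sketch of this strategy; the array values are illustrative:

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Two arrays of the same size (illustrative values).
    vector<long long> A = {1, 3, 5};
    vector<long long> B = {4, 2, 6};

    // Greedy: sort A ascending and B descending, then pair element-wise.
    sort(A.begin(), A.end());
    sort(B.begin(), B.end(), greater<long long>());

    long long sum = 0;
    for (size_t i = 0; i < A.size(); ++i)
        sum += A[i] * B[i];

    cout << sum << endl;   // minimum possible value of S = sum(a_i * b_i)
    return 0;
}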

🏛️ 3. Structural Argument
This method is less about a formal template and more about deeply understanding the
problem's underlying structure. Problems solvable by greedy algorithms often have special
properties that, if you can identify them, serve as your proof.

Key Structural Properties:


Greedy Choice Property: A global optimum can be reached by making locally optimal choices.
The first choice made by the greedy algorithm must be part of some optimal
solution.
Optimal Substructure: An optimal solution to the whole problem contains optimal solutions
to its subproblems.

How to Use It:


1. Analyze the Problem: Break down its constraints and goals.
2. Formulate a Greedy Hypothesis: Come up with a greedy strategy.
3. Identify Key Properties: Argue that your problem has the greedy choice property and
optimal substructure.
4. Justify Your Strategy: Use these properties to explain why your local choices lead to a
global optimum.

Classic Example: Assign Cookies


Problem: You have a list of children, each with a "greed factor" which is the minimum size
cookie they'll accept. You also have a list of cookies with different sizes. Your goal is to make
the maximum number of children happy.

Greedy Strategy: Sort both the greed factors and the cookie sizes in ascending order. Then,
going through the children from least to most greedy, give each child the smallest remaining
cookie that satisfies them.

Structural Argument:
The structure of this problem is all about matching the smallest sufficient cookie to the smallest
demand.

Greedy Choice Property: Consider the least greedy child. If some optimal solution satisfies
that child with a larger cookie than necessary, you can swap in the smallest cookie that still
satisfies them; every other child keeps a cookie at least as large as before, so no one becomes
unhappy. This proves the greedy choice is part of some optimal solution.
Optimal Substructure: After you satisfy (or skip) the least greedy child, the remaining
problem—matching the remaining cookies to the remaining children—is just a smaller version
of the original problem.

By understanding the problem's structure, you can confidently conclude that the greedy
approach works.
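
Here is a minimal C++ sketch of this strategy, using illustrative greed factors and cookie sizes:

#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> greed   = {1, 2, 3};   // greed factor of each child (illustrative)
    vector<int> cookies = {1, 1, 3};   // cookie sizes (illustrative)

    // Greedy: sort both ascending, then match each child
    // with the smallest remaining cookie that satisfies them.
    sort(greed.begin(), greed.end());
    sort(cookies.begin(), cookies.end());

    int child = 0;
    for (int c = 0; c < (int)cookies.size() && child < (int)greed.size(); ++c) {
        if (cookies[c] >= greed[child])   // this cookie satisfies the current child
            ++child;
    }
    cout << child << endl;   // maximum number of happy children (here: 2)
    return 0;
}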
